
SANs and Excessive Disk Utilization?

Cliff posted more than 7 years ago | from the tweaks-and-performance-improvements dept.

Data Storage 83

pnutjam asks: "I work for a small-to-medium mental health company as the Network Administrator. While I think a SAN is a bit of overkill for our dozen servers, it was here when I got here. We currently boot 7 servers from our SAN, which houses all of their disks. Several of them have started to show excessive disk load, notably our SQL server and our old domain controller (which is also the file/print server). I am in the process of separating our file/print server from our domain controller, but right now I get excessive disk load in the morning when people log on (we use roaming profiles). I think the disks need to be defragged, but should this be done on the servers, or on the SAN itself? When it comes to improving performance, I get conflicting answers when I ask whether I would get better throughput from newer fibre channel cards (ours are PCI-X; PCI-e is significantly faster), from mixing in some local disks, or from using multiple fibre channel cards. Has anyone dealt with a similar situation, or does anyone have expertise in this area?"


No Acronyms! (-1, Redundant)

rueger (210566) | more than 7 years ago | (#18441583)

And "SAN" means what? Don't ever assume that everyone else knows your acronym. Spell it out the first time that you use it.

Storage Area Network (2, Informative)

haplo21112 (184264) | more than 7 years ago | (#18441605)

For those not in the know: a SAN is typically cabled to the machines via fibre connections.

Re:No Acronyms! (1)

W2k (540424) | more than 7 years ago | (#18441609)

Storage Area Network. You could have just googled it. It's not an uncommon term.

Re:No Acronyms! (4, Informative)

Anonymous Coward | more than 7 years ago | (#18441637)

If you don't know what a SAN is, and are too lazy to consult Google, then why post? He's asking for someone who might be able to help, not trying to teach a lesson.


The whole SAN part is a red herring. He just has a storage area network (presumably Fibre Channel, as opposed to iSCSI), which is just a means of connecting servers to storage enclosures. The storage protocol is still SCSI; it's just over a different transport layer.

In other words, he has multiple servers connected to a single storage enclosure, and he's seeing capacity and performance issues.

The disks should be considered just like internal disks: defrag from the respective servers.

I would bet that his problem is simply having insufficient disks (spindles) to serve the morning peak workload... just like if you had a few internal disks.

In short:
- Defrag from each server, if you have a fragmentation issue
- Add more disks to spread the workload out
- Consider leaving the boot disks in each server, and just put data on the SAN. One main reason is that swapping to the SAN can be a problem by consuming storage enclosure cache (presuming there is any)

Re:No Acronyms! (1)

Stephen Samuel (106962) | more than 7 years ago | (#18552395)

If you don't know what a SAN is, and are too lazy to consult Google, then why post? He's asking for someone who might be able to help, not trying to teach a lesson.
There are two reasons (that I can think of) why Slashdot posts 'Ask Slashdot' questions. One is to get an answer for the original poster (a minor point). The other is so that the other million or so readers have a chance to show off their knowledge and/or learn something new.

The latter is actually a lot more important (for the site as a whole) than the former.

Re:No Acronyms! (0)

Anonymous Coward | more than 7 years ago | (#18441763)

you are very, VERY right - but in this case, don't you think someone who doesn't even know the acronym is going to be pretty fucking useless as far as helping goes?

Re:No Acronyms! (1)

CastrTroy (595695) | more than 7 years ago | (#18441787)

If you don't know what SAN stands for, then you probably aren't qualified to give advice on the subject. If I ask a question about something I'd rather only have people give answers they are qualified to give, rather than a bunch of people pulling ideas out of thin air.

Re:No Acronyms! (0)

Anonymous Coward | more than 7 years ago | (#18442803)

"If I ask a question about something I'd rather only have people give answers they are qualified to give, rather than a bunch of people pulling ideas out of thin air."

Me too, but this is "ASK SLASHDOT" after all.

Re:No Acronyms! (1)

vtcodger (957785) | more than 7 years ago | (#18441897)

***And "SAN" means what? Don't ever assume that everyone else knows your acronym. Spell it out the first time that you use it.***

Yeah, that'd be good style. But unless you intend to undertake an editorial Jihad on Slashdot and strafe every article that makes slips like this, this is probably not a good place to start. It's a request for specific information in a specialized area. Seems likely that anyone who can contribute anything useful will know what SAN means.

Don't let me discourage you from making useful editorial comments. I'm only suggesting that Slashdot being what it is, you're probably going to have to target them at the really egregious stuff if you plan to retain your sanity.

(SAN="Storage Area Network" ... I think)

Who's San Box is it? (4, Informative)

haplo21112 (184264) | more than 7 years ago | (#18441587)

You might want to consider calling the maker for technical support. Some SAN devices require defrag at the box level, and doing it from the server will adversely affect your data. With others, it's OK either way.

Re:Who's San Box is it? (1, Informative)

Anonymous Coward | more than 7 years ago | (#18441745)

Defrag at the box level? Methinks you don't know what you're talking about*

Disk enclosures, from low end to high end, simply employ differing RAID levels to present logical disks as sequential block extents (ranges). A disk enclosure is not in the business of layout... block 001 is immediately before block 002 (although might be RAID-1 or RAID-5 or RAID-6 on the backend).

Also, to the submitter's question: throughput is rarely the issue with SANs; 1Gb or more is more than adequate for most apps. The bottleneck is probably the disks themselves; they can only spin so fast. Look at the disk throughput metrics in Windows PerfMon.

Sometimes there is a mismatch between IO size and stripe width (reads/writes larger than the stripe width, meaning that each server IO results in multiple disk IOs), but don't think about this now.

Either add more disks or add more memory to reduce the amount of disk IO. Alternatively, spread the data out. Also, if you're doing database stuff, make sure logs and data are on different disks. Look at whether you are using mirroring or RAID5, and whether this is done in the enclosure or by Windows.

* The only time this could be true is if you're using so-called "thin provisioning", which is pretty uncommon and unlikely for the customer description. Thin provisioning systems work in a log-structured manner to create LUNs out of non-sequential storage, so you don't use up storage that isn't used.

Re:Who's San Box is it? (0)

Anonymous Coward | more than 7 years ago | (#18442015)

Anonymous here because I already moderated comments before disagreeing with parent.

Well, yes, you have to pay attention to how the data is spread in the enclosure, because SAN systems have the function of exposing a seemingly raw linear device while in effect spreading the data according to their own secret recipe(tm) on RAID/XYZ arrays.

So yes, I would consult with your vendor before going the defrag way, mostly because what your system sees is most probably not the physical reality.
And maybe file fragmentation is not your problem, either.

P.S. to parent: you're either ignorant or a troll.

Re:Who's San Box is it? (1)

haplo21112 (184264) | more than 7 years ago | (#18446847)

Neither; I just oversimplified for the purposes of the conversation. A more in-depth explanation wasn't required, in my opinion. The point I was trying to make was: call the company that made the equipment, because if you do the wrong thing you might be up the creek real quick.

Re:Who's San Box is it? (1)

lanswitch (705539) | more than 7 years ago | (#18441793)

Windows network, right? I don't know much about that, but let's try.
The problems started after splitting the PDC and the file/print sharing. Where are the profiles stored, on the PDC or on the file/print server? Try moving them to the server that really needs them at login time.

Re:Who's San Box is it? (1)

pnutjam (523990) | more than 7 years ago | (#18441927)

The problem is ongoing, we are trying to split the PDC from the file/print server to address the problem. That hasn't been accomplished yet.

Re:Who's San Box is it? (1)

pnutjam (523990) | more than 7 years ago | (#18441955)

I've talked to three people who are either certified on this product or work for the manufacturer. I got three different answers with little in the way of explanation.

Re:Who's San Box is it? (1)

sammy baby (14909) | more than 7 years ago | (#18442651)

I've talked to three people who are either certified on this product or work for the manufacturer. I got three different answers with little in the way of explanation.

Then I have two words for you: "support ticket."

Open a ticket with their tech support. Tell them about the diagnostic steps you've taken. If possible, get someone to come out and examine the box or do maintenance on it for you.

Re:Who's San Box is it? (1)

pnutjam (523990) | more than 7 years ago | (#18588625)

I'm posting this reply since I found your information useful. It turns out our SAN had 97% of its disk space allocated to LUNs. These LUNs were nowhere near full, but the SAN still doesn't like this. Xiotech indicated their SANs start to degrade substantially after they become 80% allocated. We added a bay of disks and have seen a vast improvement in performance. Since our license doesn't allow us to use all the space we added, we shouldn't have this problem again.

Re:Who's San Box is it? (2, Interesting)

Sobrique (543255) | more than 7 years ago | (#18442271)

Really? I must admit, I'm surprised. A SAN is just a way to attach disks over SCSI. I've yet to see an array that allows you to defragment a volume (a host device). EMC Clariions will let you defrag RAID groups, but that doesn't do anything more than move the LUNs around on the physical devices.

Or are you perhaps thinking of NAS (Network attached storage) devices?

Re:Who's San Box is it? (1)

narf (207) | more than 7 years ago | (#18442905)

I've seen that option on some of the hybrid NAS/SAN boxes. For example, on NetApp filers, the command is called "wafl scan reallocate". You can tell it to reallocate by NetApp volume, or just a single LUN. The option exists because you can spread multiple volumes and/or LUNs over a single group of disks.

Re:Who's San Box is it? (1)

Wdomburg (141264) | more than 7 years ago | (#18443739)

SANs rarely use SCSI as an interconnect (even if they use SCSI drives). Most often it's Fibre Channel (as seems to be the case here), increasingly Ethernet.

But yeah, with a SAN you're talking about something that provides a block level interface to storage. Fragmentation is a filesystem level issue.

Re:Who's San Box is it? (2, Informative)

Sobrique (543255) | more than 7 years ago | (#18444657)

SCSI is the protocol, FC or ethernet is the transport. Those chunky big cables you may be thinking of, might be correctly referred to as 'Parallel SCSI', which might be the source of the confusion.

Re:Who's San Box is it? (1)

Wdomburg (141264) | more than 7 years ago | (#18449483)

Actually, no. SCSI is a group of standards, including a command set and signalling. The protocol would be SCSI-1, SCSI-2, SAS, iSCSI, etc. What you're talking about is the SCSI command set, which runs over a number of signalling protocols on a variety of physical interconnects. You spoke of "attaching" disks which implies (to me, at least) interconnect (from physical connection up through device signalling).

That seems like an odd configuration (3, Insightful)

hattmoward (695554) | more than 7 years ago | (#18441617)

I've always kept the system disks local so the server isn't dependent on the SAN connection to boot. That said, do you have this SAN configured as a single shared filesystem or as a group of raid containers that are isolated from one another and provisioned to a single server? If it's shared, I'd say you need to take all but one server down and defragment from that. If it's not shared, they can all defragment their private filesystems at once (though I'd only do one or two at a time anyway).

Re:That seems like an odd configuration (0)

Anonymous Coward | more than 7 years ago | (#18441893)

I work for a mental health provider as well. We boot our blade servers from a SAN.

There has been more than 1 instance where a blade server experienced a hardware failure and we were able to remotely boot up a spare blade by configuring the SAN to assign the system disk of the broken server to the spare server.

Re:That seems like an odd configuration (1)

hattmoward (695554) | more than 7 years ago | (#18445741)

True enough. Blade servers are basically designed to be expendable. I was guessing that these are varied servers with possibly differing hardware.

Re:That seems like an odd configuration (1)

Gr8Apes (679165) | more than 7 years ago | (#18441941)

This is definitely a good suggestion. I'd only keep data needing solid backups on the SAN. Systems should have their own boot disks and pagefile locations locally (lowers SAN utilization).

Then again, it truly sounds like he needs to review his SAN architecture. I'd probably have the DB on its own set of spindles, and have two domain controllers, with the primary being standalone and the secondary (and potentially a tertiary, if needed) doubling as print servers. Other than that, we'd need a lot more information to truly address how he should go about altering his configuration to handle the load(s).

Re:That seems like an odd configuration (1)

NetJunkie (56134) | more than 7 years ago | (#18442079)

No. Booting off the SAN is usually a good idea if done correctly. You can easily snapshot and replicate the boot drives. Have a server fail? No issue. Just point another server to that volume on the SAN and boot. Plus you can do an easy DR by replicating those server drives offsite to cold systems.

Re:That seems like an odd configuration (1)

Gr8Apes (679165) | more than 7 years ago | (#18442477)

I assumed it was a bad idea based on the little we could glean from the submission. I'm guessing he has one set of spindles divided up into multiple partitions. In that case, it's not a good idea.

Re:That seems like an odd configuration (0)

Anonymous Coward | more than 7 years ago | (#18443973)

Still, a small drive locally for the swap file makes sense if you're using one, especially if the disk utilization on the SAN spikes during times of heavy swap file usage. The swap file won't be reused after a reboot anyway. As I'm sure you know, it's often not a great option to run Windows without a swap file, since it likes to pre-swap to make more physical memory available.

Disk utilization can be dealt with a number of ways. Splitting up high-use, less-critical data from lower use, more critical data can make a big difference.

Not directly Windows or SAN related, but I once had a Linux mail server that was dog slow and thrashing. I inherited it from a small ISP my company bought, so my initial thought was just to upgrade the memory and maybe the processors. I took a look and figured out that wasn't the problem. It was doing SMTP spooling (in and out), user mailboxes, temporary files, and swap on the same physical disk. They had partitions, but just one spindle. I put in one drive for the mail spools and another for logging and mailboxes. It was a quick fix and the box worked like a champ. Once I had the tuits, I broke inbound SMTP (MX), outbound SMTP, and POP out onto separate boxes like I already had my stuff set up. All the new boxes had a local copy of the logs (also being remote-logged for redundancy) and the swap on a drive apart from the RAID set the data lived on. It was a good balance of cost vs. performance, since I didn't want to requisition drives just for swap. I built these systems with enough memory that swap was smallish and mostly used to let me know if I needed more RAM, so having it share a drive and go almost always completely unused didn't hurt anything.

Re:That seems like an odd configuration (1)

pnutjam (523990) | more than 7 years ago | (#18442971)

This is probably a huge part of our problem. The SAN was sold to my predecessors as a kind of "magic box". I've avoided taking any training on it because it seems so simple to administer. In hindsight this was a mistake and I should probably take the effort to learn more about how the physical architecture relates to the assigned disks.

What kind of SAN? (1)

AltGrendel (175092) | more than 7 years ago | (#18441629)

Are we talking an HP XP12000, or Bob's "Box-o-disks"?

Re:What kind of SAN? (3, Informative)

pnutjam (523990) | more than 7 years ago | (#18441987)

Xiotech Magnitude 3D 1000, with QLogic fibre channel cards.

Re:What kind of SAN? (2, Informative)

Animixer (134376) | more than 7 years ago | (#18446755)

Here are a few things to consider.

1. How many target ports on the Magnitude 3D? For that model I'm not sure, but they are probably 2Gbit each. Try to balance load across the ports via multipathing software or manual balancing (server A uses port 1, server B uses port 2, etc.).

2. What is your SAN switch topology? If hopping across ISLs, make sure that you have an adequate amount of trunked bandwidth between the switches.

3. What speed are your SAN switches? Using 1Gbit switches would bottleneck a lot faster than 2 or 4Gbit ones.

4. If possible, balance multipathing across independent fabrics, e.g. one port from each server to one fabric to one controller on the array. This helps for HA.

5. Make sure the port speeds for all links are operational at the fastest common speed (check that your 2Gbit links are actually running at 2Gbit, etc.).

6. Check your zoning on the fabrics. I find it best in practice to zone by WWN and make sure that each HBA port can only see the storage it should see, and nothing else. Yes, even if you have masked your LUNs on the array: no sense having the OS go poking around trying to make device files and whatnot for devices it can't or shouldn't use.

7. Which QLogic cards? QLA21xx are ancient, FC-AL only. QLA22xx are 1Gbit variants. QLA23xx are 2Gbit variants, the best being the QLA234x series. QLA24xx should all be 4Gbit.

8. Even a single 66MHz, 64-bit PCI slot should be more than enough bandwidth for a single-port 2Gbit card.

9. The Magnitude 3D is, if I remember correctly, an entry-level array. Performance on it may be limited depending on how many physical spindles are installed. Also check the array health; if running degraded you will see a performance hit. Higher-end arrays should handle failed disks better and not flinch, but I've not tried this on a Xiotech in practice. They may actually be quite good; I just don't know.
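The claim in item 8 can be sanity-checked with a bit of arithmetic. A minimal sketch (the helper names and the rough 8b/10b encoding factor are my own illustration, not anything stated in the thread):

```python
# Back-of-the-envelope check for item 8: does a 66 MHz, 64-bit PCI slot
# have enough bandwidth for a single-port 2 Gb/s Fibre Channel card?

def pci_bandwidth_mb_s(clock_mhz: float, bus_width_bits: int) -> float:
    """Theoretical peak PCI bandwidth in MB/s (ignores bus overhead)."""
    return clock_mhz * 1e6 * (bus_width_bits / 8) / 1e6

def fc_throughput_mb_s(line_rate_gb_s: float) -> float:
    """Rough usable FC throughput in MB/s; 1/2/4 Gb FC uses 8b/10b
    encoding, so usable payload is about line_rate * 0.8 / 8 bytes."""
    return line_rate_gb_s * 1e9 * 0.8 / 8 / 1e6

pci = pci_bandwidth_mb_s(66, 64)   # ~528 MB/s theoretical peak
fc = fc_throughput_mb_s(2)         # ~200 MB/s per direction

print(f"PCI 66/64 peak: {pci:.0f} MB/s, 2Gb FC: {fc:.0f} MB/s")
print("PCI slot sufficient" if pci > fc else "PCI slot is the bottleneck")
```

Even with real-world bus overhead, the slot has a couple of times the headroom a single 2Gb port needs, which is why a faster PCI-e card alone is unlikely to help here.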

Re:What kind of SAN? (1)

Lxy (80823) | more than 7 years ago | (#18448505)

Call Xiotech; they have some excellent SEs on staff. They can help you out far better than /. can.

One thing to keep in mind (1)

Degrees (220395) | more than 7 years ago | (#18454497)

The Xiotech machinery tends to run along fine ... right up until you cross the 90% full threshold. Then performance goes in the toilet, fast. Which is probably better than getting to 100% full and crashing hard....

Seconded on the suggestion to call Xiotech. They know their stuff and should be able to help you out.

It's kind of funny: I'm at Novell BrainShare, and my fourth session of the day was on how to diagnose poor server performance due to SAN congestion. In NetWare we have always had tools to measure how well the disk subsystem is coping, but I have no idea if Windows/Citrix can provide the same. I would think so, using the WMI interfaces, but I don't know.

We were talking about disk subsystem statistics, and a mainframe guy asked about it. As it turns out, they have way more detailed information about their disk subsystems than we do in the Intel world. The mainframe keeps track of how many nanoseconds it takes for the disk heads to position themselves, how many nanoseconds the File Open takes, how many milliseconds the Read takes, and how many nanoseconds the File Close takes. In the NetWare world, we can (easily) tell how many disk writes are pending, how many disk reads are pending, and how long ago a file read last missed the cache, but not a whole lot at such a low level. I have no idea where we would get those kinds of statistics in Linux.

Re:One thing to keep in mind (1)

pnutjam (523990) | more than 7 years ago | (#18458877)

Do you mean 90% full of data, or 90% allocated to LUNs?

Re:One thing to keep in mind (1)

Degrees (220395) | more than 7 years ago | (#18475877)

Oooof, I don't remember. I went to Xiotech training last summer, and I remember hearing "90% full is a problem" several times. 90% allocated to LUNs makes the most sense, because the Xiotech needs to deal with striping and parity data behind the scenes. To the server, it's just a hard disk with an (initial) long spin-up delay. I don't see the Xiotech caring whether the blocks being requested by the OS are near the front of the (virtual) disk or the end. So I'm going with option C: call Xiotech, and ask them to run their performance diagnostics on your SAN. They can take a snapshot of the current performance statistics and tell you if you need to buy more disk. They'll love to remote in and check out whether they should tell you to buy more disk. ;-)

Re:One thing to keep in mind (1)

pnutjam (523990) | more than 7 years ago | (#18588657)

I'm posting this reply since I found your information useful. It turns out our SAN had 97% of its disk space allocated to LUNs. These LUNs were nowhere near full, but the SAN still doesn't like this. Xiotech indicated their SANs start to degrade substantially after they become 80% allocated. We added a bay of disks and have seen a vast improvement in performance. Since our license doesn't allow us to use all the space we added, we shouldn't have this problem again. Thanks for your insight.

Glad to have been of help :-) (1)

Degrees (220395) | more than 7 years ago | (#18591739)

And thanks for reminding me that it is 80% full, not 90%.

Re:What kind of SAN? (0, Offtopic)

DaMattster (977781) | more than 7 years ago | (#18442421)

Personally, I like Bob's "Box-o-Discs." Easier, cheesier, and square . . . .

Re:What kind of SAN? (0)

Anonymous Coward | more than 7 years ago | (#18444677)

Or SAN points? (As in you lose them all while maintaining the beast.)

Answer #1 (0, Troll)

Anonymous Coward | more than 7 years ago | (#18441757)

Don't use windows.

After that, everything else should stop being wildly unpredictable.

Not enough information (5, Insightful)

PapaZit (33585) | more than 7 years ago | (#18441873)

First, what do you mean by excessive disk load? I'm not being facetious here. Do you mean that the SAN unit is pegged? How do you know that? Are the servers spending a lot of time waiting for I/O? Is the unit making loud noises? Or are the machines that are connected to the server just slow, without the processor being pegged?

Also, while "have you tried defragging?" is a common home troubleshooting tip, it's not clear how you came up with the idea that the SAN has to be defragged. If you have reasons and you're just simplifying to keep the post short, great. Defrag away according to the SAN manufacturer's recommendations. However, don't become obsessed with it unless you know that fragmentation's an issue.

You need to spend some time benchmarking the whole system. Figure out how much disk, processor, network IO, and SAN IO are being used. Know what percentage of the total that is. Figure out exactly which servers are causing performance problems at which times.

"Find the problem" is always the first step in "fix the problem."
Once you know what's going on, you can deal with the problem intelligently. Are all the servers booting at the same time? Give them different spindles to work from, or stagger the boot times. Are all of the users logging in at once? Figure out why that's slow (network speed, SAN, data size, etc.) and split the data across multiple servers and SANs, or improve the hardware.

If you can make the case with hard data that the SAN is swamped, you can probably pry money from management to fix the problem. However, guessing that it -might- be something won't get you very far. They don't want to spend $20k on a fix to be told, "Nope. It was something else."

Re:Not enough information (3, Interesting)

pnutjam (523990) | more than 7 years ago | (#18442333)

I use Big Sister to monitor all my servers. I get nice graphs that show memory, CPU, network load, disk utilization, etc. I looked and looked at these trying to find the cause of my problems. People complained about slow login times; sometimes they would get temporary profiles because their roaming profile would time out. They also complained about slow access times in our SQL-dependent EMR (Electronic Medical Records) system. All my graphs showed everything within an acceptable range.

I finally found an SNMP query for "disk load". This purports to be a percentage, but I've seen it show way over 100, sometimes as high as four or five hundred. If it gets above 50 or 60, people start to complain. My disk load spikes in the morning when people are logging in; it generally goes to about 80% or higher on my graphs. My SQL server doesn't have these problems, and I have yet to find a suitable way of monitoring the SQL log where I think the problem is originating.

Re:Not enough information (1)

LivinFree (468341) | more than 7 years ago | (#18444507)

Assuming you mean Microsoft SQL Server, check the Avg. Disk Queue Length metric to see if your bottleneck is on the server rather than the storage. On Linux, you'd find the disk queue length in /proc/scsi/qla*/#, where # differs based on the number of ports, HBAs, etc. There are sites with good lists of metrics to look at.

There's lots of tuning that can take place on the server side before you start re-striping. That being said, more spindles will likely help on the storage side.

A couple of things to point out:
Utilization Law:
(Utilization) = (Throughput) × (Service Time)

For utilization < 1:
(Queue Length) = (Arrival Rate) × (Service Time)

So if it takes 5ms service time per physical I/O, 2000 I/O operations could take up to 10 seconds. Listing a large directory share might do that to you, as Windows stat()s each directory / file, then starts traversing the filesystem *for you* to get those pretty "You have X objects in this folder taking Y MB of space" mouse-overs.
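The two relations above can be put into a tiny worked example. The numbers are illustrative, not measurements from the poster's system:

```python
# Worked example of the Utilization Law and queue-length relation above.
# All numbers are illustrative.

service_time_s = 0.005      # 5 ms service time per physical I/O
throughput_iops = 150       # I/Os per second the disk is serving

# Utilization Law: utilization = throughput x service time
utilization = throughput_iops * service_time_s
print(f"utilization: {utilization:.0%}")            # 75%

# For utilization < 1: queue length = arrival rate x service time
arrival_rate_iops = 120
queue_length = arrival_rate_iops * service_time_s
print(f"average queue length: {queue_length:.2f}")  # 0.60

# The burst from the comment: 2000 I/Os at 5 ms each, served serially
burst_duration_s = 2000 * service_time_s
print(f"2000 serial I/Os take {burst_duration_s:.0f} s")   # 10 s
```

Once arrival rate pushes utilization toward 1, the queue (and therefore latency) grows without bound, which is exactly what a morning login storm looks like.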

It may be any of a huge number of things. It could be a poorly laid out directory tree with 2 million objects in a single directory, or bad SQL db design, or a horribly written SQL script, or a combination.

In short, call your vendor and ask for help. If that doesn't do it, call your salesperson, complain, and be prepared to reluctantly accept professional services coming on-site for a fee, assuming they offer some contingency (i.e., if it doesn't work better, you don't have to pay).

Re:Not enough information (1)

TopSpin (753) | more than 7 years ago | (#18443937)

I have to agree; there is not enough information given to reach a credible conclusion. My read of the article and subsequent posts from pnutjam indicates that not enough data has been gathered. For instance, an SNMP query called "disk load" is too general to isolate specific performance bottlenecks.

Monitor and analyze a few common metrics on your servers. Physical Disk IO Bytes/sec can help you determine whether the FC HBAs are a bottleneck; a 2Gb/s HBA is good for (at most) 200MB/s in either direction. Are you actually seeing that rate? Is the server spending its time waiting for the SAN to catch up? Observe the queue length to find out. Transfers/sec: how many IOPS are your servers demanding? A modern SAS disk is good for perhaps 150 IOPS. This is an optimum; various RAID configurations will serve to lower it, and SAN device caches obscure it further. Nevertheless, if you observe ~3000 physical IOPS across 20 fast disks (20 × 150), then you might conclude you lack sufficient spindles for your workload.
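That spindle arithmetic can be sketched as a small helper. The ~150 IOPS per disk figure comes from the comment above; the RAID write-penalty parameter is my own addition for illustration, not something the commenter specified:

```python
import math

def spindles_needed(peak_iops: float, iops_per_disk: float = 150,
                    raid_write_penalty: float = 1.0) -> int:
    """Disks required to serve peak_iops.

    raid_write_penalty models extra back-end I/Os per front-end write
    (e.g. roughly 4 for RAID-5 small random writes); leave it at 1.0
    for a pure-read estimate.
    """
    return math.ceil(peak_iops * raid_write_penalty / iops_per_disk)

print(spindles_needed(3000))                        # 20 disks, as in the comment
print(spindles_needed(3000, raid_write_penalty=4))  # RAID-5-ish worst case: 80
```

The point of the exercise is that a measured IOPS demand close to (spindle count × per-disk IOPS) is strong evidence you need more disks, not faster HBAs.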

I dunno what "Big Sister" is, and I'd suggest you probably need to look beyond it to find the bottleneck(s). Frankly, I have yet to encounter IO performance problems with a small system like this that don't yield to the built-in monitoring tools your OS vendors provide; perfmon is sufficient for Windows, iostat (usually part of a package called sysstat) on Linux. You can also aggregate these figures remotely. Configure these tools to gather meaningful data, then watch and think hard.

BTW, a SAN isn't "overkill" for a dozen servers. Fault tolerance via clustering is only the most obvious reason why someone might employ a SAN among a dozen servers; not stranding large amounts of storage among multiple discrete, neglected SCSI devices is another. If you embrace and leverage the platform, you'll probably discover you miss it when, for whatever reason, you're no longer working with it.

You seem in over your head... (0)

Anonymous Coward | more than 7 years ago | (#18441883)

You don't mention what type of SAN you have. Is it Fibre Channel or iSCSI? It's likely FC, but it's really important to make the distinction.

Now, either type of SAN is still just a network; it connects servers to disks instead of computers to computers. The next thing is that the disk controller on the SAN will present LUNs to servers as disks. Unless you have storage virtualization, most disk controllers present LUNs from arrays, and the LUNs are contiguous and can't fragment. LUNs can, however, share the disks in an array, so if, for instance, all the LUNs are carved out of one array, you'll have major performance problems, because the slowest link in the system (the disks) has to work for every server. This would mean separating the LUNs onto separate disks should be a huge help.

The next thing is how the servers access the disks. Unless you have a SAN filesystem, each LUN will only be exposed to one server (if clustering is involved, only one server owns the disk at a time, unless you have some really fancy and expensive clustering software). So if you want to defragment the filesystem, you have to do it from the server, because the disk controller will not be aware of or care about the filesystem (again, unless you have something way overkill).

You don't give many of these details, so I would guess you haven't worked with SANs before. Not much advice can be given until you clarify how everything is set up.

stop guessing (1)

prgrmr (568806) | more than 7 years ago | (#18442011)

I can tell you that it's highly doubtful the problem is the speed of your fibre cards. If you are seeing latency at the host, it is likely that either you've got conflicting access to a number of the same drives in the storage server, or a problem with your fibre switch.

I can definitely say that without vendor make, model, and software version information, you're not likely to get much helpful information in this venue, and you probably ought to be going to the vendor for technical support.

Performance Troubleshooting (1)

HockeyPUcX (791205) | more than 7 years ago | (#18442129)

I would not assume the disks need to be defragged without other evidence; many components can affect the performance of SAN storage. PCI-e is a faster bus architecture, but it is unlikely to increase your performance unless you are running dual-port 4Gb/s host bus adapters (HBAs).

Some questions to answer: What type of storage array is it? What is the speed of your SAN switches, or are you using direct connect? What is the speed of the storage array's fibre channel ports? How many fibre connections does each server have, and how many connections does the array have? If your servers have multiple connections, are you using some type of load balancing? If you have SAN switches, have you looked at the per-port performance to ensure you are not hitting any bottlenecks on the switch?

In general, SAN boot adds complexity and can make the servers more difficult to manage. I typically only use SAN boot if the server is not capable of housing internal disks, or if it is needed for a disaster recovery scenario to replicate the boot drives to another site.

Another issue that can degrade performance is swap. If you have no internal drives and your servers have insufficient memory, they could be swapping out to the storage array, which can degrade performance. If you are running Windows 2003 you should look into the StorPort drivers, as those can increase performance, and if the servers have more than one connection you should look into Multi-Path I/O (MPIO), unless the array offers proprietary multi-pathing software.

defrag? (0)

Anonymous Coward | more than 7 years ago | (#18442163)

What is "defrag"?

Defrag explained (1)

tepples (727027) | more than 7 years ago | (#18442531)

What is "defrag"?

As a file grows, pieces of it may be strewn across the disk, causing the head to seek back and forth across the disk while reading it. This happens faster on some file systems than on others, and it happens faster on disks that are more than half full. Defragmentation [] assembles the pieces of each file into one piece for faster access. Some defrag programs can also put related files next to one another.
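The seek cost that fragmentation adds can be illustrated with a toy model; the block addresses below are invented, not taken from any real filesystem:

```python
# Toy model: total head travel reading a file laid out contiguously
# vs. scattered across the disk. Addresses are hypothetical block numbers.
import random

random.seed(42)

def head_travel(blocks):
    """Sum of seek distances between consecutive block addresses."""
    return sum(abs(b - a) for a, b in zip(blocks, blocks[1:]))

contiguous = list(range(10_000, 10_100))           # 100 blocks in a row
fragmented = random.sample(range(1_000_000), 100)  # 100 blocks scattered

print(head_travel(contiguous))   # 99 -- one short sweep
print(head_travel(fragmented))   # orders of magnitude more head movement
```

The absolute numbers mean nothing; the point is the ratio, which is why defragmenting a heavily fragmented, mostly full volume can help.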

Re:defrag? (1)

MobileTatsu-NJG (946591) | more than 7 years ago | (#18442593)

"What is "defrag"?"

It's when you kill a team-mate and lose a point.

hmm.. (5, Informative)

Anonymous Coward | more than 7 years ago | (#18442185)

I am the Sr. Storage Architect for a Fortune 100 company. If you gave the specific type of array you have, I'd be able to give more specific advice. That said:

1. You should have at least two fibre cards in each box anyway, and it has nothing to do with throughput.

2. Generally, your bottleneck is the disks themselves. If you want to increase performance, you need to increase the number of spindles the data is striped across. Depending on the type of array, this may be a non-disruptive operation. The other big thing to look at is the type of RAID being used. You can usually get better performance from something striped with RAID 10 vs. RAID 5, especially for write-intensive data, because RAID 5 incurs an I/O write penalty in calculating parity.

3. If you are going to defrag, do it on the server. It could help. There are some defrag functions available in most mid-tier storage arrays, but it isn't what you think: the defrag there typically refers to lining up LUNs in a RAID group. So if you have a RAID group with 5 LUNs in it, then delete one, you end up with a big empty space in the middle of the group. Defragging that RAID group lines up all the LUNs inside it.
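The RAID 5 write penalty mentioned above is easy to put rough numbers on. A sizing sketch, assuming the textbook small-write penalties (4 back-end I/Os per random write for RAID 5: read data, read parity, write data, write parity; 2 for RAID 10) and an invented per-spindle IOPS figure:

```python
# Back-of-envelope spindle sizing. The workload mix and the 150 IOPS
# per-disk figure are assumptions, not measurements from any real array.
def backend_iops(read_iops, write_iops, write_penalty):
    """Back-end I/Os the disks must absorb for a given front-end load."""
    return read_iops + write_iops * write_penalty

def spindles_needed(backend, iops_per_disk=150):
    """Ceiling division: disks required to serve the back-end load."""
    return -(-backend // iops_per_disk)

workload = dict(read_iops=800, write_iops=400)

for name, penalty in [("RAID 10", 2), ("RAID 5", 4)]:
    total = backend_iops(**workload, write_penalty=penalty)
    print(name, total, "back-end IOPS ->", spindles_needed(total), "disks")
```

Same front-end load, but RAID 5 needs noticeably more spindles to keep up, which is the poster's point about write-intensive data.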

Re:hmm.. (1)

sarathmenon (751376) | more than 7 years ago | (#18450113)

I am the Sr. Storage Architect for a Fortune 100 company.

Hmm, are you a friend of Essjay [] ?

Wrong side of the problem (2, Interesting)

JamesTRexx (675890) | more than 7 years ago | (#18442187)

Maybe you're looking at it all wrong.
You state that the disk load is high in the morning when everyone logs in with roaming profiles, which suggests to me that the roaming profiles are way too large.
Depending on the Windows versions used, move the contents of the "My Documents" folder to their personal network shares (give them one if they don't have any), tell them to move data in their Desktop folders to that share and keep only shortcuts, and maybe even set a mandatory quota limit on the clients.
Check your favorite search site on "Windows reduce roaming profile size" for more tips.
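If you want to measure rather than guess, a small script can report the largest folders under the profile share. A sketch; the UNC path is hypothetical:

```python
# Walk a directory tree and report the biggest subdirectories --
# a quick way to spot what's bloating roaming profiles.
import os

def dir_sizes(root):
    """Map each directory under root to the total size of its own files."""
    sizes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        total = 0
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip files we can't stat (in use, no permission)
        sizes[dirpath] = total
    return sizes

if __name__ == "__main__":
    sizes = dir_sizes(r"\\fileserver\profiles")  # hypothetical share
    for path, size in sorted(sizes.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{size / 2**20:8.1f} MB  {path}")
```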

Re:Wrong side of the problem (1)

pnutjam (523990) | more than 7 years ago | (#18443023)

I don't think this is a problem. Our roaming profiles are only used for Citrix, which is where the majority of our users are. I have user directories going to a network drive (on the same server as the profiles). I've never seen a significant hit on the network cards on either the file/print/PDC or the Citrix servers. I've also never seen a significant hit on any of the switch ports (the servers are on gigabit).

Re:Wrong side of the problem (2, Informative)

Bastardchyld (889185) | more than 7 years ago | (#18444793)

I believe he is actually referring to the disk usage caused by having to copy the contents of My Documents out to the workstation, not the network utilization. He is saying you should redirect My Documents to a file share. Then it is not included in the roaming profiles that are copied, which will reduce your disk access at the peak login/logout times. It could also solve your problem if the roaming profiles are the culprit. Although an assessment of your SAN probably would not hurt anything.

Re:Wrong side of the problem (1)

JamesTRexx (675890) | more than 7 years ago | (#18456905)


Sounds familiar (1)

Kraegar (565221) | more than 7 years ago | (#18442519)

I work in IT, in healthcare. I manage our Storage, and our AIX hosts. We boot from SAN.

We boot some hosts from our SAN (McData SAN switches, IBM SVC, multiple DS4800s). First, is it your SAN that's bottlenecking (the switches), the storage controller, or the hosts? When they bottleneck, are you seeing a lot of paging? Large roaming profiles loading all at once could be causing you to page, and since your swap is out on your storage controller, you're doing double duty and paying a penalty for it. As others have said, work on reducing the size of the roaming profiles. If you're doing heavy paging, fix it: buy some RAM, or move things around so it's not an issue.

Booting from a SAN can work fine, but it has some disadvantages to it besides the obvious. It means you have to watch things closer, keep histories of your IO utilization on a per-path basis. That's the only way to find where the bottlenecks are, and when they start. You need to know exactly which host, what paths it has from the HBA(s) all the way to the disk, and where you're hitting the limits along that path.

Re:Sounds familiar (1)

pnutjam (523990) | more than 7 years ago | (#18443121)

I don't have any good way to measure throughput on my fibre cards or McData switch (which I should probably start monitoring). The bottleneck seems to show up in the disk load. I attached some graphs here [] to show what I'm looking at.

Our page file is on a dedicated partition on the SAN as well. I notice it is usually 80% utilized, and at night when our backups run it goes close to 100%. Our disk load also spikes at that time, but not as high as it does in the morning. When I get the high disk-load spikes in the morning, the page file seems to hold at 80%.
I speculate this is because the things being accessed in the morning are more random, so the server doesn't load them into memory or use the page file, while the backup client probably stages things to memory and uses the page file.

Re:Sounds familiar (1)

Kraegar (565221) | more than 7 years ago | (#18443281)

If you're using QLogic HBAs, they have something you can install called SANsurfer that's good for looking at performance-type things. Newer McData switches have some built-in performance monitoring, but I'm not familiar with the older models. There are some awesome software packages to help track utilization, but they're pricey. (IBM's TPC for Data is stellar, and is priced per TB managed.)

Defragging at the OS level might help you, and there's no harm in it. The more sequential (at both a logical placement, and physical allocation of LUNs) the data is, the better you'll do...

Re:Sounds familiar (1)

Sobrique (543255) | more than 7 years ago | (#18443947)

What's that disk load counter actually measuring though? I had a look, but I can't actually tell.

Disks are a bit more complicated than processors or memory in terms of measuring how much of their 'performance' is in use.

Factors to look at include most of the ones you'll see under 'PhysicalDisk' in Windows perfmon.

I/Os per sec and bytes transferred per sec are of interest, but the one that's _really_ an indicator of performance is disk queue length. A long queue means that, for whatever reason, requests are not being serviced fast enough to keep up with the system. There will usually be some items queued during a burst load; it's when your queue is high and remains high that you have a problem indicator.
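Little's law ties those perfmon counters together: average queue length is roughly IOPS multiplied by average response time. A sketch with invented numbers:

```python
# Relating "Avg. Disk Queue Length" to IOPS and latency via Little's law.
# The workloads below are made-up examples, not measurements.
def avg_queue_length(iops, latency_ms):
    """Expected outstanding I/Os: arrival rate x time in system."""
    return iops * (latency_ms / 1000.0)

# Healthy volume: 400 IOPS at 5 ms latency -> queue around 2.
print(avg_queue_length(400, 5))
# Same 400 IOPS at 50 ms latency -> queue around 20: requests backing up.
print(avg_queue_length(400, 50))
```

The same I/O rate with ten times the latency means ten times the queue, which is why a persistently long queue points at slow service (contended spindles) rather than raw demand.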

You might also find that you can tweak the queue depth on your HBAs. Check with your manufacturer what your HBA settings _should_ be for their particular array. I seem to recall that Windows has a 'design feature' that can lead to the queue depth being set to 0 in some situations on a SAN, which leads to every I/O being confirmed before the next I/O will happen, and vastly reduces performance.

I'm afraid I can't recall any more, and a quick google doesn't bring anything to light, but ... well, chances are if you're using your 'default' HBA settings, they will be wrong. At the very least most SANs can handle a _much_ larger SCSI queue depth. So yes, check your HBA parameters, and compare them to what your vendor recommends. Confirm you have the latest driver, and check if there's a more specific driver/firmware for your particular array. (Some array vendors 'brand' their own, with the right defaults, settings and that have been tested)

SANs (4, Informative)

Sobrique (543255) | more than 7 years ago | (#18442523)

The fact you're using a SAN is likely to be fairly irrelevant here. SANs are a way to move data between server and disks. They're not really much more complicated than that.

First question: what are the symptoms of the problem - how do you know you're 'pegging your disks'? If you're seeing really high I/O load on your HBAs, then yes, you might find that you need to upgrade them. From experience, though, HBAs are rarely your limiting factor.

Much more likely is that you're experiencing local disk fragmentation, as you correctly point out. I can't offer specific advice for your array, but in my experience, SANs are 'blind' to filesystems. They work on disks and LUNs. LUNs are the devices a host sees, and these can safely and easily be defragmented in all the normal ways you would do it locally.

Are you accessing your SAN over fibre channel or iSCSI? If it's fibre, then again, you _may_ have network contention, but it's unusual in my experience (especially on a 17-server SAN). If it's iSCSI, then you have contention to worry about. Is it possible that your 'gimme profile' requests across your network are also contending with your iSCSI traffic?

You may find that your SAN has 'performance tools' built in. That's worth a look, to see how busy your spindles are. Because of the nature of a SAN, you may find that LUNs are sharing the same physical disks. This can be a real problem if you've done something scary like using Windows dynamic disks to grow your filesystem - imagine having two LUNs striped when, in actuality, on the back end they're two different 'bits' of the same RAID 5 set. This is bad, and is worth having a look at.

One place where SANs do sometimes have issues is page files, which is possibly a problem if you're SAN booting. SANs have latency, and Windows doesn't like high latency on page files. If you really push it, it'll start bluescreening.

This is fixed by local disks for OS, or just moving swap file to local disk.

HBA expansion _might_ improve performance, assuming this is your bottleneck. However you'll need to ensure you are multipathing your HBAs. (Think of them like network cards, and you won't go far wrong - you need to 'cheat' a bit in order to share network bandwidth on multiple cards). But like I say, you probably want to check this is actually a problem. If they're not very old, then it's unlikely, although it might be worth checking which internal bus the HBAs are on. (Resilience and contention).

It's possible your SAN is fragmented, but it's unlikely this is your problem - SANs don't have the same problem with adding and deleting files (LUNs) so all your backend storage will be in contiguous lumps anyway.

And I apologise if I use terminology that you're not familiar with. Each SAN vendor seems to have their own nomenclature for the 'bits', but they all work in roughly the same way. You have disks, which are... well, disks. You have RAID groups, which are disks bundled together with RAID 1, RAID 1+0, RAID 5 (with variable parity ratios), and very occasionally RAID 0. You have LUNs - Logical Units. These are... well, chunks of your bundles of disks. The first 100MB of a 5-disk RAID 5 group might be a LUN. The LUN is what the host 'sees' as a single atomic volume. Most disk groups can have multiple LUNs on them, which is why you do need to watch how volume management is operating. I have seen a case where a Windows 2000 server added a second LUN and used dynamic disks to stripe, not realising that on the back end both those LUNs were on the same RAID 5 (4+1), which caused the disks to seek back and forth continually and really hurt performance.

Oh, and this is also probably a good excuse to be booking SAN training. IMO SANs are fun and interesting, not to mention in demand and well paid :)

Pay someone to come in and take a look at it. (0)

Anonymous Coward | more than 7 years ago | (#18442583)

Presumably you've RTFMed already, checked the manufacturer forums and googled Usenet and drawn a blank. May I suggest that Slashdot isn't really the next logical place to look? You'll get 101 posts from people with no experience of your kit explaining why you should have some disks in shoeboxes connected to some old Linux PCs instead. You might get lucky and get someone who has a similar config to yours who can help a bit; but because they can't touch your network, they're not going to be able to say much. You'll also get the odd smug idiot - in this case me - who's no help whatsoever. So seriously, get someone in. If you don't know anyone, talk to the manufacturer, or ask around, and get them to recommend someone.

As Hilaire Belloc put it:

Lord Finchley tried to mend the Electric Light
Himself. It struck him dead: And serve him right!
It is the business of the wealthy man
To give employment to the artisan.

Re:Pay someone to come in and take a look at it. (1)

Sobrique (543255) | more than 7 years ago | (#18442711)

Actually, looking at the responses, there are quite a few people replying who would appear to have 'real world' SAN experience.

Getting someone in to fix it _may_ end up being the right choice, but it does help to check first, where the problem lies - there's no point in getting a 'SAN expert' in if your problem is merely filesystem fragmentation.

SANS don't need to be defragged... (1)

illumin8 (148082) | more than 7 years ago | (#18442679)

SANs do not need to be defragged. Let me be a little more clear on this: when you allocate a LUN (logical unit) on your SAN and present it to a server, you are doing one of two things:

1. Presenting physical spindles to the server as raw disks -or-
2. Presenting a RAID volume to the server, which consists of a section of many disks.

All SAN vendors that I'm aware of allocate LUNs as contiguous areas of disk. It's faster this way because heads don't have to seek very far to find data within the same LUN. Even if you're allocating a RAID 5 LUN spread across 20 disks, the SAN is going to take a small section of each of those 20 disks and that is dedicated to your LUN. It's a contiguous section and it stays the same throughout the life of the LUN, that is unless you extend it or grow the LUN at a later date.

Now, the server's data on that LUN can and will become fragmented, but this needs to be taken care of at the OS level, not the SAN level. If you're running Windows file servers, use the Windows defrag tool. If you're using Veritas file system they have a defragmentation tool of their own.

There is one possible way that SANs can become fragmented but it's very unlikely that this would affect performance: If you grow or extend your LUNs many many times by adding new sections to them, I suppose this could theoretically affect performance adversely, but it's highly unlikely.

I seriously doubt fragmentation is an issue. More likely some of your users think it's ok to have gigabytes worth of data on their desktop and roaming profiles are killing you. This is a user education issue, not a SAN issue.

Re:SANS don't need to be defragged... (1)

Kraegar (565221) | more than 7 years ago | (#18442925)

If you do a lot of extending, reducing, or deleting of LUNs, you can get fragmentation (especially with deleting LUNs, as new LUNs fill in the leftover gaps of space). That's pretty dependent on whose storage controller you're using, how many spindles, LUN size, etc.

I'd tend to agree it's not usually a problem. But if the storage controller has been in place for a long time, with multiple admins, hosts added and deleted, etc., the (mis)management of it over the years could have led to lots of little accumulated changes. Any decent storage controller, though, has built-in software to see which LUNs are "hot" and what disks they're hitting within a RAID group. If you've got one hot LUN hitting a small number of disks, it might be good to reorg it on the SC side, or move it to a new LUN spread across more spindles.

Re:SANS don't need to be defragged... (1)

pnutjam (523990) | more than 7 years ago | (#18443151)

I should have mentioned our roaming profiles are only used for Citrix. We don't publish a desktop. I have noticed that some users who have excessive login times have a profile that uses much more space on disk than the size of the profile. Recreating the user's profile completely seems to clear this up. That's one of the things that makes me look at disk fragmentation.

Re:SANS don't need to be defragged... (1)

lanswitch (705539) | more than 7 years ago | (#18443277)

the consensus seems to be that there is
a) too little information
b) no good reason to associate the problem with the SAN and
c) a noted problem with the profiles you are using.

SAN & "Disk" load (1)

straybullets (646076) | more than 7 years ago | (#18442775)

You need to know the actual layout on the SAN's physical disks, that is, how many spindles are available to each of your servers and which servers share the same set of spindles.

The most likely cause of bad performance is that the same spindles are overloaded while others do nothing; it is very rare to have the link elements (fibre and cards) overloaded. As another poster noted, you need to know the load on your disks to decide whether the link may be at fault - for example, are you doing more than 1 Gb/s of I/O on one card? But most likely it's the physical layout on the SAN that's at fault, and in that case you will need to redesign it: adding more spindles is one solution, or you could reorganise the SAN by moving LUNs around different sets of disks.
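A quick way to rule the link in or out is to compare observed throughput against what the fibre link can carry. A sketch; the 35 MB/s figure is an example, and it assumes FC's 8b/10b encoding (a 2 Gb/s link moves roughly 200 MB/s of payload):

```python
# Fraction of a fibre channel link's usable bandwidth actually in use.
# Observed MB/s is an invented example figure, not a measurement.
def fc_utilisation(mb_per_s, link_gbit=2.0):
    # 8b/10b encoding: 10 line bits per data byte, so a 2 Gb/s
    # link carries about 200 MB/s of payload.
    usable_mb_s = link_gbit * 1000 / 10
    return mb_per_s / usable_mb_s

print(round(fc_utilisation(35), 3))  # 0.175 -- nowhere near the link limit
```

If the number comes out well under 1, as it usually does for random small-block workloads, the spindle layout is the place to look, not the link.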

Regarding defrag, it makes no sense to me at the SAN level, as it could only mean moving LUNs around a set of disks, which consolidates contiguous space but does not impact performance. Defrag at the filesystem level makes sense, just as with standard DAS.

SQL is a Memory Hog (1)

jafiwam (310805) | more than 7 years ago | (#18442929)

Maybe your SQL servers need a few more gigs of RAM, or the databases themselves need some TLC.

An SQL server doing something that is too big for it can get you in "slower than my last 486" territory pretty quick.

I know very little about SANs, but assume the file system is pretty fast... so maybe it's not the problem at all.

SANS and SQL (1)

theonetruekeebler (60888) | more than 7 years ago | (#18443033)

By and large, SANs don't need defragging: they do so much striping and mirroring internally that there's almost no guarantee that what your servers think are adjacent blocks are anywhere near each other on the SAN itself.

Where I would take a look is at your RDBMS. If you're getting 80% disk utilization at the SAN you may be doing far more sequential/full-table scans than you need to be. Turn explain plan on and start looking for opportunities to add indexes.
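The poster's RDBMS isn't named, so as a stand-in, SQLite's `EXPLAIN QUERY PLAN` shows the kind of before-and-after you'd hunt for when adding an index (the table and index names here are invented):

```python
# Spotting a full-table scan and fixing it with an index,
# using SQLite's EXPLAIN QUERY PLAN as a stand-in for "explain plan".
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits (patient_id INTEGER, note TEXT)")

def plan(sql):
    """Return the query plan detail text for a statement."""
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM visits WHERE patient_id = 42"
print(plan(query))   # reports a SCAN: every row is read

con.execute("CREATE INDEX idx_patient ON visits(patient_id)")
print(plan(query))   # now a SEARCH ... USING INDEX: targeted lookup
```

Each query that moves from a scan to an indexed search is I/O the disks no longer have to do, which shows up directly in SAN-side utilization.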

Finally, check virtual memory on the connected servers. If they're mapping VM to a SAN volume, you've got all sorts of pointless trouble. Always swap to local disk, because most Unixes and Windows both do pre-emptive paging and the 80% disk loading you see at login time may be from memory images of the new applications being mapped out to the SAN.

I'll be curious to see how this works out for you.

Re:SANS and SQL (1)

pnutjam (523990) | more than 7 years ago | (#18444137)

A couple of people have mentioned moving my swap volume to a local disk. I will do that because it will hopefully help, but there is also no reason for it to be on the SAN; it doesn't facilitate disaster recovery. I should have thought of that sooner.

I'll try to post here whether that helps or not.

Re:SANS and SQL (1)

edmudama (155475) | more than 7 years ago | (#18460681)

remote swap on a shared resource is a disaster under load

A few answers (3, Informative)

sirwired (27582) | more than 7 years ago | (#18443113)

1) No, it isn't your Fibre cards. The PCI-whatever bus (or the line speed of the card, for that matter) usually only affects high-bandwidth operations like tape backup. One thing you must remember is that loads that beat the crap out of disks (random operations spread all over the platters) barely touch the I/O bus of the Fibre adapters, which cares only about total throughput.
2) It is far more likely your OS needs defragging than your disk array. Your disk array CAN become fragmented if you add and delete LUNs often, though.
3) Yes, you need multiple fibre cards, but for redundancy, not for bandwidth.
4) Try and put your major workloads on their own RAID arrays on your disk controller.
5) Check to see if you have enough memory in those boxes. If you have one server that keeps swapping out to disk and you are booting from SAN, you are going to get very hosed, very quickly. If these boxes have any internal disk at all, put the swap there.
6) If it is possible with your arrays, max out the segment size. (Engenio/LSI - based arrays can do this.)

This should be enough to get you started.
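The point in (1) about random I/O barely touching the link can be sanity-checked with rough arithmetic (the workload figures are examples, not measurements):

```python
# How little link bandwidth a punishing random workload actually uses.
KB = 1024

def bandwidth_used_mb_s(iops, io_size_kb):
    """Link bandwidth consumed by a given I/O rate, in MB/s (1 MB = 1024 KB)."""
    return iops * io_size_kb / KB

# 2000 random 8 KB I/Os per second is brutal for spindles, yet it is
# only ~15.6 MB/s -- a small fraction of even a 1 Gb/s (~100 MB/s) link.
print(bandwidth_used_mb_s(2000, 8))
```

That is why a faster HBA or bus rarely fixes a "slow disks" problem: the spindles saturate on seeks long before the link saturates on bytes.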



C_Kode (102755) | more than 7 years ago | (#18443279)

Chances are your problem is memory. Of course your database could be *ugly* too, but if your problem pretty much happens only in the morning during logons and such, you're probably choking your servers with a lack of memory.

Doesn't sound like a fragmentation problem... (1)

teflaime (738532) | more than 7 years ago | (#18443519)

A SAN doesn't write directly to the disk, so a "high disk utilization" situation in the OS is not really a high disk utilization situation in the SAN. With the SANs I have worked with, it either means the disk array doesn't have enough memory (some vendors can expand it, some cannot) or you are having contention issues from improper zoning. Other possibilities that occur to me are not enough memory on the server (although my experience is with Oracle, not MSSQL, in the case of DB servers) and a poor SAN design that spools database data to the same spindles as another I/O-intensive application. I would recommend you get someone who knows the SAN product you are using to look at your setup.

Zoning and Drivers (0)

Anonymous Coward | more than 7 years ago | (#18444643)

Ensure the zoning on your switches is set up properly and each server only sees the necessary devices (array, tape library, etc.). Since you are running Windows, ensure you are using the respective STORport drivers for the HBAs and not the SCSIport drivers. SCSIport drivers are a dated technology; when our environment was fielded we had some huge issues, and updating to STORport drivers resolved some of our I/O-related problems.

I would look into downloading IOMeter and carving up a test LUN for one of the problematic servers and doing some benchmarks after-hours when customer impact will be limited.

Hope you figure your issue out and share your information with us. It will be interesting to see what the actual problem was. A lot of good advice above; hope everyone is able to help you out! Good luck!

"I think the disks need to be defragged" (0, Troll)

Gothmolly (148874) | more than 7 years ago | (#18445411)

The fact that you run a shop, that uses a SAN, and you uttered this statement, means one thing: You are hopelessly underqualified for the job. a) You do not defrag disks, you defrag filesystems, b) you don't need to defrag on a modern OS (are you running WfWG as your file server?), c) you mention no actual data that the disk utilization is "high", and d) you asked Slashdot instead of calling your vendor and asking for an SE.

ITT may be able to help you, ask for their syllabus.

Re:"I think the disks need to be defragged" (1)

lanswitch (705539) | more than 7 years ago | (#18446545)

taking cheap shots at somebody who asks for help, and not answering the question itself is rude. you should apply for a position at our helpdesk!

Duh - Call the manufacturer (1)

NateTech (50881) | more than 7 years ago | (#18454101)

Ask Slashdot or call the manufacturer, let's see... which one will make me look more like a huge retard not qualified to do my job?


SAN performance tuning (0)

Anonymous Coward | more than 7 years ago | (#18467155)

The first thing I'd try is changing the position of the SAN (data) relative to your servers. Many sites locate the SAN storage at the bottom of a rack, with the servers above the SAN. This can lead to "data pooling" in the SAN, similar to the way your face goes red if you hang upside-down for too long.

If your server load is mostly "read" access, then the SAN has to pump all the read data "up hill" to the servers. In this case the SAN should be above the servers (the higher the better). OTOH, if you have mostly "write" access, then the typical "server above SAN" configuration is correct, as this allows all the writes to flow down to the SAN easier.

Another thing to be aware of is the directionality of your SAN fibre-optic cables. Make sure that you orient the fibre cable's "input" end to the server's "read" port, and the "output" end to the server's "write" port. If you get the cable orientation wrong you end up forcing the data "against the grain" in the cable, and this will greatly decrease performance. You can detect this problem simply by touching the fibre cable - if the cable is the wrong way around then the extra "friction" will cause it to be warm to the touch - just reverse its direction and performance should be much better.

I doubt that "defragmenting" your SAN disks would help - but you might want to try a "disk scrub". Many older disks develop a "rust-like" build up on the disk platters, and this can retain a magnetic charge. Try opening the disks and give the platters a good scrub, nothing too abrasive of course. I've found a toothbrush and toothpaste works fine - make those platters nice and shiny!

I hope that's helpful - and remember *always have a backup*!

Data B. Gone.
Mircosoft Certified Data Loss Expert (MDLE)