Queue Depth


Queue Depth

Post by mtjones »

Hi All,

(first post)

I'm just configuring a new Windows Server 2008 R2 host for our 2-node 3PAR V400 array. We have a total of eight (8) 8 Gbit/s target ports on the array, split between two HP B-series (Brocade) SAN fabrics. The server has a connection to each fabric via an AP769B (Brocade) HBA.

Should I be looking to tweak the server's HBA config from the outset with things like max I/O size, max queue depth, etc.?

Any advice well received!

Cheers,
Mike

Re: Queue Depth

Post by zQUEz »

I'm of the school of thought that you should ensure your drivers and firmware are up to date, but otherwise run with the standard settings unless or until you have a reason/issue to address.

We use DCX switches and don't tweak anything from standard unless needed.

Re: Queue Depth

Post by Richard Siemers »

I concur with zQUEz: up-to-date drivers/firmware, and minimal host edits beyond that. The only good case I have heard for tweaking queue depth on a host is if you intentionally need to throttle it down to use fewer storage resources. Reducing the queue depth can rein in a high-traffic box if it's causing trouble and victimizing other hosts on the SAN; however, I find dropping the speed at the SAN switch port easier to manage and far more effective.
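
To put rough numbers on the throttling idea: by Little's Law, outstanding I/Os = IOPS x latency, so capping the queue depth caps the IOPS a single host can drive. A quick Python sketch with made-up latency figures, just for illustration:

Code:

# Little's Law: outstanding I/Os = IOPS x latency, so queue depth bounds IOPS.
def max_iops(queue_depth, latency_ms):
    """Upper bound on IOPS one path/LUN can sustain at a given service time."""
    return queue_depth / (latency_ms / 1000.0)

# Illustrative only: compare a typical default depth to a throttled one at 1 ms.
for depth in (32, 8):
    print(f"queue depth {depth:>2}: ~{max_iops(depth, 1.0):,.0f} IOPS ceiling")
# queue depth 32: ~32,000 IOPS ceiling
# queue depth  8: ~8,000 IOPS ceiling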

Zoning and round-robin balancing are important. I suggest each server port be zoned to each node for a 1:2 fan-out ratio.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.

Re: Queue Depth

Post by mtjones »

Thanks, gents. Have stuck with the standard settings on the latest drivers/firmware. All going well in testing at the moment using round-robin load balancing.

Richard: interesting point you make about the zoning. We have always zoned 1:1, as recommended by the HP consultant who implemented the 3PAR with us this time last year.

Re: Queue Depth

Post by Richard Siemers »

I think we may be talking about two different best practices that confusingly sound the same.

I suspect that you are referring to single-initiator/single-target zoning. That practice exists to avoid unintended communication between zone members, and it limits each zone to one path. If you add a third member to a zone, the zone allows all three members to talk to each other, which is generally unintended. Most commonly I see one host HBA and two storage ports in the same zone, and generally this works fine; however, it will permit the two storage ports to talk to each other if one should try to log into the other (Clariion SANcopy comes to mind).

The concept I was recommending with a 1:2 fan-out is that one HBA port should have paths to two front-end ports. If you follow the best practice you mentioned, that means you would need twice the number of zones, rather than just adding one more storage front-end port to the existing zone.
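
To make the zone bookkeeping concrete, here is a small Python sketch (the WWPNs are made up, and real zones are of course created on the fabric switches, not in Python) showing that single-initiator/single-target zoning with a 1:2 fan-out means one zone per HBA/port pair:

Code:

# Hypothetical WWPNs, for illustration only.
host_hbas = {"host1_hba1": "10:00:00:05:1e:00:00:01"}
storage_ports = {"3par_n0_p1": "20:01:00:02:ac:00:00:01",
                 "3par_n1_p1": "21:01:00:02:ac:00:00:01"}

def single_initiator_zones(initiators, targets):
    """One two-member zone per (initiator, target) pair."""
    return {f"z_{i}_{t}": (i_wwpn, t_wwpn)
            for i, i_wwpn in initiators.items()
            for t, t_wwpn in targets.items()}

for name, members in single_initiator_zones(host_hbas, storage_ports).items():
    print(name, members)
# One HBA zoned to two nodes -> two zones, twice what a 1:1 layout needs,
# but each zone still holds exactly one initiator and one target.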

The reason I do this is to distribute host load over multiple ports and avoid storage front-end port saturation. If your hosts are running at 8 Gbit/s and your storage ports are running at 8 Gbit/s, then a 1:1 fan-out ratio leaves the potential for one host to saturate a storage port or two. By splitting each HBA's traffic down two paths, the worst a single host can do is push a storage front-end port to 50%. I have 157 hosts connected across 32 front-end ports (two T800s with 4 nodes each), arranged into 8 "sets" of 4 front-end ports that I manually assign hosts to when they are initially connected to the SAN.
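
The arithmetic behind that 50% figure, sketched in Python (link speeds taken from the example above, traffic assumed to split evenly across paths):

Code:

# Worst case a single host can drive one storage front-end port, assuming
# equal link speeds and an even traffic split across the fan-out paths.
host_gbps = 8.0      # host HBA link speed
storage_gbps = 8.0   # storage front-end port link speed

for fan_out in (1, 2):  # paths per HBA port
    worst = (host_gbps / fan_out) / storage_gbps
    print(f"1:{fan_out} fan-out: one host can push a port to {worst:.0%}")
# 1:1 fan-out: one host can push a port to 100%
# 1:2 fan-out: one host can push a port to 50%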
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.