HPE Storage Users Group
https://www.3parug.com/

AO conundrum??
https://www.3parug.com/viewtopic.php?f=18&t=1100
Page 1 of 1

Author:  ibar78 [ Tue Jan 06, 2015 6:44 am ]
Post subject:  AO conundrum??

Hi

I was wondering if someone more experienced with 3PAR can help me with this conundrum. I recently started a new job, and this is the first time I've dealt with SANs, and HP 3PAR in particular. We have a three-tiered 3PAR system in place with a total storage capacity of approx. 55TB, as follows:

Tier0 – SSD 2.6TB (7% free)
Tier1 – FC 22TB (70% free)
Tier2 – NL 30TB (1.92% free)

We keep getting major alerts that SSD raw space usage is above 85% and NL raw capacity is above 95%. All our CPGs are configured to use all three tiers in performance mode, but AO seems to be using mainly tier0 (SSD) and tier2 (NL) and under-utilising tier1 - can someone explain why this is happening?

To address this I changed one of the AO configurations and removed tier2, leaving only tier0 and tier1, in the hope that AO would shift data from NL to FC. After leaving this for over 24 hours, however, no data appears to have moved from NL to FC. Is this the right way to shift data about, or is it better to simply create a new two-tiered CPG and move the virtual volumes onto it instead?
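For reference, both approaches can be sketched in the 3PAR CLI. This is only a sketch: the CPG, volume, and AO-configuration names (CPG_SSD, CPG_FC, AO_SSD_FC, myvol01) are hypothetical, and the exact flag spellings should be verified with the CLI's built-in help (e.g. "help createaocfg", "help tunevv") on your InForm OS release before running anything:

```shell
# Option 1: a two-tier AO configuration that only uses the SSD and FC CPGs,
# so AO has no NL tier to demote into (names are hypothetical):
createaocfg -t0cpg CPG_SSD -t1cpg CPG_FC AO_SSD_FC

# Option 2: migrate a volume's user space into a different CPG directly.
# tunevv performs the region moves in the background:
tunevv usr_cpg CPG_FC myvol01
```

Note that the second option moves the whole volume regardless of access patterns, whereas AO only relocates regions based on its measured I/O averages.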

If anyone can help me and provide advice on how best to configure our CPGs and AO so as to avoid generating free capacity alerts that would be greatly appreciated.

Many thanks :)

Author:  hdtvguy [ Tue Jan 06, 2015 12:35 pm ]
Post subject:  Re: AO conundrum??

There is some info you may need to get from HP that explains how AO looks at metrics to determine movement. We have been complaining about how poor AO is at effectively using the tiers; it seems to do the same for us: push data to the top or the bottom, with little landing in the middle. We have also been told that AO's job is to push data down even in performance mode; the different modes just change the formula AO uses to move data up or keep it up. Again, a poor design IMO, as you can have huge space available in a higher-tier CPG, yet the array will let itself crush NL performance rather than take some data that "may benefit" and push it up to at least help alleviate the bottlenecks on NL.

Author:  Cleanur [ Tue Jan 06, 2015 1:17 pm ]
Post subject:  Re: AO conundrum??

The problem with AO is that it's designed to move the data to the most appropriate tier based on averages throughout the day, which is often not where you believe the data should reside. Potentially it could be made more aggressive, but without some careful planning that can have the reverse effect :-) and I have customers who are also in that reverse situation.

You could try setting a capacity warning (NOT A LIMIT) on the NL CPG to reduce the amount of space AO can consume in that CPG, which should push some data back to FC. You can also set the various capacity alerts with setsys RawSpaceAlertXXX, or look at suppressing them at the SP. Once you have the data where you want it, you can also look at using the min_iops option on the startao command. All of the above is discussed in the linked document below.

http://h20195.www2.hp.com/v2/GetPDF.asp ... 867ENW.pdf

Adaptive Flashcache might also be an option.
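A sketch of those knobs in the CLI. The CPG name, threshold values, and AO-config name below are all hypothetical, and the flag spellings and units should be checked against "help setcpg", "help setsys", and "help startao" on your firmware level:

```shell
# Growth warning (NOT a limit) on the NL CPG, so AO stops filling it
# and pushes data back to FC (units per "help setcpg"):
setcpg -sdgw 20480 CPG_NL

# Adjust the raw-space alert thresholds per device type (values in GB):
setsys RawSpaceAlertNL 500
setsys RawSpaceAlertSSD 100

# Once data is where you want it, set a minimum-IOPS floor so AO does
# not demote regions below that service level (flag name per the AO paper):
startao -min_iops 50 AO_cfg
```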

Author:  ibar78 [ Wed Jan 07, 2015 4:52 am ]
Post subject:  Re: AO conundrum??

Very informative posts, guys, many thanks - I'll certainly have a look at the HP documentation.

Again many thanks for your help :)

Author:  JohnMH [ Wed Jan 07, 2015 7:42 am ]
Post subject:  Re: AO conundrum??

If you're looking to re-engineer the AO configuration, take a look at page 17 of the linked document. With 3.2.1 firmware you have the option to filter AO moves based on VVsets within a CPG (use a VVset as a proxy for a given application), rather than every volume in that config / CPG being moved. So you now have much finer granularity around individual VVsets for AO reporting, monitoring windows and schedules, without the need for multiple AO configurations and the subsequent CPGs.

Author:  Architect [ Sun Jan 11, 2015 8:47 am ]
Post subject:  Re: AO conundrum??

Do you have TPVV volumes created directly on SSD? If not, you can set the warning level to 10GB to work around the error (setsys RawSpaceAlertSSD 10). Do this only if you have no volumes that grow directly into SSD; AO can then use all of the available SSD storage.

If you do have TPVVs on SSD, you'll have to carefully balance the AO usage (with warnings set on the AO SSD CPGs) against the growth of the TPVVs, to prevent running out of SSD storage.

(If it does run out of SSD storage, the 3PAR will be forced to add FC storage to the SSD CPG.)
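One way to check before lowering the alert, sketched in the CLI. The CPG name is hypothetical, and the option spellings should be confirmed with "help showvv" and "help showspace":

```shell
# Any volumes whose user space maps directly to the SSD CPG?
showvv -p -cpg CPG_SSD

# Estimated free space still available to that CPG:
showspace -cpg CPG_SSD

# If nothing grows into SSD directly, effectively mute the SSD alert
# (command taken from the post above):
setsys RawSpaceAlertSSD 10
```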

All times are UTC - 5 hours