How do you configure this? I'm new to 3PAR; I've been through the InForm Mgmt Console with a fine-tooth comb and can't find anywhere to configure something that dynamically moves storage based on performance.
The sales pitch was that I should be able to create a policy that moves storage around to higher-performing disk on the fly, as required. It sounds pretty neat, but there doesn't seem to be anywhere to configure it. Do I need an extra piece of software? InForm says I have Dynamic Optimization installed, but there's no mention of it anywhere else in the system, not even in the help files.
Dynamic Optimization???
- Richard Siemers
- Site Admin
- Posts: 1333
- Joined: Tue Aug 18, 2009 10:35 pm
- Location: Dallas, Texas
Re: Dynamic Optimization???
I suspect two separate features are being confused, but you bring up an interesting issue.
The name "Dynamic Optimizer" does SOUND like it should be the name of a product that dynamically optimizes something... right? Well, ask 3PAR marketing what they were thinking... I don't know.
Dynamic Optimizer, or D.O., is the tool used to move a LUN from one CPG to another hot and online; thus you can change its RAID type and settings on the fly. It is not dynamic in the sense of automatic, since this is a manually initiated task; it is dynamic in the sense that it can be done hot and online. The actual commands behind the feature are the tune commands, TUNEVV and TUNECPG for example, and in the IMC you can right-click on a VV (LUN) and select Tune to initiate the GUI equivalent.
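For example, from the CLI it would be something like this (a rough sketch; MYVV01 and FC_R5 are made-up names, and exact options can vary by InForm OS version):

cli% tunevv usr_cpg FC_R5 MYVV01   (moves the VV's user space into the FC_R5 CPG, hot and online)
cli% showtask                      (the tune runs as a background task you can monitor)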
I think the other feature you are looking for is Adaptive Optimizer and sub-volume tiering. Its policies are defined within "System Reporter", which runs on a separate server in your environment and is accessed via a web browser; the setup is a bit more involved than the typical 3PAR deployment. You will need to make sure that in your "sampling policies" under the Inservs tab, each of your systems has the option enabled to collect AO data. This was not enabled by default on my systems, but I don't have AO licensed either. There is a separate section for creating AO policies.
The name "Dynamic Optimizer" does SOUND like it should be the name of a product that dynamically optimizes something... right? Well, ask 3PAR marketing what they were thinking... I don't know.
Dynamic Optimizer, or D.O., is the tool used to move a LUN from one CPG to another hot and online... thus you can change its raid type and settings on the fly. It is not dynamic in a sense of automatic, since this is a manually initiated task, it is dynamic is a sense of it can be done hot and online. The actuall commands to use these features are called tune. TUNEVV, TUNECPG for example, and in the IMC, you can right click on a VV (lun) and select tune to initiate the gui equivelent.
I think the other feature you are looking for is Adaptive Optimizer and sub-volume tiering... policies are defined within "System Reporter", which runs on a separate server in your environment and accessed via web browser. The setup for which is a bit more involved than the typical 3PAR deployment. You will need to make sure that under your "sampling policies" under the Inservs tab, that each of your systems has the option enabled to collect AO data... this was not enabled by default on my systems, but I dont have AO licensed either. There is a separate section for creation AO policies.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
- Richard Siemers
- Site Admin
- Posts: 1333
- Joined: Tue Aug 18, 2009 10:35 pm
- Location: Dallas, Texas
Re: Dynamic Optimization???
P.S.
AO, unlike DO, moves only parts of a LUN from one CPG to another, based on the recorded historical activity of those parts. DO, unlike AO, moves an entire LUN from one CPG to another because a human told it to.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: Dynamic Optimization???
Thanks for that, it now makes sense. I can 'tune' my LUNs and hot-move them to other CPGs, so I guess DO is working. So the thing I'm looking for is AO, which I'm not sure I have. I do have a 'System Reporter' CD, but as far as I'm aware it's not installed anywhere. The only things I've got are the InForm Mgmt Console and a web interface to something called SPOCC.
Re: Dynamic Optimization???
SPOCC is basically a web interface for the Service Processor.
As Richard mentioned, AO is configured from within System Reporter - there are no settings for it in the InForm Mgmt Console.
If you are licensed for AO, it'll be listed as an 'Enabled Feature' in the IMC (under Systems >> InServs >> Software).
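If you prefer the CLI, showlicense should report the same thing:

cli% showlicense   (look for "Adaptive Optimization" among the licensed features)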
Re: Dynamic Optimization???
Cool, so next question: what to name the LUNs?
Normally I use something like SAN01-FC-R5-LUN01, but since I can 'tune' LUNs onto different disks with different RAID levels, I'm guessing the names should be as generic as possible (since they could be moved anywhere)?
Re: Dynamic Optimization???
As the RAID level is set at the CPG, one way of doing it is to name the CPG according to the RAID level and the type of disk, and then name your LUNs according to what's most useful for you. If you change the RAID level on a CPG, you only have to rename one CPG rather than a bunch of virtual volumes.
We do it like this:
CPGs (Disk type_RAID level)
e.g.
NL_R5
FC_R5
FC_R1
SSD_R5
We also use separate CPGs for virtual volumes that are part of an AO policy, e.g. AO_FC_R5.
Virtual volumes (Environment_Application_Type)
e.g.
PRO_MAIL_DB
PRO_ORA_DPP
TST_SQL_LOG
There's no right or wrong way of doing it. You may choose just to number your LUNs rather than indicate the application in the LUN name, etc. Whatever you find most useful for your environment.
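If you were creating CPGs named that way from the CLI, it would look roughly like this (a sketch using the names above; check the createcpg options against your InForm OS version):

cli% createcpg -t r5 -p -devtype NL NL_R5
cli% createcpg -t r5 -p -devtype FC FC_R5
cli% createcpg -t r1 -p -devtype FC FC_R1
cli% createcpg -t r5 -p -devtype SSD SSD_R5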
- Richard Siemers
- Site Admin
- Posts: 1333
- Joined: Tue Aug 18, 2009 10:35 pm
- Location: Dallas, Texas
Re: Dynamic Optimization???
I can share the naming policy I use, and it has served me well thus far.
For CPGs, I name them according to the Tier levels of storage I support.
DEV_TIER2_CPG_01 (02, 03, etc.) = FC, RAID 5, set size 9, magsafe, slow inner tracks
DEV_TIER3_CPG_01 (02, 03, etc.) = NL, RAID 5, set size 9, magsafe, slow inner tracks
PRD_TIER2_CPG_01 (blah blah blah) = FC, RAID 5, set size of 5 (we have 10 shelves and 4 nodes), cagesafe, FAST outer tracks
PRD_TIER3_CPG_01 = NL, RAID 5, set size of 5, cagesafe, FAST outer tracks
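The CLI equivalent of those settings would be roughly the following (a sketch; -ssz is the set size in chunklets, -ha mag/cage picks magsafe vs cagesafe layout, and I'm not showing the inner/outer track preference here):

cli% createcpg -t r5 -ssz 9 -ha mag -p -devtype FC DEV_TIER2_CPG_01
cli% createcpg -t r5 -ssz 5 -ha cage -p -devtype FC PRD_TIER2_CPG_01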
For VVs, we use the following naming convention:
(HOSTNAME)_(LUN#), and optionally we will append a (_ROLE).
Some examples:
R6KORAPD1_0 (bare minimum: hostname + LUN#)
NTFWEXCHDB1_7_DATA1 (bare minimum plus the name of the Exchange object on the drive)
NTFWEXCHDB1_8_LOG1
NTFWEXCHDB1_9_BU2D (backups to disk)
VMWARE_0_VMFS1 (name of the VMware datastore)
NTFWSQLD1_0_E (Name of the drive letter assigned)
NTFWSQLD1_1_F
NTFWSQLD1_2_G
Having the host name in the VV name makes it easy to filter the list of VVs by host.
Having the LUN#, you can easily unmap and remap drives back to hosts in their proper LUN locations, and clusters that share disks (we use the cluster name as the host name) can use the same LUN# for the same disk on both sides of the cluster. Plus, the LUN# is the ONLY way you can positively and uniquely match a LUN from the host's perspective to a LUN on the 3PAR across ALL operating systems. OS admins asking you to grow the J: drive, or hdisk7, don't help you identify which disk is the one to grow. However, "Grow LUN 10 on the server NTFWFSP1 to 100g" is an efficient way to communicate (unless you have two LUN 10s from two different 3PAR systems assigned to the same server).
Having the role appended at the end helps you make better sense of the performance data collected and evaluated by your System Reporter: a report of the top 10 LUNs by IOPS makes more sense when you see the host names and role information in the list. Plus it lets you expedite communications with your admins... "Hey Windows guy, why is your K: drive on that server constantly busy?"
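Putting it together, provisioning a new disk under this convention looks something like this (hypothetical names; syntax from memory, so verify against the CLI reference):

cli% createvv FC_R5 NTFWSQLD1_3_H 100g      (new 100 GB VV: host NTFWSQLD1, LUN 3, drive H:)
cli% createvlun NTFWSQLD1_3_H 3 NTFWSQLD1   (export it at the LUN# embedded in the name)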
We have recently done some SQL Server consolidations, and we have some beefy boxes running 30+ databases each. We designed a standard that uses mount points; each DB has a minimum of two LUNs, one for log and one for data (one of these servers has over 80 LUNs assigned). The idea is that we can move DBs from one server to another to balance workload by detaching the DB and moving the LUNs to a new host. This strategy is also ideal for snapshot backups/restores of individual databases. Another benefit is the ability to track, within 3PAR System Reporter and the performance tools, the workload and capacity on the server by database, as long as I label all the LUNs associated with a database with the DB name. For example:
NTFWSQLD1_56_PRIMSTR_DB
NTFWSQLD1_57_PRIMSTR_LG
NTFWSQLP1_12_PRIMSTR_DB
NTFWSQLP1_13_PRIMSTR_LG
I can then use the GUI, CLI, or reports to filter on PRIMSTR to get a list of all the resources used by that particular application, and monitor performance, growth rate, and so on.
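For example (glob patterns, from memory):

cli% showvv *PRIMSTR*   (every VV belonging to that application)
cli% statvv *PRIMSTR*   (live performance stats for just those VVs)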
Long story short, with a clever naming standard you can create functionality that doesn't otherwise exist.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.