If your FC drives are really hitting 400 IOPS, that would explain your performance issue. As a rule of thumb (random, 8k, 50/50 r/w):
- 10,000 RPM FC drives are rated for ~150-180 IOPS
- 15,000 RPM FC drives are rated for ~220-250 IOPS
Doing more is possible, but usually at a significant cost in response time. At 400 IOPS I would guess the drives are actually doing a lot of sequential read IO, but still at 40+ ms.
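For a rough sanity check you can translate those per-drive numbers into an array-level estimate. A minimal back-of-envelope sketch in Python (the drive count is a made-up example, and the factor-4 RAID 5 write penalty is a generic rule of thumb, not specific to your box):

Code:
# Back-of-envelope backend IOPS estimate for an FC tier.
# DRIVES_15K is a hypothetical example count, not your array.
DRIVES_15K = 16
IOPS_PER_15K = 235            # midpoint of the ~220-250 rating above

raw_iops = DRIVES_15K * IOPS_PER_15K
print(f"raw random IOPS: ~{raw_iops}")          # ~3760

# On RAID 5, each host write costs roughly 4 backend IOs
# (read data, read parity, write data, write parity),
# so a 50/50 read/write mix divides the usable number down.
host_iops = raw_iops / (0.5 + 0.5 * 4)
print(f"usable host IOPS at 50/50 rw: ~{host_iops:.0f}")  # ~1500

If your 400 IOPS per drive is measured against numbers like these, the drives are being pushed well past their comfort zone, which fits the high latency.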
You can see 5-minute average values in the on-node System Reporter (if licensed), e.g. via the CLI (the output can then be copied and graphed in Excel if need be), or directly in the IMC or SSMC interface.

Frontend port usage of the last 24 hours (vary the -btsecs value to show the period you want to see):
srstatport -port_type host -btsecs -24h -rw

Cache usage of the last 24 hrs:
srstatcmp -btsecs -24h

And what the drives are doing:
srstatpd -disk_type FC -btsecs -24h
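If you would rather not push the output through Excel, here is a minimal Python sketch for averaging one column of the pasted CLI output. The column name "IO/s" and the whitespace-separated header/data layout are assumptions on my part; srstat* output differs between firmware versions, so check your own header line and adjust:

Code:
import statistics

# Assumed column header -- verify against your own srstatpd output.
COLUMN = "IO/s"

values = []
header = None
with open("srstatpd.txt") as f:          # file with pasted CLI output
    for line in f:
        fields = line.split()
        if header is None and COLUMN in fields:
            header = fields
            idx = header.index(COLUMN)
            continue
        if header and len(fields) == len(header):
            try:
                values.append(float(fields[idx]))
            except ValueError:
                pass                     # skip separators/summary lines

if values:
    print(f"samples: {len(values)}")
    print(f"avg {COLUMN}: {statistics.mean(values):.1f}")
    print(f"max {COLUMN}: {max(values):.1f}")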
You are welcome to post a few lines so we can analyze what is going on as well. If you do, please also mention the exact specification of your array:
- how many drives (of what size, type and RPM)
- how many cages
- which firmware you are running
_________________
The goal is to achieve the best results by following the client's wishes. If they want a house built upside down, standing on its chimney, it's up to you to figure out how to do it while still making it usable.