I design and engineer datacenter infrastructure for a significant portion of my job—and I have no clue where you're really going with this.
Creating a demonstration scenario is fine, but the same questions that have to be answered in real projects also have to be answered in a demo. What are the customer's requirements? How do we weigh cost, performance, workloads, connectivity, access, serviceability, power, and available space? Each of these, and many others, factors heavily into a well-designed solution.
In other words, we can see you're trying to build something, but without knowing what you're designing for, our answers are just shots in the dark.
Off the cuff, for example, you could do the following:
- 2x Cisco Nexus 7706 for your cores.
- Each with one or two N77-F324FQ-25 modules (24x 40 Gbps ports, each splittable into 4x 10 Gbps ports), plus FCoE licenses.
- Additional modules may be desired, e.g., N77-F348XP-23 for 1/10Gb connectivity.
- 4x Cisco Nexus 2248UPQ fabric extenders as top-of-rack switches, two per server rack with 40 servers per rack; each FEX would uplink to a single N77 core (see the port math sketched after this list).
- 2x Cisco MDS 9250i for your SAN traffic, each terminating 12 FC ports of the 3PAR (one fabric's worth) and carrying that traffic upstream over FCoE to a single N77 core.
- 80x dual-port 10 Gbps CNAs for your servers (e.g., QLogic QLE8362, Emulex OCe14102-UX, or HP CN1100E).
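Off-the-cuff numbers deserve an off-the-cuff sanity check, so here is the port and bandwidth math behind that list (Python used purely as a calculator). The 2248UPQ port counts are real; the one-CNA-port-per-fabric cabling is my assumption about how you'd wire it, not something from your post:

```python
# Back-of-the-envelope math for the parts list above. Facts: a 2248UPQ
# has 48x 1/10G host ports and 4x 40G fabric uplinks. Assumption (mine,
# not from the question): each server lands one CNA port on its rack's
# A-side FEX and one on the B-side FEX, and all four uplinks are cabled.

RACKS = 2
SERVERS_PER_RACK = 40
FEX_HOST_PORTS = 48     # 1/10G SFP+ host ports per 2248UPQ
FEX_UPLINKS = 4         # 40G QSFP+ fabric uplinks per 2248UPQ
HOST_PORT_GBPS = 10
UPLINK_GBPS = 40

ports_per_fex = SERVERS_PER_RACK          # one CNA port per server per FEX
assert ports_per_fex <= FEX_HOST_PORTS, "FEX is out of host ports"

host_bw = ports_per_fex * HOST_PORT_GBPS  # 400 Gbps server-facing capacity
uplink_bw = FEX_UPLINKS * UPLINK_GBPS     # 160 Gbps toward the core
print(f"Per-FEX oversubscription: {host_bw / uplink_bw:.1f}:1")  # 2.5:1

# Each core terminates one FEX per rack, so 2 racks x 4 uplinks each:
print(f"40G ports per N77 core for FEX uplinks: {RACKS * FEX_UPLINKS}")  # 8
print(f"Servers: {RACKS * SERVERS_PER_RACK}, dual-port CNAs needed: "
      f"{RACKS * SERVERS_PER_RACK}")      # 80 and 80, matching the list
```

A 2.5:1 edge oversubscription is tame for general compute; if the (unknown) workload is storage- or east-west-heavy, you'd revisit the uplink count.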
The parts list above is a fairly basic, though modern, configuration that converges LAN and SAN traffic into your cores using FCoE. Wired up and configured correctly, it maintains SAN A/B separation for your storage traffic while delivering LAN traffic to your servers over the same wires.
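Since the A/B part is where converged designs usually go wrong, here is a toy model of the invariant you're wiring toward. All device names are hypothetical; the point is the rule, not the inventory:

```python
# Toy model of the SAN A/B rule in this converged design: each FC
# fabric must ride exactly one core end to end, and the two fabrics
# must never share a core. LAN traffic may use both cores. Every name
# below (cna0, fex-a1, core-A, ...) is illustrative only.

PATHS = [
    # (traffic class, server port, FEX, N77 core)
    ("LAN",   "cna0", "fex-a1", "core-A"),
    ("LAN",   "cna1", "fex-b1", "core-B"),
    ("SAN-A", "cna0", "fex-a1", "core-A"),   # via MDS A on the 3PAR side
    ("SAN-B", "cna1", "fex-b1", "core-B"),   # via MDS B on the 3PAR side
]

def fabrics_separated(paths):
    """True if each SAN fabric maps to exactly one core and the
    A and B fabrics do not share a core."""
    cores_by_fabric = {}
    for traffic, _port, _fex, core in paths:
        if traffic.startswith("SAN-"):
            cores_by_fabric.setdefault(traffic, set()).add(core)
    per_fabric_ok = all(len(c) == 1 for c in cores_by_fabric.values())
    all_cores = set().union(*cores_by_fabric.values())
    return per_fabric_ok and len(all_cores) == len(cores_by_fabric)

print(fabrics_separated(PATHS))  # True; repoint the SAN-B entry at
                                 # core-A and it turns False
```

In practice, that invariant is what you enforce with a dedicated FCoE VLAN/VSAN per fabric and by never cross-connecting the MDS or FEX uplinks between cores.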