Rack Switching or End of Row???
I just finished making a few slides comparing switching architectures in the data center on the LAN side (as much as I love the SAN side, let's leave it out of this discourse for a few minutes). The architectures we deploy when aggregating switches or when building a DC core network seem pretty consistent: most customers use a modular and highly available platform for these functions (I think; feel free to disagree). In the server access layer, however, there are three, or three-and-a-half, predominant designs I see within the data center architecture.

1) When aggregating servers larger than 1RU, or servers with a mix of interface types and densities, I often see an end-of-row design employed, with a larger modular switch (Cat6500) or two deployed there to aggregate the servers. (Do you usually use one or two switches to aggregate them???)

2) When I have 1RU servers stacked 40 high in a rack, one or two 1RU rack switches (like the Catalyst 4948-10G) are often used to aggregate all of these servers with Gigabit Ethernet and then run a couple of 10GbE links back to the aggregation switches. (There's a quick back-of-the-envelope on the oversubscription this implies at the end of the post.)

3) When using blades, most customers seem to deploy blade switches. These are usually home-run back into the aggregation tier, since it is quite hard to fit more than 2-3 blade server chassis into a rack because of thermal constraints.

3.5) Had to have my half… I have seen one other design deployed in larger data centers, using the pass-thru module (or sometimes even the blade switch) aggregated into a series of rack switches. The main reason is so that the rack is a compute entity in its own right: it can be rolled in, wheels locked, powered up, two to four fibers plugged in, and it's online. That really optimizes for ease of deployment in large-scale facilities, but at a cost trade-off.

Some questions I have:

a) How will 10GbE servers be connected into the network? Straight into the aggregation switches? End-of-row switches?

b) Are there other designs and architectures that you commonly see used?

Thanks!

dg
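P.S. The back-of-the-envelope for design #2, as a minimal Python sketch. The numbers are the illustrative ones from above (40 GbE-attached 1RU servers, two 10GbE uplinks per rack switch), not a recommendation; plug in your own densities.

```python
# Rough oversubscription math for a top-of-rack design (design #2 above).
# Assumed numbers: 40 servers at 1 GbE each, rack switch with two 10 GbE uplinks.

def oversubscription(servers: int, server_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to uplink bandwidth."""
    return (servers * server_gbps) / (uplinks * uplink_gbps)

# 40 x 1 GbE into 2 x 10 GbE uplinks -> 2:1 oversubscribed
print(oversubscription(40, 1.0, 2, 10.0))   # 2.0

# Same rack split across two rack switches, each with two 10 GbE uplinks -> 1:1
print(oversubscription(20, 1.0, 2, 10.0))   # 1.0
```

Splitting the rack across two rack switches (the "or two" case above) halves the ratio, which is part of the one-versus-two trade-off in these designs.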