tubbynet (Premium, MVM) | reply to HELLFIRE
Re: [H/W] nexus 6k released...
reminds me of the danse russe...
said by HELLFIRE:
dense compute environments (cisco ucs, et al.) all attached to a single pair of n5k. Dafaq are you moving through the chassis that needs 40GbE uplinks?!?
at this point, the n5k is a tough balancing act. you want high-speed, line-rate, cut-through switching to provide optimum performance to your compute pods -- but to support that density, you need additional uplinks, which takes away your ability to hang "a bunch of servers" off the n5k.
with technologies like fc multihop becoming a reality, it's not just pure ethernet frames running over the wire. you'll need to carry your converged infrastructure back to the aggregation tier -- meaning higher throughput to support lossless fc and high-speed ethernet on the same wire.
buffers in the d/c space carry a much different weight than in the access/campus/wan space. the idea is pure line-rate, cut-through switching until you hit a "step-down" (i.e. a 10gb --> 1gb transition) -- at which point the kit performing the step-down has enough buffer/intelligence and isn't handling a full 10gb of traffic.

anything internal to the dc (even dci) past the access layer shouldn't be heavily oversubscribed -- i.e., you're playing a statistical game with flows to ensure that while you're oversubscribed on pure line-rate, the actual throughput is modestly (if at all) oversubscribed. throughput is your qos mechanism.
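a quick sketch of the line-rate oversubscription arithmetic described above -- port counts and speeds here are illustrative, not from any specific nexus model:

```python
# Hypothetical access-layer oversubscription check.
# All numbers are example inputs, not vendor specs.

def oversubscription_ratio(server_ports: int, server_speed_gbps: float,
                           uplink_ports: int, uplink_speed_gbps: float) -> float:
    """Ratio of downstream line-rate capacity to uplink capacity."""
    downstream = server_ports * server_speed_gbps
    upstream = uplink_ports * uplink_speed_gbps
    return downstream / upstream

# e.g. 48 servers at 10 GbE hanging off 4 x 40 GbE uplinks:
ratio = oversubscription_ratio(48, 10, 4, 40)
print(f"{ratio:.1f}:1 oversubscribed on pure line-rate")  # 3.0:1
```

the statistical game is that actual aggregate throughput from those 48 hosts rarely approaches 480 gb/s, so a 3:1 line-rate ratio can still be near 1:1 in practice.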
And to answer my favorite question about the buffer architecture...
do the math on how many microseconds 16meg is at a full 40gb/s. these buffers are 'skid' buffers at best -- especially on egress.
"...if I in my north room dance naked, grotesquely before my mirror waving my shirt round my head and singing softly to myself..."