HELLFIRE
Premium
join:2009-11-25
kudos:18
reply to tubbynet

Re: [H/W] nexus 6k released...

The Cisco Nexus 6001 Series Switch is a wire-rate Layer 2 and Layer 3, 48-port 10 Gigabit Ethernet (GE) switch with 40 GE uplinks. It is optimized for high-performance, top-of-rack 10 GE server access and Cisco Fabric Extender (FEX) aggregation.

The Cisco Nexus 6004 Series Switch is a high density, wire-rate, Layer 2 and Layer 3, 10 and 40 Gigabit Ethernet (GE) switch.

Dafaq are you moving through the chassis that needs 40GbE uplinks?!?! Then again, I don't do a lot of data center / "cloud" deployments.

And to answer my favorite question about the buffer architecture...

The Cisco Nexus 600x supports a 25-MB packet buffer shared by every 3 ports of 40 Gigabit Ethernet or every 12 ports of 10 Gigabit Ethernet. Of the 25-MB buffer, 16 MB are used for ingress and 9 MB are used for egress buffering.
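
For a rough sense of scale, that shared pool averages out per port like this (my own back-of-envelope in Python, assuming decimal MB -- and since the pool is shared, these are averages, not hard per-port carve-outs):

# Figures from the datasheet text above: 25 MB shared per port group,
# split 16 MB ingress / 9 MB egress. Decimal MB assumed.
POOL_MB, INGRESS_MB, EGRESS_MB = 25, 16, 9

for ports, speed in [(3, "40GE"), (12, "10GE")]:
    print(f"{ports} x {speed}: ~{POOL_MB / ports:.1f} MB total "
          f"({INGRESS_MB / ports:.1f} MB in / {EGRESS_MB / ports:.1f} MB out) per port")

# prints roughly: 3 x 40GE -> ~8.3 MB per port, 12 x 10GE -> ~2.1 MB per port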

Thanks for the info tubbynet!

Regards


tubbynet
reminds me of the danse russe
Premium,MVM
join:2008-01-16
Chandler, AZ
kudos:1

said by HELLFIRE:

Dafaq are you moving through the chassis that needs 40GbE uplinks?!?

dense compute environments (cisco ucs, et al) all attached to a single pair of n5k.
at this point -- the n5k is a tough balancing act. you want high-speed, line-rate, cut through switching to provide optimum performance to your compute pods -- but in order to support the density -- you need to have additional uplinks which takes away your ability to have "a bunch of servers" off the n5k.
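
a rough sketch of that port-budget trade-off in python -- the port counts below are made-up round numbers for illustration, not any specific n5k/n6k sku:

# hypothetical fixed-port access switch: uplinks come out of the same port
# budget, so lowering oversubscription costs you server-facing ports.
TOTAL_PORTS, PORT_GBPS = 48, 10

for uplinks in (4, 8, 16):
    servers = TOTAL_PORTS - uplinks
    ratio = (servers * PORT_GBPS) / (uplinks * PORT_GBPS)
    print(f"{servers} server ports / {uplinks} uplinks -> {ratio:.0f}:1 oversubscription")

# prints: 44/4 -> 11:1, 40/8 -> 5:1, 32/16 -> 2:1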
with technologies like fc-multihop becoming realities, it's not just pure ethernet frames running over the wire. you'll need to support your converged infrastructure back to the aggregation tier -- meaning higher throughput rates to support lossless fc and high-speed ethernet on the same wire.

And to answer my favorite question about the buffer architecture...

buffers in the d/c space carry a much different weight than in the access/campus/wan space. the idea is pure line-rate, cut-through switching until you hit a "step-down" (i.e. a 10gb --> 1gb transition) -- in which case, your kit performing the step-down has enough buffer/intelligence and isn't handling a full 10gb of traffic. anything internal to the dc (even dci) past the access layer shouldn't be heavily oversubscribed (i.e. you're playing a statistical game about flows, etc. to ensure that while you're oversubscribed in pure line rate -- the actual throughput is modestly (if at all) oversubscribed). throughput is your qos mechanism.
do the math on how many microseconds 16meg is at a full 40gb/s. these buffers are 'skid' buffers at best -- especially on egress.
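
taking up that invitation -- a quick back-of-envelope in python, decimal MB assumed, using the figures quoted above:

def drain_time_us(buffer_mb: float, rate_gbps: float) -> float:
    """microseconds to fill or drain buffer_mb at rate_gbps."""
    return buffer_mb * 1e6 * 8 / (rate_gbps * 1e9) * 1e6

print(f"16 MB ingress pool, one 40GE port bursting : {drain_time_us(16, 40):.0f} us")   # ~3200 us
print(f"16 MB ingress pool, all 3 x 40GE bursting  : {drain_time_us(16, 120):.0f} us")  # ~1067 us
print(f"~3 MB egress share (9 MB / 3), at 40 Gb/s  : {drain_time_us(9 / 3, 40):.0f} us")  # ~600 us

so even the whole ingress pool lasts only a few milliseconds at a single port's line rate, and a single port's egress share is gone in well under a millisecond.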

q.
--
"...if I in my north room dance naked, grotesquely before my mirror waving my shirt round my head and singing softly to myself..."