
tubbynet
reminds me of the danse russe
MVM
join:2008-01-16
Gilbert, AZ

[H/W] nexus 6k released...

heard a few rumblings about this from someone i know in cisco-land. started doing some digging. it's impressive.
now -- for the general public's information:

nexus 6001 -- »www.cisco.com/en/US/prod ··· to_N6001

nexus 6004 -- »www.cisco.com/en/US/prod ··· to_N6004

q.

phantasm11b
Premium Member
join:2007-11-02

Very interesting... will have to see what I can drum up on the back end as well. Thanks for the links!!
meta
Member
join:2004-12-27
00000

From my perspective they are an appealing 40gig successor to the 5596UPs: a big, dumb L2 FEX aggregator downstream of the 7ks.
HELLFIRE
MVM
join:2009-11-25

to tubbynet

The Cisco Nexus 6001 Series Switch is a wire-rate Layer 2 and Layer 3, 48-port 10 Gigabit Ethernet (GE) switch with 40 GE uplinks. It is optimized for high-performance, top-of-rack 10 GE server access and Cisco Fabric Extender (FEX) aggregation.

The Cisco Nexus 6004 Series Switch is a high density, wire-rate, Layer 2 and Layer 3, 10 and 40 Gigabit Ethernet (GE) switch.

Dafaq are you moving through the chassis that needs 40GbE uplinks?!?! Then again, I don't do a lot of data center / "cloud" deployments.

And to answer my favorite question about the buffer architecture...

The Cisco Nexus 600x supports a 25-MB packet buffer shared by every 3 ports of 40 Gigabit Ethernet or every 12 ports of 10 Gigabit Ethernet. Of the 25-MB buffer, 16 MB are used for ingress and 9 MB are used for egress buffering.

Thanks for the info tubbynet!

Regards

tubbynet
reminds me of the danse russe
MVM
join:2008-01-16
Gilbert, AZ

said by HELLFIRE:

Dafaq are you moving through the chassis that needs 40GbE uplinks?!?

dense compute environments (cisco ucs, et al.) all attached to a single pair of n5k.
at this point -- the n5k is a tough balancing act. you want high-speed, line-rate, cut-through switching to provide optimum performance to your compute pods -- but in order to support the density -- you need additional uplinks, which take away your ability to hang "a bunch of servers" off the n5k.
with technologies like fc-multihop becoming realities, it's not just pure ethernet frames running over the wire. you'll need to carry your converged infrastructure back to the aggregation tier -- meaning higher throughput rates to support lossless fc and high-speed ethernet on the same wire.
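(a quick back-of-the-envelope sketch of that port tradeoff -- the 96-port count, 10GE everywhere, and the uplink counts below are my assumptions for illustration, not figures from cisco or this thread:)

```python
# rough port-budget math for a 96-port 5596UP used as a fex/server aggregator.
# the 96-port count, 10GE speeds, and uplink counts are assumptions for
# illustration -- not quoted figures.

TOTAL_PORTS = 96
UPLINK_GBPS = 10
SERVER_PORT_GBPS = 10

for uplinks in (4, 8, 16):
    server_ports = TOTAL_PORTS - uplinks
    oversub = (server_ports * SERVER_PORT_GBPS) / (uplinks * UPLINK_GBPS)
    print(f"{uplinks:2d} x 10GE uplinks -> {server_ports} server ports, {oversub:.1f}:1 oversub")

# with 40GE uplinks (the 6k pitch), the same uplink bandwidth costs a quarter of the ports:
print((8 * UPLINK_GBPS) / 40, "x 40GE ports replace 8 x 10GE uplinks")
```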

said by HELLFIRE:

And to answer my favorite question about the buffer architecture...

buffers in the d/c space carry a much different weight than in the access/campus/wan space. the idea is pure line-rate, cut-through switching until you hit a "step-down" (e.g. a 10gb --> 1gb transition) -- at which point the kit performing the step-down has enough buffer/intelligence and isn't handling a full 10gb of traffic. anything internal to the dc (even dci) past the access layer shouldn't be heavily oversubscribed (i.e. you're playing a statistical game about flows, etc. to ensure that while you're oversubscribed in pure line rate -- the actual throughput is modestly (if at all) oversubscribed). throughput is your qos mechanism.
do the math on how many microseconds 16meg is at a full 40gb/s. these buffers are 'skid' buffers at best -- especially on egress.
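(for the lazy -- a minimal sketch of that math, using only the 16 MB / 9 MB split quoted above and assuming decimal megabytes:)

```python
# drain/fill time for the quoted n600x buffer split at 40GE line rate.
# assumes decimal megabytes and the 3 x 40GE port-group sharing described
# in the datasheet excerpt above.

INGRESS_MB = 16          # ingress share of the 25 MB pool
EGRESS_MB = 9            # egress share
PORTS_PER_GROUP = 3      # 40GE ports sharing one pool
LINE_RATE_GBPS = 40

def buffer_time_us(megabytes, rate_gbps):
    """microseconds to fill (or drain) `megabytes` at `rate_gbps`."""
    bits = megabytes * 8e6
    return bits / (rate_gbps * 1e9) * 1e6

print(buffer_time_us(INGRESS_MB, LINE_RATE_GBPS))                   # ~3200 us if one port gets the whole ingress pool
print(buffer_time_us(EGRESS_MB / PORTS_PER_GROUP, LINE_RATE_GBPS))  # ~600 us per-port egress share
```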

q.
tubbynet
MVM

to meta
said by meta:

From my perspective they are an appealing 40gig successor to the 5596UPs: a big, dumb L2 FEX aggregator downstream of the 7ks.

for the most part.
while i haven't seen anything off of bock-bock -- i'd assume that the 6004 has some sort of primitive l3 capability. depending on the limitations -- i can see them being small, dense aggregation layers for your compute clusters -- maybe spanning a pod or two.
6001 access to 6004 agg to 9010 core. sounds compelling.

q.
meta
Member
join:2004-12-27
00000

Tubby, as they cited low latency (cut-through) for high-frequency trading, you can count on it having one of the world's smallest TCAM tables ever.

Imagine having to check N layers deep to find egress forwarding information for every packet; doing that 40 gazillion times per second limits how big the table can be.
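To put a rough number on "gazillion" (a sketch assuming minimum-size 64-byte frames on a single 40GE port; not a vendor figure):

```python
# worst-case lookup rate on one 40GE port with minimum-size Ethernet frames.
# 64 B frame + 8 B preamble + 12 B inter-frame gap = 84 B = 672 bits on the wire.

LINE_RATE_BPS = 40e9
BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8

pps = LINE_RATE_BPS / BITS_PER_MIN_FRAME
print(f"{pps / 1e6:.1f} Mpps")            # ~59.5 Mpps on a single port
print(f"{1e9 / pps:.1f} ns per lookup")   # ~16.8 ns budget per packet
```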

The Gumball platform is similarly crippled with ~3000 effective ipv4 routes in the forwarding table when in "warp" mode.

"Warp mode: For those customers with smaller environments who demand the lowest latencies possible, warp mode consolidates forwarding operations within the switching ASIC, lowering latency by up to an additional 20 percent compared to normal operation. In this mode, latencies as low as 190 ns can be paired with the smaller of the Layer 2 and Layer 3 scaling values listed later in this document in Table 5."

As an added benefit, "ipv6" does not appear on the gumball datasheets... anywhere... any of them... Another victim of the quest for lower latency through the switch.
aryoba
MVM
join:2002-08-22

We have been trying to get the Nexus 3K (which Cisco claims is the low-latency switch), to no avail. Apparently Cisco has only produced very few of them, and every time a customer wanted to benchmark or demo one, Cisco had to wait until another customer had completed their testing on the switch. In other words, the very same switch has been circulating between customers, and none of those customers were ever able to buy it due to the limited production run.

I wonder if the same thing will happen with this Nexus 6K.

tubbynet
reminds me of the danse russe
MVM
join:2008-01-16
Gilbert, AZ

to meta
said by meta:

Tubby, as they cited low latency (cut-through) for high-frequency trading, you can count on it having one of the world's smallest TCAM tables ever.

Imagine having to check N layers deep to find egress forwarding information for every packet; doing that 40 gazillion times per second limits how big the table can be.

The Gumball platform is similarly crippled with ~3000 effective ipv4 routes in the forwarding table when in "warp" mode.

3 -- not disputing that it will have limited viability.
however -- if you're requiring low-latency bits -- then what good is using a low-latency platform for access -- just to have that gain lost in the agg layer?
sure -- it's crippled. sure -- it has small tcam. sure -- they bill it as the next big thing until you look behind the curtain.

however -- if you can keep your tables small -- keep the 6004 as a small pod agg -- keep only a default route up to your core -- you may be in good shape.

i'm not saying it's a panacea. cisco has to walk a fine line about where this platform is to be positioned. make it too kick-ass -- you'll piss off four years of n7k install base.
make it too sucktastic -- you'll never recoup anything near your r&d costs.

i can see a use for it. not saying it's for everyone -- but i can see a use.

q.

sk1939
Premium Member
join:2010-10-23
Frederick, MD

I'm still wondering if this will compete with the N5k (we just added a handful for top-of-rack/core use).

tubbynet
reminds me of the danse russe
MVM
join:2008-01-16
Gilbert, AZ

said by sk1939:

I'm still wondering if this will compete with the N5k (we just added a handful for top-of-rack/core use).

the 6001 -- most definitely.
the 6004 can as well -- it will be a very dense 40gb aggregator -- but given the throughput through the box, it can also be used as a small aggregation layer.
it's all based on use case, price, and the features required.

you'll want to look at the deeper details (mac limits, routing, tcam, etc.) to see whether it supports what you need it to.

just because it's newer doesn't always mean it's better.

regards,
q.
HELLFIRE
MVM
join:2009-11-25

to tubbynet
@tubbynet
Already admitted I don't do datacenter / "cloud" deployments

As for the buffer question, most of my admittedly limited experience is with the X6148-RJ45 vs. X6748-GE-TX
modules, and any poor connectivity with end hosts usually starts with checking for X6148-RJ45s and
buffer drops.

Regards

tubbynet
reminds me of the danse russe
MVM
join:2008-01-16
Gilbert, AZ

said by HELLFIRE:

@tubbynet
As for the buffer question, most of my admittedly limited experience is with the X6148-RJ45 vs. X6748-GE-TX
modules, and any poor connectivity with end hosts usually starts with checking for X6148-RJ45s and
buffer drops.

6148 is an older, bus-style card. it's capable as an end-user access-layer card, but for anything with 'oomph' you're going to need to upgrade.
the linecard itself has to be mux'd -- a classic 6gbps interconnect on a potential 48gbps card. you've got an 8:1 mux; each group of 8 ports shares a single 1gbps "link" to the bus/fabric and a single 1mb buffer for queuing. in this regard -- you're going to have potential oversubscription for the same reasons i outlined in my earlier post -- you've got to shove a lot of shit down a small pipe.
compare this to the 6748, where you're interconnecting at 40gbps and you have 1.3mb of per-port buffering. you've got 1.2:1 oversub across the entire blade -- much less chance of contention.
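(sanity check on those ratios -- a minimal sketch using only the per-card numbers above:)

```python
# oversubscription math for the two linecards above, using only the
# per-card numbers cited in this thread.

def oversub(ports, port_gbps, interconnect_gbps):
    """ratio of front-panel bandwidth to bus/fabric interconnect bandwidth."""
    return (ports * port_gbps) / interconnect_gbps

# ws-x6148: 48 x 1GE, six 8-port groups each sharing ~1gbps to the bus (~6gbps total)
print(oversub(8, 1, 1))     # 8.0 -> the 8:1 mux per port group
print(oversub(48, 1, 6))    # 8.0 -> same ratio across the whole card

# ws-x6748: 48 x 1GE onto a 40gbps fabric connection
print(oversub(48, 1, 40))   # 1.2 -> the 1.2:1 figure
```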

again -- it's all about knowing the architecture of the card to position the best options possible for your customer while keeping the cost within budget.

q.
HELLFIRE
MVM
join:2009-11-25

Thanks for the technical lesson there tubbynet. Like I said, that's the extent of my experience, and ONLY with those
two linecards where I work.

Speaking of architecture, do you have any further comment on this datasheet from Cisco? On the surface, it looks like
you'd want to use the WS-X6524-100FX-MM, WS-X6548-RJ-45, WS-X6148A-RJ-45 or WS-X6148-FE-SFP for high-traffic-volume
10/100Mb access ports. As for the 10/100/1000Mb arena, how do the WS-X6148A-GE-TX / WS-X6148A-GE-45F
stack up overall?

Regards

tubbynet
reminds me of the danse russe
MVM
join:2008-01-16
Gilbert, AZ

buffers and queues are only important when you know the full story.
just looking at queueing depth and choosing a linecard on that basis can give you a false sense of "working".

this white paper on catalyst 6500 architecture will give you a little better insight into the differences between each of the linecard series -- 6100, 6300, 6500, 6700, etc.
this is just as important as buffering -- as all the buffers in the world won't help if you're always oversubscribed.

q.
HELLFIRE
MVM
join:2009-11-25

to tubbynet
I've read that article once or twice, but not to any great depth yet. Rule of thumb... 6x00-series linecard, the higher X is, the higher the bandwidth?

I'm also trying to get familiar with the bandwidth limitations of the SUPs themselves -- 720 is 40Gbps per slot, and 2T is 80Gbps
per slot. To say nothing of the whole headache of keeping in mind Nexus and "Fabric Modules." Well, that's why types like us
draw a paycheck...

Regards

tubbynet
reminds me of the danse russe
MVM
join:2008-01-16
Gilbert, AZ

said by HELLFIRE:

Rule of thumb... 6x00-series linecard, the higher X is, the higher the bandwidth?

meh. close. it's not always 100% foolproof, as a given linecard series may vary in its mode of operation (6700 and 6800 series especially).

from the whitepaper:
quote:
Cisco Catalyst 6500 Line Cards
The lineup of Cisco Catalyst 6500 line cards provides a full complement of media and speed options to meet the needs of deployment in the access, distribution and core layers of the network. Each line-card slot provides a connection into the 32-Gbps shared bus and the crossbar switch fabric (if either a Supervisor Engine 720 or switch fabric module is present). Cisco Catalyst 6500 line cards fall into one of four general families of line cards:

• Classic: In this mode the line card has a single connection into the 32-Gbps shared bus.

• CEF256: The line card in this mode supports a connection into the 32-Gbps shared bus and the switch fabric; these line cards will use the switch fabric for data switching when the Supervisor Engine 720 is present; if a Supervisor Engine 32 is present it will revert back to using the 32-Gbps shared bus.

• CEF256 and CEF720: The line card in this mode supports a connection into the 32-Gbps shared bus and the switch fabric; these line cards will use the switch fabric on the Supervisor Engine 720 for data switching.

• dCEF256: These line cards require the presence of the switch fabric to operate; these line cards do not connect into the shared bus.

• dCEF720: Like the dCEF256 linecards, they only require the switch fabric to be present to switch packets. They connect into the switch fabric channels at 20Gbps, as opposed to the 8Gbps at which the dCEF256 linecards connect.

keep these modes of operation in the back of your head. also keep in mind the difference between bus and fabric interconnects.

as you look at each card -- and its mode of operation/interconnect -- you start seeing patterns in the different series of cards and how they interact with each other (yes -- what operationally exists is just as important as what you're buying).
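(a cheat-sheet distilled from the whitepaper excerpt above -- note i'm splitting the "CEF256 and CEF720" bullet into its CEF720 half, and the CEF256 per-channel rate is the commonly cited 8gbps figure, my assumption rather than something stated in the excerpt:)

```python
# interconnect summary by c6k linecard family, distilled from the whitepaper
# excerpt above. the CEF256 per-channel rate is a commonly cited figure and
# an assumption here -- check the card-specific datasheets before buying.

FAMILIES = {
    # family:   (connects to 32-Gbps shared bus, fabric channel rate in Gbps or None)
    "Classic":  (True,  None),
    "CEF256":   (True,  8),
    "CEF720":   (True,  20),
    "dCEF256":  (False, 8),
    "dCEF720":  (False, 20),
}

for family, (has_bus, fabric_gbps) in FAMILIES.items():
    bus = "32-Gbps shared bus" if has_bus else "no shared-bus connection"
    fabric = f"{fabric_gbps}-Gbps fabric channel(s)" if fabric_gbps else "no fabric connection"
    print(f"{family:>8}: {bus} + {fabric}")
```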

this is why the c6k needs to go away. while the limitations of the chassis are documented -- it's such a finicky platform, with limitations that track the state of the technology at each interval of production. it's a swiss army knife -- it'll whittle a stick, pick your teeth, and saw some small limbs -- but don't expect to skin a deer with it.

said by HELLFIRE:

I'm also trying to get familiar with the bandwidth limitations of the SUPs themselves -- 720 is 40Gbps per slot, and 2T is 80Gbps
per slot. To say nothing of the whole headache of keeping in mind Nexus and "Fabric Modules." Well, that's why types like us
draw a paycheck...

the sup throughput limitations are just a byproduct of the way the sup interfaces with the backplane fabric. nothing magical there. the magic of the sup is in the pfc and what version you're running (if you're looking at s720/s2t). each pfc has slightly different features/improvements (pfc3a, pfc3b, pfc3c) over the previous generation. s2t runs pfc4 -- which is consistent with the nexus -- so there is tighter parity of support between the two (within reason). the pfc is what does all of the "magic" (qos, acl, etc.) -- and these limitations are just as important as the linecard limits -- because the c6k will shit the bed quite spectacularly if you exceed the hardware support within the pfc and it's forced to punt excessive amounts of traffic.
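(the per-slot numbers you quoted fall out of simple channel math -- the two-channels-per-slot breakdown below is the commonly cited figure, my assumption rather than something stated in this thread:)

```python
# where the per-slot bandwidth figures come from: each linecard slot gets a
# number of crossbar fabric channels at a fixed per-channel rate. the
# two-channels-per-slot breakdown is an assumption here, not datasheet gospel.

FABRIC = {
    "Sup720": {"channels_per_slot": 2, "gbps_per_channel": 20},
    "Sup2T":  {"channels_per_slot": 2, "gbps_per_channel": 40},
}

for sup, f in FABRIC.items():
    total = f["channels_per_slot"] * f["gbps_per_channel"]
    print(f"{sup}: {total} Gbps per slot")   # 40 and 80, matching the numbers quoted above
```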
this is to say nothing of the limitations of copp, etc. -- which can also have undue effects if done incorrectly -- affecting even traffic required for production (e.g. glean traffic for arp queries, etc.).

the c6k is a terribly idiosyncratic box that, if you know it well, can perform like a rockstar. if you don't -- it's going to be a long trip.

q.
HELLFIRE
MVM
join:2009-11-25

Got some light reading to do then, tubbynet. Thanks man!

Regards

sk1939
Premium Member
join:2010-10-23
Frederick, MD
ARRIS SB8200
Ubiquiti UDM-Pro
Juniper SRX320

to tubbynet
said by tubbynet:

the c6k is a terribly idiosyncratic box that, if you know it well, can perform like a rockstar. if you don't -- it's going to be a long trip.

Agreed, although the last 6500 I worked with "only" had a 720-3CXL. So far I like the Nexus line a lot better, but the drawbacks are the cost and lack of general knowledge out there of NX-OS for some of the more complex things like dealing with an F2 linecard on a chassis with a Sup 1.

tubbynet
reminds me of the danse russe
MVM
join:2008-01-16
Gilbert, AZ

said by sk1939:

lack of general knowledge out there of NX-OS for some of the more complex things like dealing with an F2 linecard on a chassis with a Sup 1.

there is a good deal of stuff out there on cco. additionally -- there has been an abundance of slicks produced for live! every year since the nexus line started taking off.
the f2 with a sup1 isn't difficult. there are limitations you need to know about when it comes to mixing linecards within a given vdc -- but it's pretty well documented online.

however -- i've been working with the n7k line for nearly four years now -- so i'm pretty used to moving around the chassis.

regards,
q.

sk1939
Premium Member
join:2010-10-23
Frederick, MD
ARRIS SB8200
Ubiquiti UDM-Pro
Juniper SRX320

said by tubbynet:

said by sk1939:

lack of general knowledge out there of NX-OS for some of the more complex things like dealing with an F2 linecard on a chassis with a Sup 1.

there is a good deal of stuff out there on cco. additionally -- there has been an abundance of slicks produced for live! every year since the nexus line started taking off.
the f2 with a sup1 isn't difficult. there are limitations you need to know about when it comes to mixing linecards within a given vdc -- but it's pretty well documented online.

however -- i've been working with the n7k line for nearly four years now -- so i'm pretty used to moving around the chassis.

regards,
q.

Yeah, there are some stupid quirks I found: some obvious, like F1 and M1 having to be in different VDCs, and some less obvious (and frankly rather annoying), like FCoE on the F2 card seeming to require a Sup2 -- and some are saying licensing as well.

I admit I have never been to Cisco Live!, but I'll take a look and see on CCO. You've been at it longer than I have; I'm coming up on a year and a half since I even heard about Nexus (hands-on for about 8 mos).