

yaplej
Premium
join:2001-02-10
White City, OR

[H/W] What sup for a 6500?

I am trying to get a 6506-E to put in our lab at work. We don't even have these on our production network yet, but I am going to push hard over the next year to get them. I wanted to get some hands-on experience before we (meaning I) deploy them.

There seem to be a few different options for the sup720: we could go with the original sup720(A), the sup720-3B, or the sup720-3BXL.

Because this would be just for a lab, is there any need to go for the sup720-3B? The only thing that might be nice about having the 3B is that when it's time to put them in production, that will probably be the sup we go for, so having a spare in the lab isn't a bad idea.

That is, unless there is some issue or limitation that might make a case for going with the sup720-3BXL when we start looking at putting these in production. There does not seem to be a huge difference between the 3B and 3BXL. I mean, 500,000 vs 1,000,000 routes is a big difference, but both are far more than we will need in the next "few" years.
--
sk_buff what?

Open Source Network Accelerators
»www.trafficsqueezer.org
»www.opennop.org


aryoba
Premium,MVM
join:2002-08-22
kudos:6
Before deciding on the sup to use, are you sure the lab power outlets support the power supply requirements? Depending on what you are testing, you could need 6000W for just one power supply.

paarlberg

join:2000-07-28
Duluth, GA
reply to yaplej
I would try to mirror the live network in the lab. If you have the same SUP, then you can use it as a backup/spare/live in the future.

The biggest issue will be if you want to run dual sup engines. We found that a SUP720-3BXL and a SUP720-3B that had been upgraded to a BXL were not compatible in the same chassis, even though they were both 3BXLs at that point. So be careful with these.

The difference in price is substantial between the SUP720, B, BXL, and even the CXL.

All of the above will require a FAN2. We have dual SUP720-3BXLs and 2-4 modules installed with dual 2500W power supplies. If you were looking at a 6509 or 6513 chassis and populating a lot of the slots, then I would look at the 4000W or 6000W power supplies.

I would definitely get the max mem/flash in any of the configs you go with.


yaplej
Premium
join:2001-02-10
White City, OR
reply to yaplej
We won't be running any PoE line cards, so from everything I have read the minimum power supply would be the 2500W. I have not figured out exactly how many line cards that 2500W can power.

The line cards would probably be WS-X6748-GE-TX and/or the 4-8 port 10G line cards.
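A rough power-budget check can be sketched along these lines. All the wattage figures below are illustrative placeholders, not official Cisco numbers; a real sizing exercise should use Cisco's power calculator and the module data sheets.

```python
# Rough 6506-E power-budget sketch. Every wattage here is an assumed
# placeholder, not an official figure -- use Cisco's power calculator
# for real planning numbers.
PSU_WATTS = 2500  # single 2500W supply, no redundancy assumed

draw = {
    "SUP720-3B": 350,        # assumed supervisor draw
    "WS-X6748-GE-TX": 300,   # assumed 48-port GE card draw
    "8-port 10G card": 450,  # assumed 10G card draw
    "FAN2 tray": 100,        # assumed fan tray draw
}

total = sum(draw.values())
print(f"estimated draw: {total} W of {PSU_WATTS} W")
print(f"headroom: {PSU_WATTS - total} W")
```

The same structure extends naturally: add a dict entry per module and check the total against the supply rating (and against the per-slot power budget, which this sketch ignores).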

paarlberg

join:2000-07-28
Duluth, GA
I am thinking that ours have 2x 6516A and a 6748-GE, plus the dual 3BXL SUPs. They run fine with a 2500W.

Just remember that the 6748-GE does have some bandwidth limitations due to its bus architecture, so don't expect to get 96 Gbps out of it in full duplex.
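The arithmetic behind that limitation is easy to sketch, assuming the commonly cited dual 20-Gbps fabric channels on the WS-X6748 (verify against the data sheet for your hardware revision):

```python
# Oversubscription sketch for a 48-port GE card with 2 x 20 Gbps
# fabric channels -- the commonly cited WS-X6748 figures, taken here
# as an assumption rather than gospel.
ports = 48
port_gbps = 1          # GE line rate per direction
fabric_gbps = 2 * 20   # dual fabric channels, per direction

wire_rate = ports * port_gbps   # 48 Gbps offered toward the fabric
ratio = wire_rate / fabric_gbps # how oversubscribed the card is
print(f"{wire_rate} Gbps offered vs {fabric_gbps} Gbps fabric -> {ratio}:1")
```

So even in the best case the card is mildly oversubscribed, and the 96 Gbps marketing number (48 ports x 2 Gbps full duplex) is only reachable for traffic that never crosses the fabric.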


yaplej
Premium
join:2001-02-10
White City, OR
I think the WS-X6748-GE-TX is fabric enabled and supports the distributed forwarding daughter card WS-F6700-DFC3A. It should not be using the bus with this line card.

Maybe I am reading the spec sheet wrong. I hate how they put all the line cards on a single spec sheet, so it appears like they all do everything that only the most expensive line cards can.

paarlberg

join:2000-07-28
Duluth, GA
It is fabric enabled. The limitation relates to groups of 8 ports sharing a section of buffer memory, so you have 6 groups of 8 ports each sharing buffer memory. Below is the info:

Actually, it might be the 6548-GE and not the 6748-GE that we have.

»www.cisco.com/en/US/prod/collate ··· eet.html

nosx

join:2004-12-27
00000
kudos:5
Dumb question, but what is the use case for these?

At work we are getting ready to retire them from our approved hardware list. They will be replaced with Nexus 7Ks for datacenter environments and 4510R+Es for the enterprise, both of which are cheaper than the 6500 and do a better job in their respective roles.

Our architecture team's roadmap has them falling off the list sometime in Q2 after NX-OS 5.2 is released and certified. The only reason they waited this long was for MPLS support to be fully baked into NX-OS.


yaplej
Premium
join:2001-02-10
White City, OR

1 edit
Doing some more reading on the 6500E, I really don't like the whole 40Gbps-per-slot limit. I assume that's not full duplex either.

This seems to hint that it's 40Gbps full duplex, so actually 80Gbps per slot. Maybe someone here can confirm either way. »etherealmind.com/cisco-superviso ··· ormance/

So even with the fabric-enabled line cards, it's going to be oversubscribed from line card to line card.
»www.cisco.com/en/US/prod/collate ··· 385.html

It would not be extremely difficult to design around those limitations: for example, only using two 10G ports in a single slot for uplinks between two 6500Es, then placing all server interfaces on one line card and all iSCSI interfaces on another, with DFCs in them. At least then the majority of the traffic would be localized to its respective line card.

The 4500E has been upgraded considerably, from 8Gbps to 48Gbps per slot, but I assume that's still not full duplex from the looks of it, so it's still oversubscribed. Unless it is full duplex, in which case the 4500 has surpassed the 6500 in performance. That seems a little wrong, it being a lower model number and all. Maybe the 6500 needs some engineering love and a re-release as the 6500E2 or something. Not that we would be getting a new device though.



yaplej
Premium
join:2001-02-10
White City, OR
Looks like the 6500E supports up to 80Gbps per slot with the sup2t. At least the 6513E and 6504E will support the 80Gbps line cards. Is that full duplex though? I can't find anything to indicate whether the fabric is full or half duplex.
»www.cisco.com/en/US/prod/collate ··· 410.html
»www.cisco.com/en/US/prod/collate ··· eet.html

If the sup720 fabric is 40Gbps full duplex, it's not as oversubscribed as I originally thought. If it's only 40Gbps half duplex, well, then bummer.
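The difference between the two readings can be made concrete with simple arithmetic; the figures below assume a "40 Gbps" slot and an 8-port 10G card as a worked example, nothing vendor-authoritative:

```python
# How much an 8 x 10G card oversubscribes a "40 Gbps" slot under the
# two possible readings of that number (illustrative arithmetic only).
card_gbps = 8 * 10  # 80 Gbps of front-panel capacity, per direction

# Reading 1: "40 Gbps" means 40 Gbps in each direction (full duplex).
full_duplex_ratio = card_gbps / 40   # 2:1 oversubscription

# Reading 2: "40 Gbps" is the aggregate of both directions,
# i.e. only 20 Gbps each way.
half_duplex_ratio = card_gbps / 20   # 4:1 oversubscription

print(f"full-duplex reading: {full_duplex_ratio}:1")
print(f"half-duplex reading: {half_duplex_ratio}:1")
```

Either way the slot is oversubscribed with a dense 10G card; the question is only whether it is 2:1 or 4:1.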


paarlberg

join:2000-07-28
Duluth, GA
Cisco usually lists the full-duplex totals for ports and slots. So a GE interface will be 2 Gbps, and FE is 200 Mbps.
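Under that counting convention, the per-port totals work out as follows (plain arithmetic on standard Ethernet line rates):

```python
# "Full duplex total" convention: line rate in each direction summed.
rates_mbps = {"FE": 100, "GE": 1000, "10GE": 10000}

for name, mbps in rates_mbps.items():
    print(f"{name}: {2 * mbps} Mbps full-duplex total")
```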

nosx

join:2004-12-27
00000
kudos:5
Regarding the performance of the 65 vs the 45: we tested the 4510R+E with SUP7s, and wow, what a heavy hitter. It definitely outperforms the 6500 in an ENTERPRISE environment. I wouldn't push them for datacenters due to feature support for current and upcoming technology shifts.

The SUP2T is supposedly going to permit greater fabric speeds (probably provided you purchase all new line cards) and fix a lot of long-standing architectural issues. However, it's still at least a year from first customer shipment based on our latest meetings.

The NX platform wins for datacenter environments due to greater speed, 10-gig density, and current/future feature support that the 6ks won't get. The 4500 wins at enterprise due to PoE density, supervisor and software architecture, and price.

The 6500 was a great all-purpose multilayer switch, but due to age, architecture, and price versus the comparable role-specific switches, it's losing its place in more companies.


yaplej
Premium
join:2001-02-10
White City, OR
reply to yaplej
Generally we won't get any latest-generation technology due to the cost. So looking at the latest SUP7E in the 4500E chassis isn't really an option when we could build out all three locations with dual 6506Es + SUP720s for the price of one 4500E + SUP7E. So if we limit the comparison to last-generation 6500Es and 4500Es, I think the 6500E + SUP720 at 40Gbps/slot wins over the 4500E's best case of 24Gbps/slot.

If we start looking at future/latest-generation stuff, there is the upcoming SUP2T that would bring the 6500E to 80Gbps/slot, so it seems the 6500E does have some more life left in it. Not that we would be getting that anytime in the next 10 years.

These will basically be our collapsed core/distribution layer in our server rooms. We are in the process of building out a dual WAN composed of an MPLS network with DS3 circuits between our three primary locations and a 100M MAN as a backup for those sites. The rest of our network will remain on the MPLS due to some availability issues our MAN provider is trying to get worked out.

I wanted to get an update to our switching core on the table, as our current switching core is, ummm, well, Dell.



nosx

join:2004-12-27
00000
kudos:5
While I disagree with your price comparison, I don't know how you are buying this equipment (I'm assuming it's a bunch of used stuff based on your comparison).

The requirements of network equipment today are not really comparable to what they were 10 years ago, when a lot of the current architecture was cooked up.

When you start talking about widescale voice over the network, QoS is no longer optional. It is required end to end.

With video comes the requirement of supporting multicast across the corporate network. Multicast video is cheaper than upgrading circuits and saves on travel budgets.

IPv6 brings into consideration which deployed platforms support it in hardware. A very large customer demanded we have IPv6 deployed by this fall.

The same holds true with path-virtualization technology such as MPLS to separate out environments for different business units or customers.

I understand the desire to do things cheap, but just make sure that it's not going to cripple the long-term business goals, or your competition will end up looking far superior on every call.


tubbynet
reminds me of the danse russe
Premium,MVM
join:2008-01-16
Chandler, AZ
kudos:1
reply to yaplej
said by yaplej:

If we start looking at future/latest-generation stuff, there is the upcoming SUP2T that would bring the 6500E to 80Gbps/slot, so it seems the 6500E does have some more life left in it. Not that we would be getting that anytime in the next 10 years.

i'll reiterate what nosx has said -- the 6500e's days are numbered. at this point, once nx-os reaches feature parity with the c6k, there will be no use for it. if cisco actually updates the service modules to take advantage of the sup2t, i can see it being relegated to a services chassis in datacenter environments.

i look at it like this -- your c6k is going to be upgraded to 80gbps/slot. upgraded. n7k does that out the door now. sup2t is going to be earl8 based (iirc), so you're going to have pfc4 and (hopefully) fewer idiosyncrasies with qos, etc. n7k does that now. to take advantage of distributed forwarding on the c6k, you've got to run 65xx or 67xx series line cards (or better) and add on the dfc. all nexus linecards are dfc enabled out the door. add in additional redundancy, rebuild ability, and nifty knobs, and nexus is a no-brainer at this point -- especially when you throw in software modularity and the fact that the nexus lifecycle is just starting (whereas the c6k is near the end). that makes it a sound business decision.

q.
--
"...if I in my north room dance naked, grotesquely before my mirror waving my shirt round my head and singing softly to myself..."


yaplej
Premium
join:2001-02-10
White City, OR
reply to yaplej
We do get all our equipment used, so I imagine that creates a drastic difference in our comparisons. A pair of 6500Es would be a major improvement on what we currently have, and it will still be a stretch to even get them.

I was able to pick up a 6506E chassis for less than $400 shipped, so I jumped on that, and I am selling some of my own lab gear to get a SUP720 and a line card to play around with. That's what led to the question. There seemed to be some features the standard SUP720 was missing, so I was leaning more toward the SUP720-3B, but I wanted some input from others who might have used them. The smaller routing table of the 3B is no problem, and there is no IPv4/IPv6 performance difference from the 3B to the 3BXL except the number of routes.

What we call our "data centers" are far from what you probably picture. These are really much smaller server rooms with fewer than 10 cabinets, but things have been going OK, and the network has been becoming more of a critical piece of our business as more services rely on our network, not just on our partner and vendor networks. Downtime is becoming less tolerable as our employees now rely more on the network to do their daily jobs, so the dual WAN is going to be a huge improvement.

I guess we would be somewhere between a medium business and a small enterprise, depending on what definitions you use. In the next three to six months we will be around 250+ employees, 48 locations, and three "data centers," using the term loosely now. Still, we don't have an IT budget, so non-IT management makes the calls as to what we can/cannot do, and going used has allowed us to do much more with less $$$.

We are not looking at "unified fabric" or anything like that; we don't even have a fiber SAN to deal with, so given our rather small set of requirements I imagine the 6500E would be more than adequate for us. There are some hardware limitations in our current switching core that bug me to no end, which the 6500E would more than resolve.

I hope that gives a better image of where I am coming from and trying to go with this.