[Other] Fast Hub Network? How can this be?

OK, I'm no beginner when it comes to networking, but not an expert either. I understand the fundamental differences between hubs and switches, but this one has me perplexed. When I was at college (~9 years ago), the campus had a Class B network (147.16.x.x, if I remember correctly). Before I graduated they upgraded the entire system to Cisco equipment with high-speed fiber rings and gigabit capability. But the reason I ask this question is that before that was put in, they had the most interesting setup, and to this very day I don't understand how it worked as well as it did.

All the buildings on campus had LOTS of these Linksys EF2H16 16-port 10/100 hubs everywhere. The dorm I was in had 10 across all the floors, half of them daisy-chained together via UTP patch cables, with the topmost one connected to a fiber converter ("canary boxes," they called them), which then ran to one building on campus; in its basement was a Cisco 24- or 48-port fiber switch, if I remember correctly. I was told by someone on site that they had only one big switch on the entire campus and the rest was hubs. I never entirely understood their ISP connection; I know it was optical, but I don't remember if it was OC-3 or something else (they kept that a secret for some reason after I asked). On top of that, I worked for IT Student Desktop Support at one point. The big room we were in was in the basement of one building, and in it we had as many as 7 hubs daisy-chained together with UTP cable between them. The uplink ran up to some pipes along the ceiling and into another room. Before the cable went into the next room, there were two Linksys 5-port hubs dangling from the pipes; I was told we were connected to one, and the other served another subnet.

At any rate, the reason I bring this up is that after all of the above, the network (both LAN and WAN) was actually very fast, and I could never understand why, especially with all those hubs inline and a VERY limited number of switches in the setup. You would think that all those hubs, with all the students, professors, and faculty, would clog the network with chatter from every node trying to get out to the Internet and connect to mail and remote share drives.

Any ideas as to how they did this? Modifying the configuration on the Cisco switch so the MTU is smaller? Packet shaping? QoS? Any ideas how one would go about setting something like this up? I used to know the campus network admin but never got to ask him how the network was set up or how he got it so finely tuned.

Don't get me wrong, there were LOTS of collisions on those hubs. I was able to gain access to the network closet in my dorm, and one night, while everyone in the building was still up and naturally online doing work or other activities, I watched some of the hubs in action. Even with collisions showing on the indicator lights, people were still able to watch online videos, download music, surf the Internet, use network shares, etc.
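For anyone curious why a collision-heavy hub segment still moves traffic: collisions only trigger retries, and as long as the offered load per station stays moderate, a healthy fraction of transmissions still get through. A toy slotted-medium Monte Carlo illustrates the idea (a rough sketch, not a faithful CSMA/CD model; the station count and attempt probability are made-up illustration values):

```python
import random

def simulate(stations, attempt_prob, slots=100_000, seed=1):
    """Slotted shared-medium model: each station transmits in a slot
    with probability attempt_prob; exactly one transmitter = success,
    two or more = collision (frames lost, retried in a later slot)."""
    rng = random.Random(seed)
    success = collision = 0
    for _ in range(slots):
        transmitters = sum(rng.random() < attempt_prob for _ in range(stations))
        if transmitters == 1:
            success += 1
        elif transmitters > 1:
            collision += 1
    return success / slots, collision / slots

# 16 stations on one hub, each offering ~3% load per slot:
util, coll = simulate(16, 0.03)
print(f"useful slots: {util:.0%}, collision slots: {coll:.0%}")
```

With those numbers, roughly 30% of slots carry a successful frame while under 10% are collisions, which matches the blinking-collision-light-but-still-usable experience described above.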

I don't plan on setting one of these networks up, it just baffles me how it ever worked.

Any thoughts are appreciated!

PowerOf I.T.
"Ahh....the power of I.T."

San Jose, CA
·AT&T U-Verse


9 years ago a 3 Mbps DSL line would have been "blazingly fast," so the fiber backbone with its OC-3 uplink (or whatever it was; more likely a T3) seemed incredible. Enough to mask the inefficient hubs on the LAN. And yes, there would be collisions for everything plugged into a hub.

But 9 years ago there were also a lot fewer Internet apps than there are today. Heavy web use was still relatively new, so not as many devices (or apps on each device) were vying for bandwidth.

Tsar of all the Rushers

Greenwood, MS


reply to PowerOfIT
»downloads.linksys.com/downloads/ ··· bs_1.pdf
Any 10/100 hub has at least a two-port switch inside, so it can run at both speeds. This one also has a switched stacking port, so while there are collisions between ports, there may not be between hubs.

The Canary boxes may also have offered internal bridging, to limit collisions.
»www.canarycom.com/pdflib/onlinec ··· _4_6.pdf
I tried to remain child-like, all I achieved was childish.


reply to Wily_One
I may not have differentiated the two setups correctly. The hub network I was talking about was between my first and second years at college; those are the years I was curious about. The setup in question, I believe, was large quantities of these Linksys hubs daisy-chained together, then one canary box back to a Cisco fiber switch (I think it was a 2900-series switch, but I don't remember; I'm pretty sure it was a Layer 3 switch as well, since the Class B network was subnetted down into Class C-sized ranges for different buildings, and even multiple subnets in one building). I know we didn't have any gigabit at the time (I asked the network tech; he told me 100 Mb was the fastest thing they had). I remember the ISP connection was fiber, but you're right, it may well have been just a T3; it looked like some form of OC connection, but I honestly am not sure. I was in their "datacenter" once [if you could call it that; it was pitiful]. They had baker's racks with standup servers on them, as many as 15 I think, with one IBM AIX server in its own rack (that supposedly ran the website). All the servers connected to Linksys hubs on their own subnet (I think it was 147.16.2.x). They also had two Packeteer systems (I think one was a backup) used for the Internet connection.

I remember YouTube was still very young back then, but everyone was downloading from it. You would think that with all that video streaming, online gaming, BitTorrent, Napster, Kazaa, and streaming audio, the network would grind to a halt, but it never seemed to. I guess the Packeteer helped a lot.

But the other thing I never understood was how the LAN traffic was as fast as it was. I had a network share from my dorm PC to a file server (more than likely one of those upright servers), and I could upload a large file relatively quickly even during peak hours of the day.
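Some back-of-the-envelope arithmetic suggests this isn't that surprising: even if collisions and other stations cut a shared 100 Mbps half-duplex segment down to a fraction of its raw rate, a large file still moves quickly by the standards of the time. A quick sketch (the 35% effective-utilization figure is an assumption for illustration, not a measured number):

```python
def transfer_seconds(file_mb, link_mbps, effective_utilization):
    """Time to move a file over a shared half-duplex segment,
    assuming the sender gets only a fraction of the raw rate."""
    bits = file_mb * 8 * 1e6                      # file size in bits (decimal MB)
    rate = link_mbps * 1e6 * effective_utilization  # achievable bits/second
    return bits / rate

# 100 MB file on a 100 Mbps hub segment at ~35% effective utilization
# (the rest lost to collisions, backoff, and other stations):
print(f"{transfer_seconds(100, 100, 0.35):.0f} s")   # → about 23 s
```

Twenty-odd seconds for a 100 MB file would have felt very fast circa 2003, hubs or not.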

Keep in mind that not all network segments used canary boxes to link back to the Cisco switch.

I know data throughput expectations were different back then, but many people were still running high-bandwidth applications for the day, and the idea of using hubs everywhere makes the collision domains so large, even per subnet, that I can't help but think we should have seen serious delays.

Is there anything that could be done at the central switch to alleviate some of this congestion? Forcing smaller frame sizes, or larger ones? I don't know if QoS was ever used.

Just curious.
PowerOf I.T.
"Ahh....The power of I.T."

Powered By Infinite Improbability Drive
Stone Mountain, GA
·Atlantic Nexus
A large collision domain is not as bad as it sounds. At my old job, they had the entire corporation on one LAN. Ethernet handles collisions well.

We all still got good speeds. The network light was always busy with broadcasts.

Locally, we added a router to separate our segment from the rest of the LAN. The main reason was to cut down the chatter so that packet captures were more readable.
Scott Henion

Embedded Systems Consultant,
SHDesigns home - DIY Welder


Tyler, TX
reply to PowerOfIT
Same principle: oversubscription. When I was studying for my Cisco exams, their recommendation was 20:1 on the uplinks before you start to see service degrade. The same thing happens on interfaces, which is why they have buffer queues, so it's transparent. As long as the videos download faster than you watch them, you won't see the hiccups. And no one was streaming HD back then, lol.
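The 20:1 rule of thumb above turns into quick arithmetic: divide the aggregate access bandwidth by the ratio to get the minimum uplink you'd want. A minimal sketch (port counts and speeds are illustrative, and 20:1 is just the figure cited above, not a universal standard):

```python
def uplink_needed_mbps(ports, port_speed_mbps, oversubscription=20):
    """Minimum uplink for a given oversubscription ratio:
    aggregate access bandwidth divided by the ratio."""
    return ports * port_speed_mbps / oversubscription

# A 16-port 10/100 hub, all ports at 100 Mbps, at the 20:1 rule of thumb:
print(uplink_needed_mbps(16, 100))   # → 80.0 (Mbps)

# Three such hubs daisy-chained behind one fiber converter:
print(uplink_needed_mbps(48, 100))   # → 240.0 (Mbps)
```

By that yardstick, a whole dorm of hubs behind a single 100 Mbps fiber link was well within what users of the era would tolerate, since almost nobody transmits at full rate at the same moment.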