Let's Build a Better Internet
Stanford 'Clean Slate' Project
Network World (via Slashdot) takes a look at the Stanford Clean Slate Project, which aims to correct the "significant deficiencies" of the existing Internet by -- well -- designing a new one from scratch.

FFH5
Premium Member
join:2002-03-03
Tavistock NJ
Never happen

Just as Microsoft was prevented from starting with a clean slate when upgrading its operating system because of its huge existing base of customers and applications, no one is going to rebuild the Internet from scratch. There are just too many existing people and groups and countries and companies involved to ever make that practical.

texans20
Premium Member
join:2002-09-28
Texas!
Re: Never happen

said by FFH5:

Just as Microsoft was prevented from starting with a clean slate when upgrading its operating system because of its huge existing base of customers and applications, no one is going to rebuild the Internet from scratch. There are just too many existing people and groups and countries and companies involved to ever make that practical.
I agree. We can look at how well the IPv6 migration is going for proof that this will never happen.
openbox9
Premium Member
join:2004-01-26
71144

to FFH5
Don't forget the cost of building new infrastructure and the sunk cost in what already exists.

morbo
Complete Your Transaction
Member
join:2002-01-22
00000

to FFH5

I hate this mentality. Change has to start somewhere. If we insist upon backward compatibility with everything, then we will never achieve real progress.

I wish the spam problem would be tackled first. Damn, it is annoying.

cork1958
Cork
Premium Member
join:2000-02-26
Re: Never happen

said by morbo:

I hate this mentality. Change has to start somewhere. If we insist upon backward compatibility with everything, then we will never achieve real progress.

I wish the spam problem would be tackled first. Damn, it is annoying.
Sucks, but true!
amungus
Premium Member
join:2004-11-26
America
Re: Never happen

Sorry, I just have to post this somewhere in the discussion...

I guess it's more of a reply to morbo...

Hopefully it's good for a laugh

bogey7806
Member
join:2004-03-19
Here

to FFH5
Not really. It's just a huge cost to create new routes and server systems. It's a huge task, but it is doable as long as it starts the way Internet2 did and works its way down.

FFH5
Premium Member
join:2002-03-03
Tavistock NJ
Re: Never happen

said by bogey7806:

Not really. It's just a huge cost to create new routes and server systems. It's a huge task, but it is doable as long as it starts the way Internet2 did and works its way down.
It is more than just a bigger, faster infrastructure. Internet2 isn't new; it is just a separate system that is VERY VERY fast. What this "Clean Slate" group is proposing is an entire rewrite of the systems: security, identification, DNS, protocols, etc. And that is what isn't going to happen.

tschmidt
MVM
join:2000-11-12
Milford, NH

to FFH5
said by FFH5:

There are just too many existing people and groups and countries and companies involved to ever make that practical.
That is true to a certain extent, but I assume that once the technical details of the "perfect" next-generation network are fleshed out, interoperability tweaks will be made. Whatever the details, no one in their right mind would propose an incompatible, isolated network.

The power of the Internet is the separation of physical transport, networking, and applications -- the so-called three-layer model. The middle layer, IP, is the constant that facilitates deployment of new physical interfaces and applications. That is also why migration from IPv4 to IPv6 has been so slow and painful: changing the middle layer requires an old-fashioned "forklift upgrade," since it really cannot be done a little bit at a time.

Given the massive technology change over its short history, rethinking Internet architecture makes sense: incredibly fast optical links vs. 56kbps point-to-point copper, and streaming multicast vs. unicast file transfer, to name a few examples from the last couple of decades. Of concern are proposals that want to make the Internet less transparent. Transparency helped accelerate Internet acceptance because it encouraged innovation and experimentation: experiments, both technical and business, can be conducted at low cost and without the consent or permission of the network owner. Hopefully that principle will remain intact.

For a good lesson on why one does not want too much intelligence built into the network, I refer you to David Isenberg's famous "Rise of the Stupid Network" paper.

/tom

captokita
Premium Member
join:2005-02-22
Calabash, NC
uhhh... what?

"significant deficiencies"? Like what? What's considered "wrong" to me or you will differ from one person/group to another, so how would one ever define it?

n2jtx
Member
join:2001-01-13
Glen Head, NY

Isn't It Already Called Internet2

What the heck? Are they just reinventing the wheel?
yabos
Member
join:2003-02-16
London, ON

Re: Isn't It Already Called Internet2

Internet2 is private; they want to bring it to the public. I wish they would just tear apart the current Internet and go IPv6 already. Screw all the stuff that's not compatible -- just replace it already. Most routers by now are probably capable of IPv6; they just need to implement it.
patcat88
Member
join:2002-04-05
Jamaica, NY

i agree

TCP needs to be replaced; it was never made for high-speed networks and doesn't tolerate random packet loss very well. We need synchronous networks to our homes/computers (less latency, since there would be minimal to no buffers). DNS causes the vast majority of slowdowns; the other cause is web servers' lag in generating pages. There is too much back-and-forth talk, protocol-wise, on the Internet, adding lag. Also, servers need to upload per connection at the server's line speed, with no per-connection speed limits, so there is no need for download accelerators. Maybe a cell-switched network is needed instead of a packet-switched one.
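For a rough sense of scale on the loss complaint, here is a minimal sketch of the classic Mathis et al. steady-state TCP throughput estimate, rate ~ MSS / (RTT * sqrt(p)); the segment size, RTT, and loss rates below are illustrative assumptions, not measurements:

```python
import math

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Steady-state TCP throughput estimate (Mathis et al.):
    bytes/sec ~= MSS / (RTT * sqrt(p))."""
    return mss_bytes / (rtt_s * math.sqrt(loss_rate))

# Illustrative numbers: 1460-byte segments, 50 ms RTT.
for p in (1e-2, 1e-4, 1e-6):
    mbps = mathis_throughput(1460, 0.050, p) * 8 / 1e6
    print(f"loss {p:.0e} -> ~{mbps:.1f} Mbit/s")
```

Under these assumptions, even 1% random loss caps a single long-distance flow at a couple of Mbit/s, no matter how fast the link is.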

LilYoda
Feline with squirrel personality disorder
Premium Member
join:2004-09-02
Mountains

Re: i agree

TCP supports random packet loss extremely well... it reduces the sending rate.

That's actually the basis of any WAN QoS implementation. Read a bit about WRED: it works by randomly dropping packets to prevent congestion before it happens. The TCP stacks on both ends then lower their sending rate to adapt.
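A minimal sketch of the WRED idea, assuming made-up thresholds rather than any vendor's defaults:

```python
import random

def wred_drop(avg_queue: float, min_th: float = 20.0,
              max_th: float = 40.0, max_p: float = 0.10) -> bool:
    """RED/WRED ramp: never drop below min_th, always drop at/above
    max_th, and drop with linearly increasing probability in between."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    return random.random() < max_p * (avg_queue - min_th) / (max_th - min_th)

# The deeper the average queue, the more packets are dropped early,
# nudging TCP senders to back off before the queue overflows.
for q in (10, 25, 35, 45):
    drops = sum(wred_drop(q) for _ in range(10_000))
    print(f"avg queue {q}: ~{drops / 100:.1f}% dropped")
```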

As for protocols talking back and forth, I suspect you are talking about application layers. All it takes is for the application writer to optimize them; there are very lean application protocols out there.

snipper_cr
Premium Member
join:2002-01-22
Wheaton, IL

Internets?

omg the internets needs more hegahurtzzz!!!11

richardpor
Fur it up
Member
join:2003-04-19
Portland, OR

Killed By Neutrality Fanatics

SAVE THE INTERNET!

I scanned the white paper; the Economics part, especially the bit about valuing packets, will never fly if the Tron guy has his way.

"Some of the economic travails of the current Internet can be traced to a failure of engineering. The Internet lacks explicit economic primitives, hindering its functionality in several ways. In particular, the Internet provides no support for determining the value of a packet (to the sender, the receiver, or the service provider). Such information could be used, for example, to better allocate the resources of the network, providing high-value traffic with higher bandwidth, more reliability, or lower latency paths.
A related issue is that the current Internet does not provide support for differentiating between different packets on economic grounds. For example, two packets with the same origin and destination will typically be routed on the same path through the network, even if the packets have very different values. Even if these values were known to the network, the current routing protocols would not permit the packets to travel on different paths. Finally, the lack of economic primitives in the current Internet makes charging for traffic, and micropayments in particular, a challenge to implement. Such payments could contribute to both the prevention of near-valueless uses of the network (spam) and to defraying the network
maintenance costs."

While this makes perfect sense to me -- e-mail packets do not need to be held at the same value and priority as video -- the Net Neutrality folks would be fit to be tied. They would scream: how dare they put a value on a packet.
amungus
Premium Member
join:2004-11-26
America

Re: Killed By Neutrality Fanatics

Maybe not screaming "how dare you" but likely upset about the "micropayments" part.

Thing is, and I simply have to agree with TCH here, it won't happen in our lifetime.

The foundation has already been laid; the infrastructure has already been worked and re-worked many times and is continuing to improve; the routing and layers of protocols have already been somewhat fine-tuned; and lastly, last time I checked, it ain't broken. This website is proof that the Internet is not broken. The fact that anyone can hop online from wherever they want and post pretty much anything they want is also proof that it is not in the least bit broken...

Probably the funniest part of that quote above is "...failure of engineering." I wonder what Vint Cerf would say about that. And I wonder how any ISP even stays in business, then -- I mean, they must be so broke that they're just giving everybody free connections because they're such nice people.

LilYoda
Feline with squirrel personality disorder
Premium Member
join:2004-09-02
Mountains

to richardpor
The article you scanned shows poor knowledge of TCP/IP

The IP packet header has had a field for Type of Service for years, if not decades. It used to be called ToS, then Precedence, now DiffServ or DSCP, but it's there...
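For what it's worth, marking traffic with that field is a one-liner on most stacks; a minimal sketch (the DSCP value and address are just examples, and routers are free to ignore the marking):

```python
import socket

DSCP_EF = 46          # "Expedited Forwarding", commonly used for voice
tos = DSCP_EF << 2    # DSCP occupies the top 6 bits of the old ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Datagrams sent on this socket now carry the EF marking; whether any
# router along the path honors it is a matter of network policy.
sock.sendto(b"hello", ("192.0.2.1", 9))
```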

I won't even go into the routing protocols bit, 'cause it's pathetic...

tschmidt
MVM
join:2000-11-12
Milford, NH

Clean-Slate Whitepaper Critique

One can learn a lot about an author's perspective from their choice of words.
quote:
… We don't believe that we can or should continue to rely on a network that is often broken, frequently disconnected, unpredictable in its behavior, rampant with (and unprotected from) malicious users, and probably not economically sustainable.
That is a pretty damning quote for a technology that has revolutionized communications, empowered people, and is about to displace the 100-year-old telephone network.

They are looking into five areas: network architecture, heterogeneous applications, heterogeneous physical layers, security, and economics.

1) Architecture
The authors argue that the "end-to-end" principle thwarts innovation and then cite several modifications that were implemented quickly as examples of why it is a bad idea. To me that is a benefit of the transparent "end-to-end" architecture: it allowed these modifications to be rolled out quickly.

It is still unclear whether congestion is best dealt with via overprovisioning or quality of service. Bandwidth is not free, but then neither are complex prioritization protocols. One needs to keep in mind that priority schemes do not increase bandwidth; they select winners and losers, as the sketch below illustrates.
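A toy illustration of that last point, with made-up numbers: strict priority changes who gets served, not how much the link can serve.

```python
import heapq

def drain(queue, link_pps):
    """Serve at most link_pps packets per tick in strict priority
    order; the link's capacity never changes, only who gets it."""
    served = {"hi": 0, "lo": 0}
    for _ in range(link_pps):
        if not queue:
            break
        _, flow = heapq.heappop(queue)   # lowest priority number first
        served[flow] += 1
    return served

# 120 packets arrive in one tick on a link that can serve only 100.
q = [(0, "hi")] * 60 + [(1, "lo")] * 60
heapq.heapify(q)
print(drain(q, 100))   # {'hi': 60, 'lo': 40} -- 20 packets lose either way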

2) Heterogeneous Applications
The report mentions sensor networks as placing new demands on the network but doesn't really delve into their unique attributes. Sensors are often extremely power constrained, and IP protocols were not designed to accommodate this "feature." Security is important both to conceal information from intruders and to prevent spoofing. Normally one does not worry about signals intelligence when designing a civilian network, but it becomes a key consideration with sensors: learning that Sensor A is talking to Widget Z at time X may be enough to compromise the entire network.

The multicast vs. unicast debate is contentious. Internet protocols do not handle broadcast transmission very well; there are solutions, but they have not yet been widely implemented. All commercial content is based on a one-to-many distribution model. This was a direct result of the technology available at the time: radio, print publishing, film distribution, etc.

A key question the paper does not even ask, much less address, is whether there is inherent value in "broadcast" one-to-many distribution, or whether unicast using a peer-to-peer model makes better use of high-speed networks and computers. In the olden days bandwidth and storage were expensive, and only a few players had access to mass distribution. Technology turns that on its head: both are now cheap, and the old optimizations no longer apply.

Rather than focus on multicast, does it make more sense to utilize readily available storage and network capacity to facilitate on-demand access using peer-to-peer technology? This offers great value at low cost because users act as both data sources and sinks. This sort of arrangement was not possible with previous technologies. Being distributed, it is also very robust and secure.

3) Heterogeneous Physical Layers
This is the section I agree with the most. Digital technology has whetted people's appetite for data: where I want it, when I want it, and how I want it. Mobility places great demands on the network.

4) Security
Insecure devices are the cause of many Internet security problems. If devices were less susceptible to compromise, the amount of malicious traffic would decrease.

Security requires a mechanism to allow Bob to trust Alice even if they have never met. That is a hard problem, usually involving trusted third parties, and that immediately brings all kinds of new threats and complexity to the scene.

Security is often at odds with anonymity. This is another key attribute left out of the paper. What is the proper role of anonymity vs. trust? How much information should third parties be privy to? Secrets are impossible to keep: the more data collected, the more will leak out. What is the best balance between privacy and security?

5) Economics
The authors' assumption is that the only way to organize the network is for all components to be in the private sector, optimized for maximum profitability. Perhaps that is the best solution, but shouldn't such a wide-ranging study at least look at the tradeoffs? Should the Internet be all private, all public, or some combination of public/private partnership?

For example, the highway system is owned and managed by the government as a public good. The availability of excellent highways allows private companies to compete and flourish.

Another example is air travel. Again, the public sector owns and operates the enabling infrastructure -- airports -- while private companies compete to deliver services.

Summary
The authors have their biases, as I have mine. For a massive research project geared toward rethinking the future of the Internet, this sure looks like warmed-over old ideas.

/Tom