Internet II – the sequel
April 16, 2007
Here is an article supporting the “clean slate” approach to “fixing” the Internet.
Some interesting points:
One challenge in any reconstruction, though, will be balancing the interests of various constituencies. The first time around, researchers were able to toil away in their labs quietly. Industry is playing a bigger role this time, and law enforcement is bound to make its needs for wiretapping known.
I consider this to be a real concern. Are people actually suggesting that we need to redesign the Internet for security while simultaneously designing in gaping holes and traceability for law enforcement? I think that there might be something of a gap between what law enforcement “need” and what they should actually get. For instance, law enforcement tend to support what Bruce Schneier refers to as “wholesale surveillance” – getting as much information as possible about everyone, just in case – but this incurs an enormous cost in personal privacy. I’m all for catching bad guys, but security and privacy are not mutually exclusive and if you’re trading them off, then you’re looking in the wrong place.
And does anyone really think that allowing industry to play a bigger role in design is going to be all good for the consumer? Tiered Internet anyone? The current network might be half broken – but it’s broken to exactly the same degree for everyone, not just those who can’t afford to pay.
The Internet will continue to face new challenges as applications require guaranteed transmissions – not the “best effort” approach that works better for e-mail and other tasks with less time sensitivity.
Think of a doctor using teleconferencing to perform a surgery remotely, or a customer of an Internet-based phone service needing to make an emergency call. In such cases, even small delays in relaying data can be deadly.
Isn’t that back to front? What do you expect when you use something for a purpose it wasn’t designed for? Internet data transfer protocols were designed on the basis of best effort – so why would you use such technology for critical systems? Stupidity? Or sheer bloody-mindedness? A bit like basing a home security system on a blender, and then having to redesign the blender.
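The “best effort” point is easy to see at the socket level. A minimal sketch (the port number is arbitrary, chosen so nothing is listening on it): UDP will happily “send” a datagram into the void with no delivery promise, while TCP at least tells the application when the connection fails. Note that even TCP only adds retransmission on top of best-effort IP – it can’t bound delay, which is exactly the problem for remote surgery or emergency calls.

```python
import socket

DEAD_PORT = 49999  # assumed: nothing is listening here

# UDP: fire-and-forget. sendto() reports success even though no one
# is listening -- the network makes no promise the data arrives.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"vital signs", ("127.0.0.1", DEAD_PORT))
udp.close()

# TCP: connection-oriented. connect() to the same dead port is refused,
# so at least the application learns that nothing got through.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", DEAD_PORT))
    tcp_connected = True
except ConnectionRefusedError:
    tcp_connected = False
finally:
    tcp.close()

print(sent, tcp_connected)
```

Neither protocol gives the latency guarantees the article wants – which is rather the point: you don’t get them from this network, full stop.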
Rather than create workarounds each time, clean-slate researchers want to redesign the system to easily accommodate any future technologies, said Larry Peterson, chairman of computer science at Princeton and head of the planning group for the NSF’s GENI.
Now this is just market-droid talk. Is he implying that the original designers of the Internet made a conscious decision not to accommodate any future technologies? How do you design systems to “easily” take into account unknowns? Computer psychics?
Even if the original designers had the benefit of hindsight, they might not have been able to incorporate these features from the get-go. Computers, for instance, were much slower then, possibly too weak for the computations needed for robust authentication.
And this kind of sums up the overall problem with this approach – in another 40 years, as technology and social paradigms progress, they’ll be looking back and saying exactly the same things. “Computers were slower then. We didn’t take into account technology X or security Y. Wah. Wah. Wah.”
Just because the Internet is largely ubiquitous doesn’t mean that we have to use it for every new technology or system we come up with. If the Internet is not appropriate for new requirements, maybe designers need to consider a completely new model rather than trying to fit a round network into a thousand different polygonal holes?