Researchers at several universities are working to design a new internet to replace the current global network. The new version would address many of the problems inherent in the current network's design, including those surrounding the security, mobility, and ubiquity of internet-connected devices. But for schools, businesses, and other institutions, the cost of transitioning to the new network–replacing routers, switches, and other gear to accommodate the new architecture–could be significant.

The internet “works well in many situations but was designed for completely different assumptions,” said Dipankar Raychaudhuri, a Rutgers University professor overseeing three internet redesign projects. “It’s sort of a miracle that it continues to work well today.”

No longer constrained by slow connections, slow processors, and high storage costs, researchers say the time has come to rethink the internet’s underlying architecture–a move that could mean replacing networking equipment and rewriting software on computers to better channel future traffic over the existing pipes.

One challenge in any reconstruction, though, will be balancing the interests of various constituencies. The first time around, researchers were able to toil away in their labs quietly. Industry is playing a bigger role this time, and law enforcement is bound to make its needs for wiretapping known.

There’s no evidence they are meddling yet, but once any research looks promising, “a number of people [will] want to be in the drawing room,” said Jonathan Zittrain, a law professor affiliated with Oxford and Harvard universities. “They’ll be wearing coats and ties and spilling out of the venue.”

The National Science Foundation wants to build an experimental research network known as the Global Environment for Network Innovations, or GENI, and is funding several “clean-slate” internet projects at universities and elsewhere through a program called Future Internet Network Design, or FIND.

Rutgers, Stanford, Princeton, Carnegie Mellon, and the Massachusetts Institute of Technology are among the universities pursuing individual projects. Other government agencies, including the Defense Department, also have been exploring the concept.

A new network could run parallel with the current internet and eventually replace it, or perhaps aspects of the research could go into a major overhaul of the existing architecture.

These efforts are still in their early stages, though, and aren’t expected to bear fruit for another 10 or 15 years–assuming Congress comes through with funding.

Guru Parulkar, who will become executive director of Stanford’s initiative after heading NSF’s clean-slate programs, estimated that GENI alone could cost $350 million, while government, university, and industry spending on the individual projects could collectively reach $300 million. And it could take billions of dollars to replace all the software and hardware deep in the legacy systems.

Clean-slate advocates say the cozy world of researchers in the 1970s and 1980s doesn’t necessarily mesh with the realities and needs of the commercial internet.

The internet’s early architects built the system on the principle of trust. Researchers largely knew one another, so they kept the shared network open and flexible–qualities that proved key to its rapid growth.

But as the network expanded, spammers and hackers arrived and could roam freely, because the internet has no built-in mechanism for knowing with certainty who sent what.
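To make that gap concrete, here is a minimal sketch (an illustration of ours, not something from the researchers, and with a made-up helper name) that packs a bare IPv4 header in Python: the source-address field is simply whatever value the sender chooses to write, and nothing in the packet itself verifies it.

```python
# Sketch: an IPv4 header's source address is an unauthenticated field.
# Illustrative only; build_ipv4_header is a hypothetical helper.
import socket
import struct

def build_ipv4_header(src_ip: str, dst_ip: str, payload_len: int) -> bytes:
    """Pack a bare-bones IPv4 header; the source address is whatever we claim."""
    version_ihl = (4 << 4) | 5           # IPv4, 5 x 32-bit words (20 bytes)
    total_length = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,                      # version + header length
        0,                                # type of service ("best effort")
        total_length,
        0,                                # identification
        0,                                # flags + fragment offset
        64,                               # time to live
        socket.IPPROTO_UDP,               # protocol
        0,                                # checksum (left 0 in this sketch)
        socket.inet_aton(src_ip),         # source address: nothing verifies it
        socket.inet_aton(dst_ip),         # destination address
    )

# The "sender" field can carry anyone's address; routers forward it regardless.
spoofed = build_ipv4_header("203.0.113.7", "198.51.100.9", payload_len=0)
print(spoofed.hex())
```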

The network’s designers also assumed that computers are in fixed locations and always connected. That’s no longer the case with the proliferation of laptops and other mobile devices, all hopping from one wireless access point to another.

Engineers have tacked on improvements to support mobility and strengthen security, but researchers say all that adds complexity, reduces performance, and–in the case of security–amounts at most to bandages in a high-stakes game of cat and mouse.

Workarounds for mobile devices “can work quite well if a small fraction of the traffic is of that type,” but they could overwhelm computer processors and create security holes when 90 percent or more of the traffic is mobile, said Nick McKeown, co-director of Stanford’s clean-slate program.
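The fixed-location assumption is easy to see in today's transport layer. The short Python sketch below (our illustration, using only the standard socket library) shows that a TCP connection is identified by the address and port at each end; when a laptop hops to another access point and gets a new address, that identity breaks, which is why tunnels and other workarounds are needed.

```python
# Sketch: a TCP connection's identity is the address/port pair at both ends.
import socket

# A throwaway local server so the example is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # pick any free port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

# The connection is known only by this 4-tuple; there is no separate,
# stable device identity that survives a change of IP address.
print("client side:", client.getsockname(), "->", client.getpeername())
print("server side:", conn.getpeername(), "->", conn.getsockname())

client.close(); conn.close(); server.close()
```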

The internet will continue to face new challenges as applications require guaranteed transmissions–not the “best effort” approach that suffices for email and other tasks that are less time-sensitive.

Think of a doctor using teleconferencing to perform surgery remotely, or a customer of an internet-based phone service needing to make an emergency call. In such cases, even small delays in relaying data can be deadly.
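Today's stopgap for such traffic is to mark packets as high priority and hope the network honors the mark. The sketch below (our illustration; the address and port are arbitrary) sets the “expedited forwarding” DSCP class on a UDP socket; on the open internet the mark is only a hint, and delivery remains best effort.

```python
# Sketch: marking a UDP socket's traffic as "expedited forwarding" (DSCP 46),
# the class typically used for voice. Routers are free to ignore the mark.
import socket

EF_DSCP = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS carries the DSCP bits in its upper six bits (Linux/macOS).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
sock.sendto(b"voice sample", ("127.0.0.1", 5004))   # port chosen arbitrarily
sock.close()
```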

And one day, sensors of all sorts likely will be internet-capable.

Rather than create workarounds each time, clean-slate researchers want to redesign the system to accommodate any future technologies easily, said Larry Peterson, chairman of computer science at Princeton and head of the planning group for the NSF’s GENI project.

Even if the original designers had foreseen these needs, they might not have been able to incorporate such features from the start. Computers, for instance, were much slower then, too weak for the computations needed for robust authentication.
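For a sense of what “robust authentication” asks of a machine, the sketch below (our illustration, not part of the original design debate) computes a per-packet HMAC tag–trivial for a modern processor, but far too costly to run on every packet with hardware of that era.

```python
# Sketch: a per-packet authentication tag, cheap today, prohibitive then.
import hashlib
import hmac

shared_key = b"example-key"        # hypothetical pre-shared key
packet = b"payload bytes"

tag = hmac.new(shared_key, packet, hashlib.sha256).digest()
# The receiver recomputes the tag and compares in constant time.
assert hmac.compare_digest(tag, hmac.new(shared_key, packet, hashlib.sha256).digest())
```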

“We made decisions based on a very different technical landscape,” said Bruce Davie, a fellow with network-equipment maker Cisco Systems Inc., which stands to gain from selling equipment as schools and businesses migrate to the new network.

“Now, we have the ability to do all sorts of things at very high speeds,” he said. “Why don’t we start thinking about how we take advantage of those things and not be constrained by the current legacy we have?”

Of course, a key question is how to make any transition–and researchers are largely punting for now.

“Let’s try to define where we think we should end up, what we think the internet should look like in 15 years’ time, and only then would we decide the path,” McKeown said. “We acknowledge it’s going to be really hard–but I think it will be a mistake to be deterred by that.”