If a physician wants to send a critically ill patient's X ray over the Internet to a radiologist who specializes in cancer diagnosis, should that X ray be delayed by transmissions of data between computer game players? Who should decide who gets needed data the fastest or who should get a message delivered first when there is considerable traffic on the network? Should only people willing to pay for service-delivery agreements receive messages faster than others who do not pay or who pay less?
Whether a network can deliver messages and huge data files with the desired performance is the central concern of people working on the "quality of service" (QOS) of networks. QOS means being able to guarantee that data will be received by a certain time, at a certain rate, or with some other desired quality. "It is analogous to next-day-air delivery by Federal Express versus delivery by U.S. mail," says Al Geist, head of the High-Performance Computing Research Section in ORNL's Computer Science and Mathematics Division (CSMD). "If your Internet file absolutely has to be there by a certain time, then you need QOS. Unfortunately, the Internet does not have any delivery guarantees, but researchers at CSMD are trying to change that."
When a data file is sent over the Internet, it is sliced into pieces, or data packets. Computers called routers direct these packets along different paths over the network, and the packets are reassembled at their destination. One problem is that on today's Internet a router is allowed to throw away data packets in response to traffic conditions and the priorities assigned by service providers. Each lost packet must be resent, and there is no guarantee that the resent packet will not be lost as well. The net effect is that delivery of the complete data file can be delayed for an unpredictable amount of time.
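To make the resend cycle concrete, here is a minimal Python sketch of packetized delivery over a lossy channel. The packet size, the drop rate, and the resend-until-complete loop are illustrative assumptions for this article, not a description of any real router protocol:

```python
import random

PACKET_SIZE = 1024  # bytes per packet (illustrative choice)
DROP_RATE = 0.1     # fraction of packets a congested router discards (assumed)

def split_into_packets(data, size=PACKET_SIZE):
    """Slice a file into numbered packets so the receiver can reassemble them."""
    return {seq: data[i:i + size]
            for seq, i in enumerate(range(0, len(data), size))}

def lossy_send(packets):
    """Model a router that silently throws away some packets under load."""
    return {seq: p for seq, p in packets.items() if random.random() > DROP_RATE}

def deliver(data):
    """Resend missing packets until the whole file arrives, then reassemble it."""
    packets = split_into_packets(data)
    received, rounds = {}, 0
    while len(received) < len(packets):
        missing = {s: p for s, p in packets.items() if s not in received}
        received.update(lossy_send(missing))  # a resent packet may be lost again
        rounds += 1
    print(f"complete after {rounds} transmission round(s)")
    return b"".join(received[s] for s in sorted(received))

assert deliver(b"x" * 10_000) == b"x" * 10_000
```

Because every round of resending can lose packets again, the number of rounds, and therefore the total delivery time, is unpredictable, which is exactly the problem QOS research tries to solve.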
A delayed response can be costly for many reasons. Beyond the example of the physician and the patient's X ray, there are also examples in science. Suppose a million-dollar supercomputer is waiting for the data it needs to perform certain calculations. If an unanticipated delay occurs in transmitting those data, the million-dollar machine sits idle. A delay in relaying a complete message could also cause chaotic behavior in a robot being operated remotely over a network, say, to run an experiment. "Just as a drunk driver may cause a traffic collision because of a delayed response, a robotic arm could behave wildly because of a delay in delivering the complete set of commands," says Geist.
To reduce and predict delays in data delivery, CSMD's Nageswara Rao has developed a computer program called NetLet that is currently being tested on a small subset of the Internet. "NetLet allows computers to talk with each other and determine whether a complete message got there and what the delay is in getting the dropped packet to the receiver," Rao says. "This algorithm enables the computers to measure connection speeds and the delays of alternate pathways and then identify the best combination of pathways to get the information delivered efficiently in the time or rate guaranteed."
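Rao's description is high level, and NetLet's actual algorithm is not spelled out here. As a sketch of the general idea of choosing among measured pathways, the following Python fragment runs Dijkstra's classic shortest-path algorithm over a table of measured link delays; the node names and delay figures are invented for illustration and are not NetLet measurements:

```python
import heapq

# Measured one-way delays in milliseconds between overlay nodes.
# These numbers and node names are hypothetical.
delays = {
    "ORNL":     {"Chicago": 12, "Atlanta": 6},
    "Atlanta":  {"Chicago": 15, "Berkeley": 60},
    "Chicago":  {"Berkeley": 45},
    "Berkeley": {},
}

def best_path(graph, src, dst):
    """Dijkstra's algorithm: pick the route with the least total measured delay."""
    heap = [(0, src, [src])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, d in graph[node].items():
            if nxt not in visited:
                heapq.heappush(heap, (cost + d, nxt, path + [nxt]))
    return None

print(best_path(delays, "ORNL", "Berkeley"))
# -> (57, ['ORNL', 'Chicago', 'Berkeley'])
```

Re-running the measurements and the search as network conditions change lets a sender route around congested links, which is the spirit of what Rao describes.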
Demonstrations of NetLet have shown that the algorithm cuts data delivery times by about 40%, without any additional support from Internet routers. "Some of our data files used to take 10 seconds to get from our computer to a destination computer," Rao says. "Those same data files can now get there in 6 seconds. That means that a huge data file that takes 10 hours to arrive at a destination computer can now get there in 6 hours."
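The quoted figures are internally consistent: cutting delivery time by 40% means a file arrives in 60% of the old time, a roughly 1.67-fold gain in effective throughput. A quick check in Python:

```python
t_old, t_new = 10.0, 6.0  # seconds: the before-and-after figures quoted above
print(f"time reduction:  {1 - t_new / t_old:.0%}")  # 40%
print(f"throughput gain: {t_old / t_new:.2f}x")     # 1.67x
```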
NetLet could be useful for speeding up the delivery of large data files from neutron scattering experiments performed at the Spallation Neutron Source (SNS) when it becomes operational in 2006 at ORNL. "We need to find ways to rapidly transfer hundreds of gigabytes or terabytes of data from the SNS across the country to scientists requesting the information," Geist says. "And our climate scientists need this capability today."
It takes about 2 hours to send 500 megabytes of climate data (calculated results of simulations of future climate) from DOE's Center for Computational Sciences at ORNL to the National Energy Research Scientific Computing Center at DOE's Lawrence Berkeley National Laboratory in California. These data files are sent across DOE's new Energy Sciences Network (ESnet), a semiprivate part of the Internet.
Today, if high-energy physicists wish to transfer hundreds of terabytes of data from DOE's Stanford Linear Accelerator Center in California to the European Laboratory for Particle Physics (CERN) near Geneva, Switzerland, they don't use ESnet, because it would take a month to transmit all the data within these huge files. Instead, they deliver the information in a week by copying it onto tapes and transporting the tapes to CERN by Federal Express. The goal is to further develop NetLet and other approaches so that huge files of data from high-energy physics facilities and the SNS can be delivered in one to two days.
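The arithmetic behind these comparisons is simple: transfer time is file size divided by sustained throughput. The sketch below, which uses 100 terabytes as a stand-in for "hundreds of terabytes," shows the effective rate implied by the climate transfer above and the sustained rates needed for month-long versus one-to-two-day delivery. All figures are back-of-envelope, not measurements:

```python
MB, TB = 1e6, 1e12

# 500 MB in about 2 hours implies a sustained rate of roughly 0.56 Mb/s.
print(f"climate link: {500 * MB * 8 / (2 * 3600) / 1e6:.2f} Mb/s")

def required_rate_gbps(size_bytes, days):
    """Sustained throughput (Gb/s) needed to move a file in the given time."""
    return size_bytes * 8 / (days * 86_400) / 1e9

print(f"100 TB in 30 days: {required_rate_gbps(100 * TB, 30):.2f} Gb/s")  # ~0.31
print(f"100 TB in  2 days: {required_rate_gbps(100 * TB, 2):.2f} Gb/s")   # ~4.63
```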
If NetLet is eventually incorporated into Internet routers, more users may feel they are sending and receiving their information by express.
Related Web sites
Computer Science & Mathematics Division (CSMD)