TR10: Peering into Video's Future

The Internet is about to drown in digital video. Hui Zhang thinks peer-to-peer networks could come to the rescue.


[Illustration: John Hersey]

This article is one in a series of 10 stories we're running this week covering today's most significant emerging technologies. It's part of our annual "10 Emerging Technologies" report, which appears in the March/April print issue of Technology Review.

Ted Stevens, the 83-year-old senior senator from Alaska, was widely ridiculed last year for a speech in which he described the Internet as "a series of tubes." Yet clumsy as his metaphor may have been, Stevens was struggling to make a reasonable point: the tubes can get clogged. And that may happen sooner than expected, thanks to the exploding popularity of digital video.

TV shows, YouTube clips, animations, and other video applications already account for more than 60 percent of Internet traffic, says CacheLogic, a Cambridge, England, company that sells media delivery systems to content owners and Internet service providers (ISPs). "I imagine that within two years it will be 98 percent," adds Hui Zhang, a computer scientist at Carnegie Mellon University. And that will mean slower downloads for everyone.


Zhang believes help could come from an unexpected quarter: peer-to-peer (P2P) file distribution technology. Of course, there's no better playground for piracy, and millions have used P2P networks such as Gnutella, Kazaa, and BitTorrent to help themselves to copyrighted content. But Zhang thinks this black-sheep technology can be reformed and put to work helping legitimate content owners and Internet-backbone operators deliver more video without overloading the network.

For Zhang and other P2P proponents, it's all a question of architecture. Conventionally, video and other Web content gets to consumers along paths that resemble trees, with the content owners' central servers as the trunks, multiple "content distribution servers" as the branches, and consumers' PCs as the leaves. Tree architectures work well enough, but they have three key weaknesses. First, if one branch is cut off, all its leaves go with it. Second, data flows in only one direction, so the leaves'--the PCs'--capacity to upload data goes untapped. And third, perhaps most important, adding new PCs to the network merely increases its congestion--and the demands placed on the servers.
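The first weakness above can be made concrete with a toy model. The sketch below (node names and topology invented for illustration, not taken from the article) represents the tree as a dictionary and shows that knocking out a single "branch" server disconnects every PC beneath it:

```python
# Illustrative sketch: a toy content-distribution tree. "origin" is the
# content owner's central server, the "branch" nodes are distribution
# servers, and the "pc" nodes are consumers' machines (the leaves).
tree = {
    "origin": ["branch_a", "branch_b"],
    "branch_a": ["pc1", "pc2"],
    "branch_b": ["pc3", "pc4", "pc5"],
}

def reachable_leaves(tree, root, dead=frozenset()):
    """Return the leaves still reachable from root when some nodes fail."""
    if root in dead:
        return set()
    children = tree.get(root, [])
    if not children:              # no children: this node is a leaf (a PC)
        return {root}
    leaves = set()
    for child in children:
        leaves |= reachable_leaves(tree, child, dead)
    return leaves

print(sorted(reachable_leaves(tree, "origin")))
# all five PCs are reachable when the tree is healthy

print(sorted(reachable_leaves(tree, "origin", dead={"branch_b"})))
# losing one branch server takes pc3, pc4, and pc5 down with it
```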

In P2P networks, by contrast, there are no central servers: each user's PC exchanges data with many others in an ever-shifting mesh. This means that servers and their overtaxed network connections bear less of a burden; data is instead provided by peers, saving bandwidth in the Internet's core. If one user leaves the mesh, others can easily fill the gap. And adding users actually increases a P2P network's power.
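A rough back-of-envelope calculation (all numbers invented for illustration) shows why adding users increases a mesh's power rather than its congestion: in the client-server model, total serving capacity is fixed at whatever the servers can upload, while in a P2P swarm every peer's upstream link is added to the pool.

```python
# Assumed figures, not from the article: server capacity, per-peer
# upstream bandwidth, and the bit rate of one video stream.
SERVER_UPLOAD_MBPS = 1000     # aggregate upload capacity of the servers
PEER_UPLOAD_MBPS = 0.5        # upstream of one home broadband connection
STREAM_RATE_MBPS = 1.0        # bit rate of a single video stream

def max_viewers_client_server(server_mbps, stream_mbps):
    # Capacity is fixed: only the servers upload.
    return int(server_mbps / stream_mbps)

def max_viewers_p2p(server_mbps, peer_up_mbps, stream_mbps, peers):
    # Capacity grows with the swarm: server upload plus peer uploads.
    return int((server_mbps + peers * peer_up_mbps) / stream_mbps)

for peers in (1_000, 10_000, 100_000):
    cs = max_viewers_client_server(SERVER_UPLOAD_MBPS, STREAM_RATE_MBPS)
    mesh = max_viewers_p2p(SERVER_UPLOAD_MBPS, PEER_UPLOAD_MBPS,
                           STREAM_RATE_MBPS, peers)
    print(f"{peers:>7} peers: client-server serves {cs}, mesh serves {mesh}")
```

Under these assumptions the client-server ceiling stays at 1,000 simultaneous viewers no matter how many people show up, while the mesh's capacity scales with its audience.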

There are just two big snags keeping content distributors and their ISPs from warming to mesh architectures. First, to balance the load on individual PCs, the most advanced P2P networks, such as BitTorrent, break big files into blocks, which are scattered across many machines. To reassemble those blocks, a computer on the network must use precious bandwidth to broadcast "metadata" describing which blocks it needs and which it already has.
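The block bookkeeping described above can be sketched in a few lines. This is an illustrative model, not BitTorrent's actual wire protocol: the block size and file size are invented, but the idea is the same--a peer splits a file into fixed-size blocks and advertises which ones it holds as a compact bitfield, so neighbors know what to request from it.

```python
BLOCK_SIZE = 256 * 1024        # assumed block size: 256 KB

def num_blocks(file_size, block_size=BLOCK_SIZE):
    """How many blocks a file splits into (ceiling division)."""
    return (file_size + block_size - 1) // block_size

def bitfield(have_blocks, total_blocks):
    """Metadata a peer broadcasts: bit i is set if it has block i."""
    bits = bytearray((total_blocks + 7) // 8)
    for i in have_blocks:
        bits[i // 8] |= 0x80 >> (i % 8)
    return bytes(bits)

def missing(have_blocks, total_blocks):
    """The blocks this peer still needs to request from its neighbors."""
    return [i for i in range(total_blocks) if i not in have_blocks]

total = num_blocks(700 * 1024 * 1024)   # a 700 MB video file
print(total)                            # 2800 blocks
have = {0, 1, 2, 5}                     # blocks downloaded so far
print(missing(have, 8))                 # [3, 4, 6, 7]
```

Note the trade-off the paragraph points to: the bitfield is compact (2,800 blocks fit in 350 bytes), but every peer must exchange it--and ongoing updates--with many neighbors, and that chatter is the overhead that worries distributors.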
