The Hypertext Transfer Protocol, HTTP, underlies every website visit you make. Your browser negotiates with the server for the website. Once they come to their split-second agreement, the server starts to send the web page, along with any other requested files, like images. (Curious? If you use Firefox, install an extension called Firebug: this popular programmer's tool reveals all the under-the-hood magic.)
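If you want to watch that negotiation from the outside, a few lines of Python will do it. This is a minimal sketch using only the standard library's http.client; example.com stands in for whatever site you like:

```python
import http.client

# Open a plain HTTP connection and ask for the front page.
conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/")
response = conn.getresponse()

# The status line and headers are the visible half of the
# browser-server negotiation described above.
print(response.status, response.reason)
for name, value in response.getheaders():
    print(f"{name}: {value}")

conn.close()
```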
The governing document specifying the protocol is RFC 2616, published by the Internet Engineering Task Force (IETF), an anarchic nerdopoly that authoritatively defines hundreds of protocols and file formats for internet use.
In common with many other IETF protocols, HTTP allows servers and browsers to send information not specified in the RFC by using custom 'X-' headers. Originally the 'X' stood for 'experimental'. Today X-headers are used to create important de facto sources of information and to facilitate services that build on the basic HTTP model.
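For instance, many servers volunteer an X-Powered-By header that no RFC defines. Sticking with the http.client sketch from above, reading one is trivial:

```python
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/")
response = conn.getresponse()

# getheader() returns None if the server never sent the header.
# X-Powered-By is a de facto header many servers volunteer;
# nothing in the RFC requires or defines it.
powered_by = response.getheader("X-Powered-By")
if powered_by:
    print("Server volunteered:", powered_by)

conn.close()
```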
I think that a useful additional header would be X-Torrent.
HTTP is a client-server protocol: every client looks up a single service through a universal addressing scheme, and that single service provides the data, images and so on by itself. At scale, this often leads to complicated schemes (round-robin DNS, load balancers, content delivery networks) that disguise whole legions of physical servers as a single logical server in order to keep up with demand.
If you are a large firm with a relatively predictable load, this is all well and good. But if you run a small website, perhaps on a shared host or a modest virtual private server, it won't stand up to the sudden surge of traffic that can come down the pipe from Digg, Reddit, Slashdot and the like.
By contrast, the BitTorrent protocol works by distributing files among peers in small chunks. As more and more peers obtain parts of a file, the aggregate bandwidth and load capacity for that file increase. A popular torrent can achieve staggering aggregate bandwidth through the simple scheme of many small connections each donating some of their capacity.
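To put rough, illustrative numbers on it: a swarm of 1,000 peers each donating a modest 50 KB/s of upload capacity offers about 50 MB/s of aggregate bandwidth, far more than a typical shared host could serve on its own.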
The thinking behind the X-Torrent header is this: whenever a web server returns headers over HTTP, it includes an X-Torrent header pointing to a torrent tracker for that document. When the website becomes heavily loaded, browsers would fall back to the BitTorrent protocol to obtain the requested document or resource from their peers, distributing the load amongst the audience as well as the server.
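To make the idea concrete, here is a client-side sketch in Python. Everything about it is hypothetical: the X-Torrent header itself, the convention of falling back to the swarm on a 503 response, and the download_via_torrent stub standing in for a real BitTorrent client.

```python
import http.client

def download_via_torrent(torrent_url):
    # Stand-in for a real BitTorrent implementation; a browser
    # would hand the torrent off to its peer-to-peer machinery here.
    raise NotImplementedError(f"would join the swarm via {torrent_url}")

def fetch_with_torrent_fallback(host, path):
    """Fetch a resource, falling back to the (hypothetical)
    X-Torrent header when the origin server is overloaded."""
    conn = http.client.HTTPConnection(host)
    conn.request("GET", path)
    response = conn.getresponse()

    # Hypothetical header: a pointer to the torrent tracker
    # (or .torrent file) describing this exact resource.
    torrent_url = response.getheader("X-Torrent")

    if response.status == 503 and torrent_url:
        # 503 Service Unavailable: the server is buckling,
        # so fetch the file from the swarm instead.
        conn.close()
        return download_via_torrent(torrent_url)

    body = response.read()
    conn.close()
    return body
```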
This would be particularly useful where many users try to access a single large file at the same time. It would also provide a handy form of distributed caching: other systems might be able to pull resources directly from the swarm rather than from the original server.
Such a scheme would require web servers to send the header and to maintain a .torrent file (and a tracker) for each resource. Browsers would need to implement the BitTorrent protocol. After that, it might turn out to be a way to make everyone's life just a little easier.
Update: a commenter on Reddit makes the fairly sensible suggestion that, alternatively, web browsers could tell the web server that they will accept a torrent file in place of the original resource, using the HTTP Accept-Encoding header. This would eliminate the need for another header and simplify the semantics a bit.
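A rough sketch of that variant, again in Python; the "torrent" token is invented for illustration and is not a registered content coding:

```python
import http.client

conn = http.client.HTTPConnection("example.com")
# Hypothetical: advertise willingness to receive a torrent file
# instead of the resource itself. "torrent" is the commenter's
# proposed convention, not a real content coding.
conn.request("GET", "/big-file.iso",
             headers={"Accept-Encoding": "torrent, gzip, identity"})
response = conn.getresponse()

if response.getheader("Content-Encoding") == "torrent":
    # The body is a .torrent file describing /big-file.iso,
    # not the file itself.
    torrent_metadata = response.read()
else:
    # The server ignored the hint and sent the resource as usual.
    payload = response.read()

conn.close()
```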
Update II: Hello Redditors and Diggers. The server was down for reasons unrelated to being Reddited and Dugg, but it was certainly embarrassing!