the forums at degreez.net
BitTornado's web-based seeding specification
Author: TheSHAD0W [ Wed Sep 03, 2008 10:26 pm ]
Post subject: BitTornado's web-based seeding specification
There's been a lot of chatter lately about "webseeds", or seeding torrents via http. I wrote a spec for doing so in 2003 (http://bittornado.com/docs/webseed-spec.txt), and there's a competing specification produced by the authors of Getright (http://getright.com/seedtorrent.html). (These specs have been adopted as draft BEPs by BitTorrent Inc as http://www.bittorrent.org/beps/bep_0017.html and http://www.bittorrent.org/beps/bep_0019.html respectively.) We've been debating which specification is superior, and whether to choose one or the other (or both) as a standard. I wanted to explain and defend my design.
Getright's specification is the most obvious, and the simplest to implement on the server end: Upload the file(s) onto your web or ftp server, then add a key to the .torrent file pointing to the URL. Simple, right?
Some sort of web-based seeding was one of the most common feature requests for BitTornado (and BitTorrent), and the above system was the first that came to mind, but I resisted implementing it for several reasons. I felt it would cause problems for web servers: ill-behaved clients would open multiple download threads, people would point torrent seeds at servers that weren't expecting the attention, and bandwidth would be used inefficiently.
When I discovered DeHackEd's prowess with scripting, he and I got together and came up with a script-regulated download interface which solves all of the above problems. By using a script (DeHackEd wrote one in PHP, but it can easily be implemented in many other languages), bandwidth usage can be regulated, used optimally, or cut off entirely once the torrent has sufficient conventional seeds, and abusive clients can be throttled or banned.
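To make the idea concrete, here's a rough sketch (in Python rather than PHP) of what such a script-regulated endpoint might look like. The `piece` and `info_hash` query parameters follow the spirit of my spec; the policy logic, function names, and the `max_seeds` cutoff are my own illustrative assumptions, not DeHackEd's actual script.

```python
# Hypothetical sketch of a script-regulated seed endpoint. The query
# parameter names echo the webseed spec; the policy decisions here
# (seed-count cutoff, status codes) are illustrative assumptions.

def serve_piece(query, piece_store, conventional_seeds, max_seeds=3):
    """Return (status, body) for a single-piece request.

    query: dict of parsed GET parameters, e.g. {"piece": "4", ...}
    piece_store: maps piece index -> bytes for the requested torrent
    conventional_seeds: current number of ordinary seeds in the swarm
    """
    # Cut the web seed off entirely once the swarm is self-sustaining.
    if conventional_seeds >= max_seeds:
        return 503, b""
    try:
        index = int(query["piece"])
    except (KeyError, ValueError):
        return 400, b""
    if index not in piece_store:
        return 404, b""
    # One whole piece per request: the client can start re-sharing it
    # with its peers the moment this response completes.
    return 200, piece_store[index]
```

The key point is that every policy decision lives in the script, where the content owner can change it, rather than in the httpd configuration.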
The LEGITIMATE target BitTorrent user is a content owner with limited resources who can't afford to use Akamai to host their content. They would likely be hosting their web pages on the cheapest webspace provider they could find, with limited bandwidth and limited access to the httpd's configuration settings. I wanted a solution that would fit on a $10 per month webspace account. Regulating bandwidth usage in the script is straightforward and requires no access to the http daemon's configuration files.
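For instance, a simple token-bucket throttle is all a seeding script needs to cap its own bandwidth, with no httpd configuration at all. This is an illustrative sketch, not code from any actual implementation; the class and parameter names are mine:

```python
import time

# Illustrative sketch: a per-response token-bucket throttle a seeding
# script could apply itself, independent of any httpd configuration.
class Throttle:
    def __init__(self, bytes_per_sec, clock=time.monotonic, sleep=time.sleep):
        self.rate = bytes_per_sec
        self.allowance = bytes_per_sec  # start with one second of credit
        self.last = clock()
        self.clock = clock
        self.sleep = sleep

    def send(self, chunk, write):
        # Refill the bucket according to elapsed time, then pay for the
        # chunk; sleep whenever we've overdrawn the per-second budget.
        now = self.clock()
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if self.allowance < len(chunk):
            self.sleep((len(chunk) - self.allowance) / self.rate)
            self.allowance = 0
        else:
            self.allowance -= len(chunk)
        write(chunk)
```

A dozen lines like these fit comfortably on the cheapest shared-hosting account, which is exactly the point.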
One big advantage of the script-regulated solution is that it forces clients to download one piece at a time. This is optimal because once a client has a whole piece it can immediately start distributing the data to its peers. With direct http/ftp access, clients can also download whole pieces in one shot - assuming the pieces don't span file boundaries. But clients may also be tempted to grab more than one piece at a time, which loads down the web server and slows distribution to other peers. Further, some badly behaved clients that have implemented direct access download a chunk at a time, which hurts efficiency badly.
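For the direct-access model, fetching a whole piece in one shot just means computing the piece's byte range and issuing a single HTTP Range request for it. A quick sketch of the arithmetic (helper names are mine, not from either spec):

```python
# Hedged sketch: the byte range covering piece `index` of a single-file
# torrent, suitable for one HTTP Range request per piece. The last piece
# is usually shorter than the rest, hence the clamp to total_length.

def piece_range(index, piece_length, total_length):
    """Return the inclusive (start, end) byte range for one piece."""
    start = index * piece_length
    if start >= total_length:
        raise ValueError("piece index out of range")
    end = min(start + piece_length, total_length) - 1
    return start, end

def range_header(index, piece_length, total_length):
    start, end = piece_range(index, piece_length, total_length)
    return "Range: bytes=%d-%d" % (start, end)
```

A well-behaved client would issue exactly one such request per piece, never smaller chunks and never several pieces in parallel.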
The direct http/ftp access model may also have issues with torrents containing many small files. Because you would need to make multiple download requests to obtain a full piece's worth of data, a loaded-down server may slow uploading or respond with 503 messages, delaying the completion of pieces and slowing distribution in the torrent. The script-regulated model, on the other hand, will take a request for a full piece, concatenate the necessary data and send it in a single block.
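Concretely, the script maps one piece onto the slices of each file it spans, reads the slices, and returns them concatenated as one block. The sketch below shows that mapping; function and parameter names are assumptions for illustration, and in-memory bytes stand in for on-disk files:

```python
# Illustrative sketch of serving one piece of a multi-file torrent as a
# single block: map the piece onto (file, offset, length) slices, then
# read and concatenate them. Names here are illustrative assumptions.

def piece_slices(index, piece_length, file_lengths):
    """Return (file_index, offset, length) slices making up piece `index`."""
    start = index * piece_length
    remaining = min(piece_length, sum(file_lengths) - start)
    if remaining <= 0:
        raise ValueError("piece index out of range")
    slices = []
    pos = 0  # running offset of the current file within the torrent
    for i, flen in enumerate(file_lengths):
        if remaining == 0:
            break
        if start < pos + flen:
            offset = max(0, start - pos)
            take = min(flen - offset, remaining)
            slices.append((i, offset, take))
            remaining -= take
        pos += flen
    return slices

def read_piece(index, piece_length, files):
    """files: list of bytes objects standing in for file contents."""
    lengths = [len(f) for f in files]
    return b"".join(files[i][off:off + n]
                    for i, off, n in piece_slices(index, piece_length, lengths))
```

The client makes one request and gets one contiguous block back, however many files the piece happens to straddle.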
One last benefit of the script-regulated model: it's much easier to implement on the client side than direct http/ftp. So there's no excuse not to include it!