TheSHAD0W wrote:
http://bittornado.com/docs/webseed-spec.txt
Read "server-side implementation notes"
1. Limit its average upload to a reasonable level.
2. Intelligently tell peers how long they should wait before
retrying.
3. translate from an info-hash and piece number to a byte range
within a file or set of files, and return those bytes.
1. Limit its average upload to a reasonable level.
If this is a public webserver, it will be limited either by its connection (because of lots of clients) or by software throttling anyway. This only matters if someone doesn't have a server or pipe that can handle it.
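Incidentally, the "software throttling" part is a completely standard technique; a minimal token-bucket sketch of what I mean (purely illustrative, not from any real server):

    import time

    class TokenBucket:
        # Throttle sends to an average of `rate` bytes/sec, allowing
        # bursts up to `capacity` bytes.
        def __init__(self, rate, capacity):
            self.rate = rate            # refill rate, bytes per second
            self.capacity = capacity   # maximum burst size, bytes
            self.tokens = capacity
            self.last = time.monotonic()

        def consume(self, nbytes):
            # Block until nbytes may be sent (assumes nbytes <= capacity).
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)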
2. Intelligently tell peers how long they should wait before
retrying.
This could have a sensible default. A server that is handling all the requests but only allowing a trickle of data may never need to do this anyway. In any case, HTTP clients generally have a default wait and then retry.
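Client-side, that default-plus-retry behavior might look something like this sketch (the names and the default are made up, and it assumes the server uses the delta-seconds form of Retry-After when it answers at all):

    import time
    import urllib.error
    import urllib.request

    DEFAULT_RETRY_SECONDS = 60  # client-side fallback, picked arbitrarily

    def get_with_retry(url, attempts=3):
        for _ in range(attempts):
            try:
                return urllib.request.urlopen(url).read()
            except urllib.error.HTTPError as e:
                if e.code not in (429, 503):
                    raise
                # Honor Retry-After if the server sent one (delta-seconds
                # form only); otherwise fall back to the client default.
                value = e.headers.get("Retry-After")
                wait = int(value) if value and value.isdigit() else DEFAULT_RETRY_SECONDS
                time.sleep(wait)
        raise RuntimeError("gave up after %d attempts" % attempts)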
3. translate from an info-hash and piece number to a byte range
within a file or set of files, and return those bytes.
As I say, the client could do this based on the torrent file and then convert it into an HTTP request. Plus, with the client doing the work, the server need know nothing about the torrent. It doesn't even need to know that there is a torrent pointing at it.
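To make that concrete, here is a sketch of the client-side translation for the single-file case (all names here are hypothetical; a multi-file torrent would also need to map the global byte range onto per-file offsets from the metadata):

    import urllib.request

    def fetch_piece(url, piece_index, piece_length, total_length):
        # Piece boundaries come straight from the .torrent metadata the
        # client already holds; the server just sees a plain range request.
        start = piece_index * piece_length
        end = min(start + piece_length, total_length) - 1  # last piece is shorter
        req = urllib.request.Request(
            url, headers={"Range": "bytes=%d-%d" % (start, end)})
        with urllib.request.urlopen(req) as resp:
            if resp.status != 206:  # server ignored the Range header
                raise RuntimeError("server does not support range requests")
            return resp.read()

The client would then hash the returned bytes against the piece's SHA-1 from the torrent file, exactly as it would for data received from a peer.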
It still seems to me that this would be quite useful in some situations. The limitation of having to have the server know about the torrent is considerable.
In the situation where there's a big file on a slow HTTP server that doesn't have a torrent system, and the clients want that public server to do the seeding, it seems especially useful. Instead of deciding _between_ using bittorrent or HTTP, one could use both. In fact, one could use the swarm and several mirrors, rather than just one webserver.
Perhaps with this implemented in a client, people could also add the URL manually: you're downloading some large file from a swarm, you notice that some webserver also has it, so you click around to "add a URL" and paste the URL in.
Clients in the future may even be able to take a URL and automatically search databases for swarm information.
To me, this just adds a lot of potential and puts the power in the hands of the clients, rather than the servers.