the forums at degreez.net


All times are UTC - 7 hours [ DST ]




Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 60 posts ]  Go to page Previous  1, 2, 3, 4  Next
Author Message
 Post subject:
PostPosted: Thu Apr 29, 2004 9:30 pm 
What about MUTE? I "think" it's serverless and anonymous. I've never used it, to be honest, so I don't know what the speeds are like.

- Heavy Arms -


 Post subject:
PostPosted: Fri Apr 30, 2004 1:07 pm 
I am sorry to say, but Filetopia is the best place to get files, chat, and get help. No adware/spyware like BearShare (cough) :D


 Post subject: Re: is it possible?
PostPosted: Fri Apr 30, 2004 7:28 pm 
asdf903 wrote:
lordgreg wrote:
I'm interested in that too... if someone could create a p2p client/server(less) that had the functions of eMule and the speed of torrents, that would be awesome!


that sounds awesome... but is it possible?


It's called Shareaza... trackerless capability if the trackers go down, plus sharing over Gnutella, G2, and eDonkey2000. Check out the most recent beta release: http://www.shareaza.com


 Post subject:
PostPosted: Sun May 02, 2004 4:54 am 
Shareaza? This is not about combining a handful of popular file sharing programs into one program. All the flaws get combined too. And to be honest, I don't like the idea of interfacing gnutella with bittorrent or anything else.

The point is to merge the good features of all file sharing programs into one, thus making it more versatile and advanced than any other file sharing program out there.


PostPosted: Tue May 04, 2004 7:07 am 
this idea may be too... impossible...

develop a new p2p client which shares files via WinMX (using a special kind of proxy),
the eDonkey2000 network, BitTorrent, the FastTrack network...
(but it would have its own protocol to integrate these networks more effectively)

(I may not express my idea very well...
my English is quite bad...
in fact, I am a Hong Kong p2p enthusiast...)


 Post subject: better idea
PostPosted: Wed May 12, 2004 12:53 am 
I want a p2p like DC++, but with multi-user downloads and serverless, that works on a local LAN within a certain IP range.


 Post subject: p2p stuffs
PostPosted: Thu May 27, 2004 6:53 am 
There are several things I'd like to touch on. First off, encryption won't do jack when you're sharing to the public. An encrypted message for someone you don't know still gets to that person; the people along the way just may not know what it is.

Encryption does not provide anonymity. I'd repeat that, but you can just read it again. Pig latin is a weak form of encryption. While you can speak to someone in pig latin, they still know who you are(because they see you speaking).

Currently, entities that are trying to find illegal files on p2p networks are not sniffing traffic, they are legitimate clients. Someone who's legitimate will legitimately be able to decrypt the data you send them, regardless of whether they are good or evil.

Why is bittorrent so fast? The protocol itself does not make it fast, but there are some catches to prevent upload throttling. It's the people using it that make it fast. The idea that people who aren't uploading should have slower downloads keeps people uploading, in turn making everything faster. Gnutella had 90% leeches who weren't sharing much at all. You can't trust the users to allow uploads, either; in practice, they don't. You would end up with 10% of the users supplying 80% of the data.
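In rough Python, the tit-for-tat rule looks something like this (the slot count, names, and optimistic-unchoke details are simplified, not BitTorrent's exact algorithm):

```python
import random

def choose_unchoked(peers, upload_to_us, slots=4):
    """Tit-for-tat sketch: unchoke the peers that upload fastest to us,
    plus one random 'optimistic' unchoke so newcomers get a chance."""
    ranked = sorted(peers, key=lambda p: upload_to_us.get(p, 0), reverse=True)
    unchoked = set(ranked[:slots])
    leftovers = [p for p in peers if p not in unchoked]
    if leftovers:
        unchoked.add(random.choice(leftovers))  # optimistic unchoke
    return unchoked
```

Leechers never make it into the top slots, so their only route to data is the occasional optimistic unchoke, which is exactly the incentive to upload.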

Locking the port down is a _bad_ _bad_ _bad_ idea. If port 1881 is always associated with high bandwidth p2p program X, then ISPs and universities will start to throttle or block traffic on port 1881. Simple as that. This is one case against random scanning for other hosts.

The second case is that random scanning is terribly inefficient, and won't work so well if you are randomly scanning ports, too. Gnutella has its roots here, and they were horrible.

Then again, the gnutella protocol (LimeWire, BearShare, Gnucleus, Mutella, many others) is one of the least efficient of the successful p2p networks. I've seen papers suggesting that it's theoretically only efficient up to about 100 well-behaved clients, and hands-on tests I've done suggest that's about right. Add in that many clients out there are not friendly and that the connection between any two nodes on the gnutella network may not be so hot, and you're looking at problems. For example, research on the ratio of TCP control packets (non-data packets, mostly used for starting and ending connections) to TCP data packets (the stuff you're transferring) puts gnutella in the range of 80%; eDonkey and web traffic are down under 10%, really more like 2-3%, depending on the client, server, and network reliability. Keep in mind that is a percentage of the number of packets, not of data; still, every packet suffers your round-trip time (ping) and adds to network congestion. If everybody were running gnutella, I don't think anything else would work. I ran a gnutella client and switched ports. I was still seeing traffic on the old port for weeks. That's too much delay.
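To put a number on gnutella's query overhead: each query is flooded to every neighbour and forwarded onward until its TTL runs out, so one search can touch thousands of hosts. An illustrative upper bound in Python (the degree and TTL values are hypothetical, not measurements):

```python
def flood_messages(degree, ttl):
    """Upper bound on messages generated by one gnutella-style flooded
    query: the originator sends to all `degree` neighbours, and each
    recipient forwards to its other (degree - 1) neighbours until the
    TTL is exhausted. Ignores duplicate suppression, so it's a ceiling."""
    total, frontier = 0, degree
    for _ in range(ttl):
        total += frontier
        frontier *= degree - 1
    return total

# With 4 neighbours per node and the typical TTL of 7, a single query
# can generate on the order of flood_messages(4, 7) messages.
```

That exponential blow-up per search is why flooding protocols stop scaling long before tracker- or DHT-based ones do.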


I certainly predict more WASTE-style "community" things where you trust people and they trust people, but you don't just trust everybody. I also see encryption coming in, because it makes detection of the protocol much more difficult. This has nothing to do with legality. ISPs pay for bandwidth. Having a saturated upstream connection means that everybody notices things taking longer: Hotmail sits there for a while and then the images don't load right away, for example. People complain about that, and switch providers. If it gets to that point, the ISP needs to pay for more bandwidth. With p2p traffic being what it is, they could limit 20% of that traffic instead of buying a new T1. They could limit 80% of it and people would applaud the speed at which web pages come back. They can't throttle it if they can't detect it, and they can't detect it if encryption is used in the right way. They can, however, throttle users who transfer too much data, which I see happening.

Another interesting concept would be if you built a p2p protocol that allowed you to inject packets with a spoofed source IP, but real data. Instead of giving my real IP, I could give a bogus one. The problem with this is that many providers don't allow traffic to go out if the origin doesn't make sense. If I am a router at some store, I won't let traffic go out if it says it's from an IP that isn't supposed to be in my store. On the same note, I won't let traffic in if it says the origin IP is supposed to be in my store. TCP would not work this way, but UDP and ICMP would. The host that was receiving packets wouldn't be able to tell where they came from. I don't know of any p2p programs that use UDP, and certainly none that use ICMP.
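For what it's worth, forging the source address is trivial at the packet level. A sketch of building a raw IPv4 header with a fake source in Python (actually sending it needs a raw socket, root, and an ISP that doesn't do egress filtering, which is exactly the problem described above):

```python
import struct
import socket

def checksum(data):
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def ipv4_header(src_ip, dst_ip, payload_len, proto=17):
    """Build a 20-byte IPv4 header with an arbitrary (possibly spoofed)
    source address. proto 17 = UDP. Nothing stops you writing any src_ip
    you like; only routers along the path can reject it."""
    ver_ihl = (4 << 4) | 5                  # IPv4, 5 x 32-bit words
    total_len = 20 + payload_len
    header = struct.pack("!BBHHHBBH4s4s",
                         ver_ihl, 0, total_len,
                         0, 0,              # identification, flags/fragment
                         64, proto, 0,      # TTL, protocol, checksum placeholder
                         socket.inet_aton(src_ip),
                         socket.inet_aton(dst_ip))
    csum = checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:]
```

The receiver has no way to tell from the packet itself that `src_ip` is fake, which is why this only works for one-way UDP/ICMP traffic and not TCP's three-way handshake.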


 Post subject: Re: p2p stuffs
PostPosted: Thu May 27, 2004 7:58 am 
Offline

Joined: Mon Mar 15, 2004 8:35 am
Posts: 418
chiefmoo wrote:
I ran a gnutella client and switched ports. I was still seeing traffic on the old port for weeks. That's too much delay.


That is most likely because there is no centralized "registry" for who to connect to on gnutella. Any peer that successfully connected to you on port X would cache your IP and port in its host cache. If that person exits their client, you change your port, and they don't load their gnutella client again until a week or two later, their client will retry the hosts in its host cache on the old ports that worked on the last load. Which, as expected, generates minor traffic on the old port while it figures out whether that host is still available.

BitTorrent, by contrast, has a centralized peer list (the tracker): people "register" themselves in it during a transfer, unregister when done, and the list usually expires non-responding peers automatically (peers that registered at some point but stopped telling the tracker they were still active). Since gnutella is completely decentralized and doesn't rely on a central source to know which peers to connect to, it obviously can't purge peers from its host cache as fast as a protocol that contacts a central authority for peer lists on every load. The flip side: if the centralized peer list (the tracker) goes offline, BitTorrent can't connect to any peers at all, because that is its only source for peers.
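The tracker side of that is simple enough to sketch (the class, method names, and expiry interval here are made up for illustration; real trackers speak HTTP with bencoded responses):

```python
import time

class Tracker:
    """Sketch of a BitTorrent-style centralized peer list: peers announce
    periodically and are dropped once they stop announcing."""

    def __init__(self, expiry=1800):          # 30 min, an assumed default
        self.expiry = expiry
        self.peers = {}                       # (ip, port) -> last announce time

    def announce(self, ip, port, now=None):
        """A peer 'registers' (or refreshes) itself during a transfer."""
        self.peers[(ip, port)] = now if now is not None else time.time()

    def unregister(self, ip, port):
        """A peer explicitly leaves when the transfer is done."""
        self.peers.pop((ip, port), None)

    def active_peers(self, now=None):
        """Only peers that announced recently are handed out."""
        now = now if now is not None else time.time()
        return [p for p, seen in self.peers.items()
                if now - seen < self.expiry]
```

A gnutella host cache has no equivalent of `active_peers`: with no central authority, stale entries only age out as each client individually fails to connect, which is why the old port kept seeing traffic for weeks.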

chiefmoo wrote:
Another interesting concept would be if you built a p2p protocol that allowed you to inject packets with a spoofed source IP, but real data. Instead of giving my real IP, I could give a bogus one. The problem with this is that many providers don't allow traffic to go out if the origin doesn't make sense. If I am a router at some store, I won't let traffic go out if it says it's from an IP that isn't supposed to be in my store. On the same note, I won't let traffic in if it says the origin IP is supposed to be in my store. TCP would not work this way, but UDP and ICMP would. The host that was receiving packets wouldn't be able to tell where they came from. I don't know of any p2p programs that use UDP, and certainly none that use ICMP.


Even if you "spoof" the source for a file transfer done via UDP, you still have to know a real IP to contact to start the transfer.


 Post subject: Re: p2p stuffs
PostPosted: Thu May 27, 2004 8:15 am 
Offline

Joined: Mon Mar 15, 2004 8:35 am
Posts: 418
chiefmoo wrote:
Then again, the gnutella protocol (LimeWire, BearShare, Gnucleus, Mutella, many others) is one of the least efficient of the successful p2p networks. I've seen papers suggesting that it's theoretically only efficient up to about 100 well-behaved clients, and hands-on tests I've done suggest that's about right. Add in that many clients out there are not friendly and that the connection between any two nodes on the gnutella network may not be so hot, and you're looking at problems.


Those papers have to be years out of date. I am connected right now to 60 gnutella hosts (30 ultrapeers and 30 leaves) in LimeWire 4.0.5 and it is using at most 12k/s up and at most 10k/s down acting as an ultrapeer. As a leaf, LimeWire uses < 1k/s when connected to 4 ultrapeers.


 Post subject: eMule + Kademlia
PostPosted: Sun May 30, 2004 9:42 am 
I use eMule and it rocks, pretty much because it's the only one that has support for the prominent Kademlia serverless network protocol.

The only thing needed to connect to this network is the IP and port of any eMule client already connected. This is called a bootstrap.
Once a client is in the network, it asks other clients to determine whether it can be contacted freely. This process is very similar to the HighID/LowID check (eMule's way of authenticating) on the servers. If you can be freely contacted, you are assigned an ID (similar to a HighID) and given an "open" status. If you cannot be freely contacted, you are given a "firewalled" status. Currently, firewalled users are not supported and you are then required to connect to a server. Firewall support will be added later.

Searching in Kademlia
In this network it does not matter what you search for. Be it a search for filenames, for sources of a download, or for other users, all work pretty much the same.
There are no servers to keep track of clients and the files they share, so it has to be done by each participating client in the network; in essence, every client is also a small server.
Since every client is identified by a unique hash value, the idea of Kademlia is to associate a certain "responsibility" based on this hash. Each client in the Kademlia network works as a server for certain keywords or sources; the client's hash determines which ones.
So the goal of any kind of search is to find the clients that have responsibility for the current search topic. This is accomplished by calculating the distance to the target client and asking other clients for the shortest route to it.
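That "distance" is less mysterious than it sounds: Kademlia defines it as the XOR of the two hashes, treated as an integer. A minimal sketch (the small integer IDs stand in for real 128-bit hashes):

```python
def kad_distance(id_a, id_b):
    """Kademlia distance between two node/keyword hashes: their XOR,
    interpreted as an integer. Smaller XOR means 'closer'."""
    return id_a ^ id_b

def closest_nodes(target, node_ids, k=3):
    """Return the k known nodes 'responsible' for target, i.e. the ones
    whose IDs are XOR-closest to the target hash."""
    return sorted(node_ids, key=lambda n: kad_distance(n, target))[:k]
```

Each lookup step asks the closest nodes you know for nodes even closer to the target, so the search halves the remaining distance and converges in O(log n) hops.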

And for that, you need credits to download files. Credits reward users who upload: the amount of data transferred determines the amount of credits. Credits are not global, i.e. they can only be used on the client who grants them. Credits are a major modifier when calculating your progress in another client's queue: the more credits you have, the faster you will advance.
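Roughly, the modifier works like this (the constants only approximate eMule's documented rule; check the real source before trusting them):

```python
import math

def credit_modifier(uploaded_mb, downloaded_mb):
    """Sketch of an eMule-style credit modifier: the more data a client
    has uploaded to you, the faster it climbs your upload queue. The
    result is clamped to the range [1, 10], so credits speed you up but
    never let you skip the queue entirely."""
    if uploaded_mb < 1:
        return 1.0                            # no credit earned yet
    ratio = (uploaded_mb * 2) / max(downloaded_mb, 1)
    cap = math.sqrt(uploaded_mb + 2)          # small uploaders can't max out
    return max(1.0, min(ratio, cap, 10.0))
```

Since each client computes this locally from its own transfer history, the credits only count on the client that granted them, exactly as described above.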

And that's true: the more people connect and upload in Kademlia, the more they will download, and the faster their connections will be.


 Post subject:
PostPosted: Wed Jun 02, 2004 4:05 pm 
Kazaa Lite is really good. I'm new to BT, but so far I've been unimpressed. My speed has been 40 kB/s max, for a few seconds. On Kazaa I could download straight up at 100-200 kB/s on popular files, and it has practically every file.


 Post subject:
PostPosted: Tue Jun 08, 2004 3:05 am 
guest wrote:
Kazaa Lite is really good. I'm new to BT, but so far I've been unimpressed. My speed has been 40 kB/s max, for a few seconds. On Kazaa I could download straight up at 100-200 kB/s on popular files, and it has practically every file.


There are many reasons for that. Go and read a few FAQs; they'll explain everything (or should, at least).

Kazaa Lite was a load of bull for me; it didn't have everything I want. eMule is the only P2P app that has everything I want. That said, BitTorrent still kicks ass. Very friendly community.


 Post subject:
PostPosted: Tue Jun 08, 2004 6:42 am 
Offline

Joined: Mon Jun 07, 2004 11:55 am
Posts: 29
Overnet sucks. It gave me 8 spyware applications when I started it.


 Post subject:
PostPosted: Thu Jun 24, 2004 3:41 am 
Y'know, if these protocols were using real multicast, you wouldn't need to worry about leechers, uploaders, and downloaders. A single supplier could provide content to an unlimited number of receivers, so "tit for tat" would be irrelevant.

Of course you have to have a broadcast schedule; it would be inefficient to start broadcasts on the demand of a single downloader.

Retransmits for lost packets may be inefficient, but still more efficient overall than thousands of TCP connections. Just send a multicast "retry" request, starting with a small TTL and bumping it up after each timeout until you get a response. Eventually your request will hit an upstream site that has the missing packet, and they can reply directly to you (unicast), or if there are many retry requests, just multicast the retransmit. (Yes, you have to use UDP, but that's a plus for this purpose.)
-- HighlandSun
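A sketch of that expanding-ring retry in Python (the group, port, and payload are placeholders, and a real client would need a retransmit protocol layered on top of this):

```python
import socket

def ttl_schedule(max_ttl=32):
    """TTLs for an expanding-ring search: start small, double on each
    timeout so the request reaches progressively more distant hosts."""
    ttl, out = 1, []
    while ttl <= max_ttl:
        out.append(ttl)
        ttl *= 2
    return out

def expanding_ring_request(group, port, payload, max_ttl=32, timeout=1.0):
    """Multicast the same 'retry' request with a growing TTL until some
    host in range answers, per the scheme described above."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for ttl in ttl_schedule(max_ttl):
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
        sock.sendto(payload, (group, port))
        try:
            return sock.recvfrom(65536)   # (data, sender) of whoever answered
        except socket.timeout:
            continue                      # nobody in range; widen the ring
    return None
```

Because the TTL bounds how far the multicast propagates, nearby hosts get first crack at answering, and the request only floods the wider network when it has to.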


 Post subject:
PostPosted: Thu Jun 24, 2004 10:31 am 
I don't know either. I'm not a fan of Kazaa Lite. Shareaza is alright: it has pretty good speeds, but only mediocre files. BearShare looked alright, but since I'm firewalled (and tend not to fool with it) it didn't work for me. eMule and eDonkey I have heard good things about but haven't tried. MUTE and Filetopia? Are those new?

Anyway, I prefer WinMX. Speeds aren't always the best, but it is way more advanced than the others I've tried, and OpenNap is pretty nice. If you're looking for privacy, that might be the place to go (OpenNap). Admins can control what they want on the server: minimum share, and whether browsing should be allowed.

I'm new to Bit Torrent but it sounds interesting.
P.S. Should I go with reg. release or a modified one? Why?


Powered by phpBB® Forum Software © phpBB Group