P2P-Zone

P2P-Zone (http://www.p2p-zone.com/underground/index.php)
-   Peer to Peer (http://www.p2p-zone.com/underground/forumdisplay.php?f=5)
-   -   port blocking, throttling, and scrambling (http://www.p2p-zone.com/underground/showthread.php?t=12313)

alphabeater 15-07-02 05:52 PM

port blocking, throttling, and scrambling
 
http://yro.slashdot.org/yro/02/07/14....shtml?tid=153

roadrunner in texas is, apparently, blocking the fasttrack port.

http://homepage.ntlworld.com/j.bucha...x/blocked.html

this site is devoted to solving a similar problem with isps limiting traffic on winmx's ports. on top of this, people have been reporting similar things since the early days of opennap servers. some isps seem very eager to limit what their customers can and can't use their internet access for - and some are almost monopolies in their areas, too.

the way the tcp (as well as udp) protocol works is a key vulnerability of decentralised p2p. for a program to act as a server (i.e. receive any incoming connections), it must listen on a port. fasttrack (kazaa/grokster) uses port 1214, winmx uses 6699 and 6257, opennap uses 8888, gnutella uses 6346, and even freenet has to use a standard port, 8481. although the port is usually configurable, about half the peers in the network need to use the standard port for the network to remain stable - otherwise, they can't find each other.
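just to illustrate what 'listening' means here, this is roughly all a serving peer does (a bare python sketch, using the fasttrack default port purely as an example):

Code:

import socket

# a serving peer binds to a fixed, well-known port and waits for
# incoming connections - that fixed number is exactly what an isp
# can block or throttle.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 1214))        # 1214 = the fasttrack default
server.listen(5)
conn, addr = server.accept()   # blocks until some remote peer connects in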

one solution would be to have users enter a port number (this is the solution widely adopted on opennap, server : port lists instead of just server lists). this can, however, become cumbersome.

thinking about this, i realised - what is the one thing that both the server and client will know before establishing a connection? the ip addresses of both. so, instead of ever using a default tcp port, ports for individual peers could be worked out using an algorithm known by all peers.

for example, if my ip address was 64.124.41.39...

each number in an ip is 0 to 255
so i could take 5 from each number (keeping each one at 250 or below - four of those add up to 1000 at most), making 59.119.36.34
add these numbers together: 59 + 119 + 36 + 34 = 248
multiply by 10: 248 x 10 = 2480

so my p2p program would listen for incoming connections on port 2480 - this could be calculated when the program started, and then quickly worked out by any peer encountering mine just before it connected to me. i think this technique would stop isps from quickly and easily throttling or blocking certain ports used by known p2p programs.
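just to make the idea concrete, here's a rough python sketch of that exact calculation (python and the function name are just my choice for illustration):

Code:

def port_from_ip(ip):
    # subtract 5 from each octet (clamping anything below 5 to 0, a
    # detail the example above glosses over), add the four values
    # together, and multiply by 10 - giving a port of at most 10000.
    octets = [int(part) for part in ip.split(".")]
    adjusted = [max(octet - 5, 0) for octet in octets]
    return sum(adjusted) * 10

# 64.124.41.39 -> 59 + 119 + 36 + 34 = 248 -> 248 x 10 = 2480
print(port_from_ip("64.124.41.39"))

(an ip made up of very small numbers could still produce a low port here, so a real version would want a sanity check on the result as well.)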

simple, but effective.

butterfly_kisses 15-07-02 06:44 PM

Re: port blocking, throttling, and scrambling
 
Quote:

Originally posted by alphabeater

you forgot to add:

Brilliant. I don't know who you are... but I like your ideas. Thanks for sharing - and please share even more with us in the future.

Note to Developers: hope you are listening - I know I am.

Thanks, alphabeater

Mowzer 15-07-02 08:35 PM

Good post, alphabeater.

I hear what you're saying about those pink jerky bastards, but I am glad kazaa is being blocked.

It's nothing but a crap service. It should have gone down the toilet years ago.

The creators, and now its new owners, have really spunked it up. As for ISPs blocking p2p in general:

Bad idea. I'd rather beat off till my hands blister than have those kinda restrictions go into effect.

Scyth 15-07-02 08:59 PM

Why not simply use random ports (like Freenet)? The port information is carried with search results anyway (which is why it's possible to change the port at all), so it shouldn't be necessary to be able to extract/generate the port information from other information.

Still, even then ISPs will simply switch to a more advanced solution that involves traffic monitoring. In order to make this difficult/impossible, some sort of encryption could be used (the keys could be passed along with the search results).
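To make that concrete, here's a rough sketch (using the third-party 'cryptography' package's Fernet cipher, with the dictionary fields and request text as made-up stand-ins rather than any real client's protocol): the serving peer generates a key, advertises it in its search results along with its IP and port, and everything sent over the actual transfer connection is then opaque to anyone watching the wire.

Code:

from cryptography.fernet import Fernet

# The serving peer generates a key and advertises it in its search
# results, alongside its IP and (possibly random) listening port.
key = Fernet.generate_key()
search_result = {"ip": "64.124.41.39", "port": 2480, "key": key}

# The downloading peer encrypts its request with the advertised key...
cipher = Fernet(search_result["key"])
request = cipher.encrypt(b"GET /012345/somefile.gif")

# ...and the serving peer decrypts it. To the ISP, both directions are
# just opaque bytes on an arbitrary port.
print(Fernet(key).decrypt(request))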

alphabeater 15-07-02 09:17 PM

scyth:

generating port numbers from ip addresses would make the network easier to organise, in my opinion - while it's true that randomly generated ports can be carried within search results, that doesn't solve the problem of finding places to connect to in the network in the first place. freenet appears to do this using a large list of nodes' addresses and listening ports - seednodes.ref - that you download along with the client, and can update from sites within freenet (such as the freedom engine). this is a far from ideal approach to the problem.

i agree about encryption, however - it's the best way to stop traffic being monitored, although i can't help thinking that not many isps would be bothered with the time and money it'd take to try to monitor and police everything their users do, not to mention the legal implications of that course of action.

butterfly_kisses 15-07-02 09:21 PM

Quote:

Originally posted by Scyth
Why not simply use random ports (like Freenet)? The port information is carried with search results anyway (which is why it's possible to change the port at all), so it shouldn't be necessary to be able to extract/generate the port information from other information.

I wish I were more familiar with Freenet and how it works. I have tried this with KaZaA, though, by manually modifying the port numbers (default 1214) in the partially downloaded .DAT files, and for whatever reason the KaZaA program rejected the modifications to the .dat file (I used UltraEdit version 9.x)... wait, actually I tried modifying the file's path from the default path that it uses, which is generally

/http://someipaddress:1214/012345/somefile.gif

I tried modifying it to grab /http://someipaddress:1214/c:/windows/temporary%20internet%20files/kmdb.html

just to see if the kazaa protocol/service was or would be vulnerable to this type of exploit... I may need to go back and test that theory again, only this time doing as you suggested and trying a different port.

I have been successful using KaZaA while running a SOCKS server, and KaZaA did use non-standard ports (other than 1214). It's been a while since I did this, but I may still have my notes on it.

What alphabeater is suggesting - letting the clients configure the port number on-the-fly based on a certain calculation - is, I think, GENIUS. Using packet-capturing programs such as Ethereal, you can see pretty much in real time how machines on a network broadcast ARP requests for IP addresses, like 'who has such-and-such IP address? tell so-and-so'. So it's quite possible this could be done with a p2p program.

JackSpratts 15-07-02 10:59 PM

b-b-b-but you can't fool your isp. :N: they know your ip# better than you do! if you change the port based on anything public they'll use that info to reprogram their servers and reject the new one(s). piece o cake. it’s got to be something really disguised. :ND:

- js.

Stoepsel 16-07-02 12:32 AM

And it's also not a good idea to have the clients scan for open ports in search of supernodes. ISPs rightly have something against this, because it could be interpreted as a hacking attempt.

Stoepsel

AYB 16-07-02 03:57 AM

Quote:

/http://someipaddress:1214/012345/somefile.gif

I tried modifying it to grab /http://someipaddress:1214/c:/windows/temporary%20internet%20files/kmdb.html
Suffice to say this doesn't work. The /012345/ directory doesn't actually exist - it's a hash based upon the file itself. So as far as I'm aware, the HTTP server a FastTrack client runs will only allow access to such directories.

As JS said, if it becomes popular/controversial enough that ISPs want to block/throttle the ports, surely they can just calculate which port you're running on themselves?

alphabeater 16-07-02 05:11 AM

Quote:

Originally posted by JackSpratts

they know your ip# better than you do!

eh? it's 4 numbers and 3 dots that myself and my isp know equally well.

Quote:

Originally posted by AYB

As JS said, if it becomes popular/controversial enough that ISPs want to block/throttle the ports, surely they can just calculate which port you're running on themselves?

they could calculate it themselves, but it would stop them just being able to block one port for all customers and instantly deal with the problem.

you think they'll bother to reprogram a port-blocking system to calculate ports using an ip-based algorithm? it'd be an expensive project when you think of the millions of customers (possibly) that such a system would have to deal with.

also, remember that the 'take 5 from each, add together, multiply by 10' was an example. anything actually used could be far, far more complex, and proprietary if need be.

JackSpratts 16-07-02 07:49 AM

Quote:

Originally posted by alphabeater

eh? it's 4 numbers and 3 dots that myself and my isp know equally well.


they could calculate it themselves, but it would stop them just being able to block one port for all customers and instantly deal with the problem.

you think they'll bother to reprogram a port-blocking system to calculate ports using an ip-based algorithm? it'd be an expensive project when you think of the millions of customers (possibly) that such a system would have to deal with.

also, remember that the 'take 5 from each, add together, multiply by 10' was an example. anything actually used could be far, far more complex, and proprietary if need be.

i could guarantee you 90% of pc users worldwide (including a few nu members) have no idea what their actual ip address is at any given moment and maybe 50% of those people have no idea what an internet protocol number is in the first place. their isps on the other hand absolutely have to know by definition, but this is a side issue nonetheless.

furthermore, all an isp has to do is use the algorithm to calculate this new port once, and within seconds have it for every subscriber on their network. it’s no work at all to block one additional port per customer, even if each succeeding one is different. i’m not saying it’s a bad idea alphabeater, on the contrary, it’s a good one. as a matter of fact it’s the best idea i’ve heard in a long while that deals with this problem. but i am saying that encryption may be required, and that the number used to choose a new port must in no way be accessible to your isp and probably not to you either by extension.

it will make it more complicated for your firewall but hey, nobody says you have to use one. personally, i like the idea of running p2ps on stand alone pcs with no firewalls anyway. it’s so much easier and they’re so much faster all around. i call it “skinny dipping”. just back-up your downloads regularly (including your new p2ps themselves and all their settings) and if you get hit a simple quick restore/reinstall puts you back in business. :)

- js.

another ip# look-up.

db_ 16-07-02 08:05 AM

Hi.

The RoadRunner block on the default WinMX listening port 6699 has been going on since at least January this year. The block only affects certain areas of the RR network; in particular, the Houston and Austin regions in Texas were frequently reported by users as having the problem. The 6699 filter usually allowed an outside user to connect into the RR user's 6699 serving port, but the transfer would usually get to around 8KB - 30KB before stalling and timing out completely.

The easy 'solution' to this was for these users to specify a port number other than 6699. This circumvents the 6699 block that RR are running and allows uploads to operate as normal. Another problem that's been occurring, it seems, is that these same RR users are finding they can't search for files on the network correctly, reporting that searches for popular artists fail very early on, with only a handful of results returned before stopping completely. It seems that the RR 6699 block is not only filtering incoming connections to RR users' 6699 server ports, but is also filtering external 6699 connections. The RR user initiating a search for a file asks the other Primary connections it's connected to, and it's invariably connected into these other Primary users via their 6699 port. So if the RR block is restricting search results, it's restricting *external* connections to 6699, which IMO is unjustifiable - but I'm sure RR, if faced with it, could make up a reason to justify blocking the WinMX service as a whole, and not just incoming connections to their users.

Looks something like this...
An off-net user connects into a RR user to download a file...

DL'er:1663 connects into RRuser:6699

The filter prevents this from operating - no problem. It seems the filter is also filtering outside 6699 connections though, like...

RRuser:3527 is connected to PrimaryUser:6699 (superpeer)

When the RRuser initiates a search request, it asks the Primary, or Primaries, it's connected to for results, and those Primaries ask every other Primary. All the results get returned to the initial Primary that the RRuser is connected to, and are then received by the RRuser on their already established 3527-to-6699 connection. If the results are falling under the filter that RR have in place, then RR are not just filtering their own users operating servers on 6699 - they're in fact filtering users on other, external networks running on the 6699 port. How an ISP can justify this I really don't know.

damn, did any of that make any sense or what? :-)

Anyway, the way I see it, the solution is this: the clients, upon installation and configuration, should try to assign a random port, instead of basing every user on the 'centralized' 6699 port by default. You can't really assign dynamic ports that change per session, etc., as far as I'm aware, as users' routers very often need configuring with port forwarding to enable incoming connections to the client. This means the client's port needs to remain stationary so a router forwarding rule can be created, such as 'forward 16699 packets to machine 192.168.0.200'; this requires that the client keep port 16699 static. If the client port changes, the router forwarding becomes useless, and no uploads will take place (note).

So, the clients just need a feature whereby, upon installation, the client chooses a fairly random port to use - say, anything between 1024 and 5000. This creates a network that is not centralized around a single common port, which means ISPs will find it harder to control specific services through filtering of the common ports used by the service.
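Something like this rough sketch is all it'd take (the filename and function name are just made up for illustration) - pick the port once on first run, save it, and reuse it every session so any router forwarding rule set up for it stays valid:

Code:

import os
import random

CONFIG_FILE = "listen_port.cfg"   # made-up filename, purely for the sketch

def get_listen_port():
    # Reuse the port chosen at install/first run, so a router
    # port-forwarding rule set up for it stays valid.
    if os.path.exists(CONFIG_FILE):
        with open(CONFIG_FILE) as f:
            return int(f.read().strip())
    # First run: pick something in the 1024-5000 range suggested above
    # and remember it for every later session.
    port = random.randint(1024, 5000)
    with open(CONFIG_FILE, "w") as f:
        f.write(str(port))
    return port

print(get_listen_port())   # same value every run after the first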

There are other ways an ISP can filter connections, but I think it's not a straightforward thing to do with apps such as WinMX, as the packet data appears as junk and, as far as I'm aware, has no repetitive info within the packet to distinguish it as a specific service to be filtered. The Opennap and KaZaA protocols, for example, are far easier to filter as they contain clear information that can be matched with packet rules - you can even sniff an Opennap packet and read, in plain English, the user's password, username on the network, the file being transferred, the files being served, etc., and this makes a filter rule easy to create. This is a good reason I no longer use Opennap; although I'm not scared of a note from my ISP stating I was sharing blah blah, I just think the protocol is insecure and old compared to what seems to be required nowadays to help protect one's privacy and protect oneself from being busted. :-D

There is another client I noticed that has implemented the randomized port idea - I'm not sure what it was now, maybe Filetopia - which I was very happy to see. Hopefully it'll become standard practice as new clients emerge, scattering the ports away from the defaults presently used.

Right, I'd better shut up now. All the above is only *IMO* and *AFAIK*, so there could well be inaccuracies here and there, and there are no doubt things I'm not aware of yet - other methods and complications, maybe - but hopefully some of it is of interest to someone somewhere. =)


BTW, the site mentioned up top is one I put together to help users with WinMX probs - nothing special or anything by any means. I used to have more information specifically for the blocked RR users available on the site at one point, but thought it best not to mention the actual ISP anymore in case it stirred up any trouble. I know RR were made aware of the info, as the users were phoning RR tech and causing a bit of mayhem, mentioning 'the site' etc. I did have some interesting RR tech support excuses knocking around at one point; only one person afaik was able to get RR to admit the blocks were in place - the other tech responses were a joke tbh. This all happened around February iirc, so I'm surprised it's taken so long to come out (with the news of KaZaA possibly now being blocked).


All good fun.
cheers.

db

http://WinMXHelp.d2g.com/

db_ 16-07-02 08:07 AM

whoops didn't realise it was so damn long, sorry! :J:

JackSpratts 16-07-02 08:20 AM

Quote:

Originally posted by db_

So, the clients just need a feature whereby, upon installation, the client chooses a fairly random port to use - say, anything between 1024 and 5000. This creates a network that is not centralized around a single common port, which means ISPs will find it harder to control specific services through filtering of the common ports used by the service.

hi db, it wasn't too long. :) i can't remember who told me this, it might've been jaan - it was a while ago, but this is something fasttrack was looking at for an upcoming release. what's taking so long i don't know, but it's probably related to their defensive posture and resource allocation vis-à-vis the riaa.

- js.

butterfly_kisses 16-07-02 08:37 AM

Quote:

Originally posted by db_
whoops didn't realise it was so damn long, sorry! :J:
That was lots of GOOD information...please don't apologise for posting it. as a matter of fact...THANK-YOU for posting that and for taking the time/trouble to go into the depth of detail that you did.

We can learn a lot from each other this way.

so again..I thank-you.

Snarkridden 16-07-02 04:12 PM

Good Links Ab... Thanks!
 
Being on NTL with a 24-hour 512k connection that's going like the clappers most of the time, I really needed this info...

http://homepage.ntlworld.com/j.buch...mx/blocked.html


I tried out the suggested port changes and the test system seems to work OK into my server - up & downloads OK, and searches OK with the rest of the network too - so it could be a winner in the "awkward" situations, but as was said, only if the primary nodes talk on those ports too...

Nice post, Db... It took me a long time to read, which means it was useful, and not skipped like so many other long posts...

(Snark's short span of attention!)



:shk: Snark..

alphabeater 16-07-02 06:31 PM

db, thanks from me too for the huge amount of detail you posted - it's the best i've seen describing actual experiences of port blocking. everyone in this thread is suggesting a lot of things i hadn't thought of at all.

looking at the posts above, an interesting set of problems is posed. a way of calculating a port number is needed which:

- is almost random
- cannot be figured out by a peer's isp
- can be figured out by another peer on the network
- is static for use with routers/firewalls

to stop isps creating lists of all the ips they own and which port needs to be blocked for each, at first i considered involving the gmt date in the algorithm, and changing all listen ports every day at gmt midnight. this would solve the first three problems in the list, but make it impossible to solve the last.

however, an answer could be found if some other detail apart from the ip address of each user were known to every peer in the network about each other peer. this could work, to a degree, for kazaa usernames (supernodes return both usernames and ips with search results, although the network is closed so i can't really check that for sure). if kazaa were to implement a port worked out with usernames, not ip addresses, then that could work.

for example,

if i'm alphabeater@KaZaA

the first five characters of my name are 'alpha'
if, for the sake of programming, we number the letters a = 1 up to z = 26 (any digits in the name could be left as they are).
so,
a = 1, l = 12, p = 16, h = 8, a = 1

in this example, we now have 38 (the minimum possible is 5 and the maximum 130). multiply this by 80 and we have a number from 400 to 10400. if the result turns out as 2000 or under here, then it can be subtracted from 10000 to get the final result.

in this example, we're now using port 3040 for listening, and this port could be obtained by anyone who knew that the first five characters of my name are 'alpha', but not automatically listed by an isp.
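a quick python sketch of that, just to show the mechanics (letters counted a = 1 up to z = 26, digits left as they are, as above):

Code:

def port_from_username(name):
    # take the first five characters of the username, value the letters
    # a=1 .. z=26 and digits as themselves, sum them, multiply by 80,
    # and fold anything at 2000 or under away from the low ports by
    # subtracting it from 10000.
    total = 0
    for ch in name.lower()[:5]:
        if ch.isalpha():
            total += ord(ch) - ord("a") + 1
        elif ch.isdigit():
            total += int(ch)
    port = total * 80
    if port <= 2000:
        port = 10000 - port
    return port

# 'alpha' -> 1 + 12 + 16 + 8 + 1 = 38 -> 38 x 80 = 3040
print(port_from_username("alphabeater@KaZaA"))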

this calculation could, again, be a lot more complicated, and the string used could change depending upon the p2p network layout; as long as any peer on the network needing to make a connection could calculate the other peer's port reliably, it wouldn't matter. this technique also has the advantage of generating a static port for as long as the user keeps their identity on the network the same, even if the ip involved is dynamic, making it easier to configure firewalls and routers.

overall, i think known string-based port scrambling could be more effective than ip-based port scrambling.

Scyth 16-07-02 06:43 PM

I still don't understand why any sort of generated port number is necessary. Since an IP address is required and must be communicated somehow, why not simply include the port, a mere 2 bytes of information, in this communication as well?

alphabeater 16-07-02 06:55 PM

Quote:

Originally posted by Scyth
I still don't understand why any sort of generated port number is necessary. Since an IP address is required and must be communicated somehow, why not simply include the port, a mere 2 bytes of information, in this communication as well?
ips are generally communicated across a decentralised p2p using the standard ports of that p2p. there's no way to send ports around if you can't find out which port your gateway peer is listening on.

if these ports are blocked, then this communication cannot take place. if everyone is using random port numbers, then there is no way for communication to take place, as there's no port open for either peer that the other peer knows and can access (try connecting two firewalled hosts on gnutella, the effect is similar).

ports even become confusing and annoying in an opennap-like network, although the problem is not as great as in a fully decentralised one. i feel that generating port numbers from data available to both sides of the communication, before any communication actually takes place between them, is the easiest solution available.

Scyth 16-07-02 11:00 PM

Quote:

Originally posted by alphabeater
ips are generally communicated across a decentralised p2p using the standard ports of that p2p. there's no way to send ports around if you can't find out which port your gateway peer is listening on.
How do you know what IP your gateway peer is at? Who/Whatever gives you its IP can also give you the port number.

alphabeater 17-07-02 07:10 AM

Quote:

Originally posted by Scyth
How do you know what IP your gateway peer is at? Who/Whatever gives you its IP can also give you the port number.
as i've said, this could be useful on a network such as kazaa or winmx because everyone can run on a completely different port, and there's no need for anyone (not even a supernode) to tie itself up keeping large lists of them and passing them backwards and forwards to people in complex webs, as they can be calculated on-the-fly by the program itself.

passing everyone's port around with everyone's ip address means that peers have to go around asking other peers which port they are listening on.. and if you don't know which port a peer is listening on, the catch-22 is that it's impossible to ask them.

db_ 17-07-02 07:44 AM

Hi.

People can run whatever port they like; it makes no difference to the operation of the client on the network. If every user on the Kazoo network changed their listening port to a random one, it wouldn't change a thing regarding how the network operates (as far as I'm aware). It'd simply make it more difficult for an ISP to run filters based on ports - there ain't a common port this way.

I use 32200 for WinMX currently - it simply refers to the current version number, and it helps avoid any complications during transfers caused by any common port filters in place anywhere on the network connection between me and whoever. There's no need for me to advertise or do anything special in order to operate on a different port, other than to change it manually myself.

If I'm a primary here's what happens...

I start WinMX, it contacts the cache
me:4363 contacts cache:7720

cache:7720 gives me IP: port of a OtherPrimary:6699

me:4224 connects into OtherPrimary:6699
(I'm online with the network now)

Some other user, a secondary (SC) user, contacts the cache for a Primary to connect into; the cache responds to the secondary user with its cached IP:Port of my machine...

SC:4256 contacts Cache:7701
Cache:7701 replies to SC:4256
SC:2432 connects into me:32200

I'm sharing files on the network. A remote user sends out a search request, it hits my Primary (me), and I reply to the user requesting results with the file details (name, size, type, etc.), my IP, and my listening port to connect into.

The remote user double-clicks the search result, the information contained within that search result directs the user to connect into my IP:Port, and the transfer starts (all going well).
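In code-ish terms, that last step is the whole reason the listening port can be anything - the hit itself tells the downloader exactly where to connect (the field names and the example address below are only illustrative, not the real WinMX message format):

Code:

import socket

# Illustrative search hit: the result already carries the sharer's IP and
# whatever port they happen to be listening on - nothing assumes 6699.
search_hit = {"file": "somefile.mp3", "size": 3456789,
              "ip": "24.27.10.5", "port": 32200}

def connect_for_download(hit):
    # The downloader connects straight to the advertised IP:Port.
    return socket.create_connection((hit["ip"], hit["port"]), timeout=10)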

That's it really - changing ports doesn't change anything regarding how connections are made within the network (afaik). It just randomizes the ports that the application uses, meaning an ISP can't place a simple filter on any specific port number; it wouldn't do anything.

It's important each user's port remains static, as many users have routers that need to forward incoming packets addressed to specific ports to the appropriate machine. So, upon installation of the client, it'd be preferable if the port was chosen at that point; from there on the router can be configured according to the port chosen during installation. Any user that can't handle port forwarding would choose the easy option of operating the client in the 'firewalled' mode that doesn't require any static listening ports to be defined or configured. I'm not gonna go into the 'firewalled' way of things; I'll just say it's no substitute for static listening ports and can't be used by everyone.

enough for now.

dB.

TankGirl 17-07-02 04:59 PM

A good thread with plenty of ideas and information – thanks alphabeater, db_, Scyth and others! :tu:

I think it is a good general approach to make p2p clients as unpredictable and adaptive as possible so that their use would be very hard to track, block or control by your ISP or anybody else. Random port selection from user-defined range is a good first measure against mechanical blocking. Fully encrypted communications between peers would be the next natural step. If your ISP has no way of telling what you communicate through protected pipes with other peers, p2p becomes externally indistinguishable from Virtual Private Networking practiced routinely by many businesses today.

Quote:

Originally posted by alphabeater
looking at the posts above, an interesting set of problems is posed. a way of calculating a port number is needed which:

- is almost random
- cannot be figured out by a peer's isp
- can be figured out by another peer on the network
- is static for use with routers/firewalls
Point 2 fails on open networks because of point 3: your ISP – just like Hilary Rosen and Jack Valenti – is free to enter any open network as a normal peer and access the same information as any other peer, making port-sniffing bots etc. possible.

- tg ;)

Scyth 17-07-02 05:02 PM

Quote:

Originally posted by alphabeater
...and if you don't know which port a peer is listening on, the catch-22 is that it's impossible to ask them.
...But if you don't know what IP a peer is at, it's impossible to ask them, too.

Yes, it is necessary to know the IP and port of at least one peer in order to bootstrap yourself onto the network. That information has to come from outside the network. This is normally done using a combination of static files and DNS resolution. Once you're on the network, the IPs and ports of the rest of the hosts become available to you.

alphabeater 17-07-02 05:37 PM

Quote:

Originally posted by Scyth
Yes, it is necessary to know the IP and port of at least one peer in order to bootstrap yourself onto the network.
which is easier: knowing a few domains (told to you by friends) and being able to work out their ips and ports from there as a gateway into the network, or having to get both from some kind of host cache (the weak point of gnutella in particular)?

Quote:

Originally posted by TankGirl
Point 2 fails on open networks because of point 3: your ISP – just like Hilary Rosen and Jack Valenti – is free to enter any open network as a normal peer and access the same information as any other peers
... unless the network is built on a trust layer - encrypted communications as you say, accompanied by users building their own networks of friends (and friends of friends, etc.) instead of having them automatically built for them by the p2p program. this would make it far easier to remove unwanted peers or files from the network quickly.

in my opinion, the future of p2p doesn't lie in huge, global networks, but in smaller, more personal ones made up of friends and interconnected at certain points.

Scyth 17-07-02 06:14 PM

Quote:

Originally posted by alphabeater

which is easier: knowing a few domains (told to you by friends) and being able to work out their ips and ports from there as a gateway into the network, or having to get both from some kind of host cache (the weak point of gnutella in particular)?

I've done both (pre-host-cache-gnutella days), and I think that using a host cache is easier. But if you want to get the information from your friends, you can get the ports from them too.

alphabeater 17-07-02 06:21 PM

Quote:

Originally posted by Scyth
if you want to get the information from your friends, you can get the ports from them too.
opennap shows how messy this can get - ports can be difficult to remember, and then there's the added disadvantage that many people don't understand what they are or what they're for.

i hear what you're saying, i'm just trying to find a way which means that people don't ever need to worry about ports again (unless they're behind a router/firewall, of course), because the program can do it for them.

TankGirl 17-07-02 06:33 PM

Quote:

Originally posted by alphabeater
... unless the network is built on a trust layer - encrypted communications as you say, accompanied by users building their own networks of friends (and friends of friends, etc.) instead of having them automatically built for them by the p2p program. this would make it far easier to remove unwanted peers or files from the network quickly.

in my opinion, the future of p2p doesn't lie in huge, global networks, but in smaller, more personal ones made up of friends and interconnected at certain points.

I agree 100 %. :tu:

You are describing features of socially intelligent p2p topology, and I firmly believe that something like it will soon be implemented on decentralized p2p. Once available, group and community tools will make the whole decentralized scene so much more interesting and exciting! There will always be a place for a global and open sharing layer but groups and communities with their internal activities and their mutual interactions will take the game of p2p to a whole new level. :ND:

- tg ;)

JackSpratts 17-07-02 07:35 PM

Quote:

Originally posted by alphabeater

which is easier: knowing a few domains (told to you by friends) and being able to work out their ips and ports from there as a gateway into the network, or having to get both from some kind of host cache (the weak point of gnutella in particular)?

that's right. easier, but not better. host caches (and host catchers) are the weak point of gnutella. introduced just over 2 years ago to make the system more workable for newbies and the less technically inclined, they showed their disadvantage almost immediately when napster got slammed by the court in july 2000. with everyone suddenly checking out gnutella and using the caches with the same hosts as the only way on they knew how, they started clumping nodes in ever-tighter circles.

before host caching, users got their addresses from friends, web pages and ircs in an ever-evolving matrix of different hosts that all but guaranteed ideal dispersal. as a result each host was connected to an average 10,000 nodes in a fairly smooth system-wide configuration that resembled a land dotted by small cities and townships interconnected by only a few roads. after host caching some cells dropped down to just a few dozen in size.

adding to the problem were host catchers, where a list of known hosts is deposited for reference. this created, as gene kan said, “permanent instability in the network as nodes log on and connect to hosts they remember, irrespective of the fact that those hosts are often poor choices in terms of capacity and topology”. it gets worse with a robust client that refuses to release them and it's something that needs to be avoided in a next generation p2p.

as for providing true anonymity, with untraceable downloads from untraceable hosts - well, that’s like the holy grail in peer-to-peer applications.

actually, it is the holy grail.

it's one of those things you devote your life to finding but never do.

still, a floating ip is as good a place as any to start. we probably have to protect filesharing just long enough for the riaa to give up or congress to heed the will of the people which might be a while (a long while). but i don’t really think it will be forever even if it’s going to feel like it.

- js.

