Old 27-06-02, 02:00 PM   #23
SA_Dave
Guardian of the Maturation Chamber

Quote:
Originally posted by TankGirl
Regarding the architecture my advice would be to pay attention to the issue of permanent, verifiable peer identities. It's a bit trickier to implement in the decentralized environment than on a centralized system but it is worth it.
Now if that isn't the understatement of the century! For this to work, you'd either need a server/server-farm to maintain the identities, or each superpeer would have to maintain an index of every client, or at least a dependable group of superpeers would, which has the disadvantage that every supernode becomes as dependent on that group as on a central server. And since not every superpeer will be always-on & not every system can be guaranteed to be permanent or stable, the index would have to be replicated across them, RAID-style. This would entail massive amounts of hard-drive space & bandwidth being used for this purpose alone. Furthermore, each client would require some sort of unique encrypted key or fingerprint, & how can you guarantee that it won't be lost in a format etc.? I'd rather those resources were dedicated to file transfers & storage of the files themselves! This may be possible in the future when everyone has broadband & unlimited storage space, but that's a far-off dream.
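For what it's worth, here's a rough sketch (Java, with hypothetical class & method names, not taken from any existing client) of a self-certifying identity that wouldn't need a central index at all: the permanent ID is just a hash of a public key, so any peer can verify a signed message on its own.

[code]
import java.security.*;

// Hypothetical sketch: a peer's permanent identity is the hash of its
// public key, so signatures can be verified without a central server.
public class PeerIdentity {
    private final KeyPair keys;

    public PeerIdentity() throws GeneralSecurityException {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        this.keys = gen.generateKeyPair();
    }

    // Fingerprint = SHA-1 of the encoded public key, as a hex string.
    public String fingerprint() throws GeneralSecurityException {
        byte[] digest = MessageDigest.getInstance("SHA-1")
                                     .digest(keys.getPublic().getEncoded());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // Sign a message so other peers can check it came from this identity.
    public byte[] sign(byte[] message) throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA1withRSA");
        sig.initSign(keys.getPrivate());
        sig.update(message);
        return sig.sign();
    }

    // Anyone holding the sender's public key can verify, no lookup required.
    public static boolean verify(PublicKey pub, byte[] message, byte[] signature)
            throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA1withRSA");
        sig.initVerify(pub);
        sig.update(message);
        return sig.verify(signature);
    }
}
[/code]

Of course this doesn't solve the "lost in a format" problem: if the private key isn't backed up, the identity is gone for good, which is exactly the kind of burden I'd rather not dump on ordinary users.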

Personally, I think that an app with a built-in web community (e.g. Bitzi comments, hashlinks & forums) is the best solution right now. If people could easily find ways around fakers, viruses etc., while learning about their favourite client, their computer & their fellow netizens, it would encourage them to become better members themselves. This would have a beneficial ripple effect on all the communities/networks they participate in. When I first started using FT clients, I was only interested in the content. It didn't occur to me that a community even existed! This is why it should be easily accessible from the client side. Plus, many clients with strict anti-leeching policies are failures; I suggest reading this article posted by naz yesterday. Also, you shouldn't neglect the fact that many people enjoy multiple accounts & the anonymity that today's p2p clients offer. That does cause problems as far as fakers, virus sharers & spammers are concerned, but it also attracts many decent citizens who might not otherwise participate out of concern about snooping.

Quote:
Originally posted by AYB
The very first release will probably only connect to OpenNap and Gnutella but bear in mind we will have the whole proprietary-to-generic system in place for this which will mean adding a network is easy - expect at least one new network with every major release.
This sounds interesting! As a disgruntled dial-up user, I'm frustrated by the frequent upgrades of clients, which require clean installs. This is why I, like many others, restrict myself to about two clients or fewer. A modular design would be revolutionary! I'm not necessarily asking for an auto-upgrade feature (although it would be nice), but plugins sound like a good idea. It would also be nice if people could customise their upgrades, e.g. download only the network features, UI updates or main executable patches/improvements that they want or need. As far as I'm concerned: SIZE MATTERS. This is another reason why I prefer Java to .NET (which has already raised a few security concerns due to its flawed documentation, something the Linux version probably won't have to deal with). Ethen brought up some good points about the efficiency of .NET-based apps, but I can't comment, as I tried ten times in a row to download it last month & it failed every time almost at the end! I'm sure not all .NET apps are that small, & there's every likelihood that security patches & upgrades will be required as the project evolves. I'd rather not use it right now for these reasons.
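To make the modular idea concrete, here's the sort of plugin interface I'm imagining (a rough sketch with made-up names, not anything AYB has actually described), where each network ships as its own module that the client can load or update independently:

[code]
import java.util.List;

// Hypothetical sketch of a per-network plugin interface: each network
// (OpenNap, Gnutella, whatever comes next) is a separate module, so a
// dial-up user only downloads the pieces they actually want.
public interface NetworkPlugin {
    String name();                      // e.g. "Gnutella"
    String version();                   // lets the client offer patch-sized upgrades
    void connect() throws Exception;    // join the network
    void disconnect();
    List<SearchResult> search(String query) throws Exception;
    void download(SearchResult result, String savePath) throws Exception;
}

// Minimal result record shared by every network module.
class SearchResult {
    final String fileName;
    final long sizeBytes;
    final String sourcePeer;

    SearchResult(String fileName, long sizeBytes, String sourcePeer) {
        this.fileName = fileName;
        this.sizeBytes = sizeBytes;
        this.sourcePeer = sourcePeer;
    }
}
[/code]

The client core would just iterate over whatever plugins are installed, so adding a network really could be as easy as AYB says, & upgrading one network wouldn't mean re-downloading the whole app over dial-up.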

I'm going on a bit now, but another feature I believe is important is priority. If a rare file is available, I want it to receive the utmost attention! Another thing I hate is when a common file is 99% complete, yet the client starts downloading another queued file from 0% because the temporary files are prioritised alphabetically or for some other arbitrary reason! Priority for uploads is another way to ease the distribution of rare material; right now it requires manually editing shares or moving partial downloads.
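Something like the following ordering is what I have in mind (a sketch with invented fields, purely to illustrate): rarest files first, & among equally common files, the one closest to completion, so that 99%-complete download actually gets finished.

[code]
import java.util.Comparator;
import java.util.PriorityQueue;

// Hypothetical download-queue ordering: rarer files first, then whichever
// file is closest to completion, instead of alphabetical or arbitrary order.
class QueuedDownload {
    final String fileName;
    final int knownSources;       // how many peers currently offer the file
    final double percentComplete; // 0.0 - 100.0

    QueuedDownload(String fileName, int knownSources, double percentComplete) {
        this.fileName = fileName;
        this.knownSources = knownSources;
        this.percentComplete = percentComplete;
    }
}

class DownloadQueue {
    static final Comparator<QueuedDownload> PRIORITY =
            Comparator.comparingInt((QueuedDownload d) -> d.knownSources) // rare first
                      .thenComparing(d -> -d.percentComplete);            // then most complete

    public static void main(String[] args) {
        PriorityQueue<QueuedDownload> queue = new PriorityQueue<>(PRIORITY);
        queue.add(new QueuedDownload("common_song.mp3", 40, 99.0));
        queue.add(new QueuedDownload("rare_bootleg.mp3", 1, 10.0));
        queue.add(new QueuedDownload("another_common.mp3", 40, 5.0));
        // Prints: rare_bootleg.mp3, common_song.mp3, another_common.mp3
        while (!queue.isEmpty()) System.out.println(queue.poll().fileName);
    }
}
[/code]

The same comparator could drive upload slots too, so rare material gets pushed out first without anyone manually shuffling their shares.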

Finally, I think that embedded metadata, although useful, can lead to "fragmentation" of distribution. People change filenames or tags all the time for selfish reasons, without re-encoding or otherwise editing the actual contents. Swarmed downloads can become irrelevant in some of these scenarios, as source-matching is typically based on file size (which changes slightly when tags are edited) as well as type & name (which change whenever a file is renamed). If only multi-source downloads could actually use all the sources available, & you could change the details once the download completes. If you could develop a hash which ignores tags, as well as one which includes them (two hashes per file), that might be somewhat phenomenal! Another way around the problem would be a simple external database (an .xml file, for example) which overlays details onto files for searches; the trouble is that this wouldn't be compatible with most external audio/video players, so we'd need a change in standards before it becomes feasible. As others suggested, sharing of partials is another solution, but I don't think it's enough when dealing with high-speed leechers.
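Here's roughly what the two-hashes-per-file idea could look like (a sketch assuming MP3s with standard ID3v1/ID3v2 tags; the offsets come from the tag formats themselves, not from any existing client): the "audio hash" skips the tag regions at the start & end of the file, so a retagged or renamed copy still matches, while a normal hash over every byte can identify the exact copy on disk.

[code]
import java.io.RandomAccessFile;
import java.security.MessageDigest;

// Sketch of a tag-ignoring hash for MP3s: skip the ID3v2 header at the
// front & the 128-byte ID3v1 block at the end, then hash what's left.
public class TagIgnoringHash {

    public static String audioHash(String path) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            long start = 0, end = f.length();

            // ID3v2: "ID3" + version(2) + flags(1) + syncsafe size(4).
            byte[] head = new byte[10];
            if (f.read(head) == 10 && head[0] == 'I' && head[1] == 'D' && head[2] == '3') {
                int tagSize = (head[6] << 21) | (head[7] << 14) | (head[8] << 7) | head[9];
                start = 10 + tagSize;
            }

            // ID3v1: fixed 128-byte block starting with "TAG" at the very end.
            if (end >= 128) {
                byte[] tail = new byte[3];
                f.seek(end - 128);
                f.read(tail);
                if (tail[0] == 'T' && tail[1] == 'A' && tail[2] == 'G') end -= 128;
            }

            // Hash only the audio region between the tags.
            MessageDigest sha = MessageDigest.getInstance("SHA-1");
            f.seek(start);
            byte[] buf = new byte[8192];
            long remaining = end - start;
            while (remaining > 0) {
                int n = f.read(buf, 0, (int) Math.min(buf.length, remaining));
                if (n < 0) break;
                sha.update(buf, 0, n);
                remaining -= n;
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : sha.digest()) hex.append(String.format("%02x", b));
            return hex.toString();
        }
    }
}
[/code]

The second, tag-inclusive hash would just be the same loop run from byte 0 to the end of the file, & clients could advertise both so sources still match on the audio hash even after a retag.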

Okay, that's all.