P2P-Zone  


Peer to Peer The 3rd millennium technology!

Old 31-08-23, 05:42 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,017
Peer-To-Peer News - The Week In Review - September 2nd, ’23

Since 2002

Early Edition



September 2nd, 2023


Sports Leagues Ask US for “Instantaneous” DMCA Takedowns and Website Blocking

UFC, NBA, NFL want bigger crackdown on pirated streams of live sports.
Jon Brodkin

Sports leagues are urging the US to require "instantaneous" takedowns of pirated livestreams and new requirements for Internet service providers to block pirate websites.

The Digital Millennium Copyright Act of 1998 requires websites to "expeditiously" remove infringing material upon being notified of its existence. But pirated livestreams of sports events often aren't taken down while the events are ongoing, said comments submitted last week by Ultimate Fighting Championship, the National Basketball Association, and National Football League.

The "DMCA does not define 'expeditiously,' and OSPs [online service providers] have exploited this ambiguity in the statutory language to delay removing content in response to takedown requests," the leagues told the US Patent and Trademark Office in response to a request for comments on addressing counterfeiting and piracy.

The leagues urged the US "to establish that, in the case of live content, the requirement to 'expeditiously' remove infringing content means that content must be removed 'instantaneously or near-instantaneously' in response to a takedown request." The leagues claimed the change "would be a relatively modest and non-controversial update to the DMCA that could be included in the broader reforms being considered by Congress or could be addressed separately." They also want stricter "verification measures before a user is permitted to livestream."

The UFC separately submitted comments on its own, urging the US to require that ISPs block pirate sites. The UFC said that a "significant and growing" number of websites, typically operated from outside the US, don't respond to takedown requests and thus should be blocked by broadband network operators. The UFC wrote:

Unlike many other jurisdictions around the world, the US lacks a "site-blocking" regime whereby copyright owners may obtain no-fault injunctions requiring domestic Internet service providers to block websites that are primarily geared at infringing activity. A "site-blocking" regime, with appropriate safeguards to prevent abuse, would substantially facilitate all copyright owners' ability to address piracy, including UFC's.

Website-blocking is bound to be a controversial topic, although the Federal Communications Commission's now-repealed net neutrality rules only prohibited blocking of "lawful Internet traffic." While the UFC said it just wants "websites that are primarily geared at infringing activity" to be blocked, a site-blocking regime could be used more expansively if there aren't strict limits.

Big Tech opposes major changes

A Big Tech lobby group urged the US to avoid requiring more onerous enforcement obligations. The Computer & Communications Industry Association (CCIA) said the current notice-and-takedown legal framework provides "an efficient way to expeditiously remove allegedly infringing content from Internet services, while fostering cooperation between relevant stakeholders."

The CCIA continued:

Under both existing copyright law and trademark law, there is no obligation on the part of online service providers to proactively monitor or enforce infringements. Rather, this is a matter of discretion and policy for each service, and should remain that way. The imposition of proactive enforcement obligations would be less effective, would inevitably negatively impact free speech and legitimate trade, and would introduce untold unintended consequences—digital services would be disincentivized from innovating and would do only what the law required, benefiting no one.

The CCIA also told the US that "the most effective way to prevent infringement is to ensure that members of the public, most of whom want to pay for content, can lawfully consume works digitally whenever and wherever they want."

Google submitted comments touting its anti-piracy systems while pointing out that the takedown process is too easily abused. "Unfortunately, we have seen certain actors abuse our takedown system as a pretext for censorship and anticompetitive behavior," Google said. "We are committed to ensuring that we detect and reject bogus infringement allegations, such as removals for political or competitive reasons, even as we battle online piracy."

Premier League wants “live takedown tool”

The US received comments about Automated Content Recognition (ACR) systems from England's Premier League. ACR systems can "prevent unauthorized streams being uploaded onto the Internet," the league said.
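
Neither the Premier League's comments nor the article describe how ACR actually works under the hood. As a rough, purely illustrative sketch of the fingerprint-and-compare idea behind such systems, the Python snippet below computes a simple average-hash of a frame and checks an uploaded frame against a reference; it is not YouTube's Content ID or any platform's real tool, and the hash size, threshold, and file names are invented assumptions.

[code]
# Toy illustration of the content-recognition idea: fingerprint a reference frame
# and compare uploads against it. NOT any platform's actual ACR system; the hash
# size, match threshold, and file names are arbitrary assumptions.
from PIL import Image

HASH_SIZE = 8          # 8x8 grid -> 64-bit fingerprint
MATCH_THRESHOLD = 10   # max differing bits before two frames are called a match

def average_hash(path: str) -> int:
    """Shrink a frame to 8x8 grayscale and record whether each pixel is above the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def looks_like_broadcast(reference_frame: str, uploaded_frame: str) -> bool:
    """True if the uploaded frame's fingerprint is close enough to the reference frame's."""
    return hamming(average_hash(reference_frame), average_hash(uploaded_frame)) <= MATCH_THRESHOLD

# Hypothetical usage:
# if looks_like_broadcast("reference_frame.png", "uploaded_frame.png"):
#     flag_upload_for_review()
[/code]
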

YouTube and Facebook already have such systems, the Premier League said. But for platforms without ACR, the Premier League said it wants live takedown tools that rights owners would operate themselves.

"The Premier League understands that not all intermediaries are able to develop and provide ACR systems," the league said. "In such cases, a live takedown tool, to be operated by the rightsowners, is a technologically simple and low-cost alternative. Such tools need to be easy to use, and a rebuttable presumption should be applied in favour of the rightsowner rather than the streamer."
https://arstechnica.com/tech-policy/...site-blocking/





Anti-Piracy Group Takes Massive AI Training Dataset ‘Books3’ Offline
Kyle Barr

One of the most prominent pirated book repositories used for training AI, Books3, has been kicked out from the online nest it had been roosting in for nearly three years. Rights-holders have been at war with online pirates for decades, but artificial intelligence is like oil seeping into copyright law’s water. The two simply do not mix, and the fumes rising from the surface just need a spark to set the entire concept of intellectual property rights alight.

As first reported by TorrentFreak, the large pirate repository The Eye took down the Books3 dataset after the Danish anti-piracy group Rights Alliance sent the site a DMCA takedown. Now trying to access that dataset gives a 404 error. The Eye still hosts other training data for AI, but the portion allotted for books has vanished.

Rights Alliance told Gizmodo it sent The Eye a takedown request, and the site took down the content last month. The group said the Books3 dataset contained around 150 titles published by their member companies. Rights Alliance also reached out to AI model hosting site Hugging Face (which hosted a datacard and link to the Books3 download) as well as EleutherAI. Both organizations pointed the anti-piracy group toward The Eye.

The nonprofit research group EleutherAI originally released Books3 as part of the AI training set The Pile, an 800 GB open source chunk of training data comprising 22 datasets specifically designed for training language models. Rights Alliance said the organization “denied responsibility” for Books3. Gizmodo reached out to EleutherAI for comment, but we did not receive a response.

The Eye claims it regularly complies with all valid DMCA requests, though that dataset was originally uploaded by AI developer and prominent open source AI proponent Shawn Presser back in 2020. His stated goal at the time was to open up AI development beyond companies like OpenAI, which trained its earlier large language models on the still-unknown “Books1” and “Books2” repositories. The Books3 repository contained 196,640 books, all in plain .txt format, and was supposed to give fledgling AI projects a leg up against the likes of ChatGPT-maker OpenAI.

Over Twitter DM, Presser called the attack on Books3 a travesty for open source AI. While other major companies and VC-funded startups get away with including copyrighted data in their training data, grassroots projects need something to compete—and that’s what Books3 was for.

“The only way to replicate models like ChatGPT is to create datasets like Books3,” Presser said. “And every for-profit company does this secretly, without releasing the datasets to the public… without Books3, we live in a world where nobody except OpenAI and other billion-dollar companies have access to those books—meaning you can’t make your own ChatGPT. No one can. Only billion-dollar companies would have the resources to do that.”

For as long as media industry groups have fought against piracy, few expected the next front in the never-ending copyright war to be AI. In a phone interview with Gizmodo, Rights Alliance CEO Maria Fredenslund said the organization is actively working to take down other copies of Books3. But this is just the start, and anti-piracy groups now have a new target to focus on beyond the usual boogeymen of file-sharing services and pirate libraries.

“We are very worried. It’s a really huge development in technology and how the content is used,” Fredenslund said. “In a way, we see it as the same as 10 years ago when we discussed file sharing, and governments were very afraid of regulating the internet because, in their eyes, everything had to be free. It turned out that copyright also needed to be regulated on the internet as well as in any other aspect.”

It’s not like there are no more copies of Books3 being hosted on the internet. After the books were taken down last week, Presser posted two new Books3 download links on his Twitter profile. Rights Alliance said it will continue to pursue sites that host the dataset, but as any old salt of an internet pirate would tell you, once a file’s out and available, it never truly goes away.

Meta is Also Using Books3 for Its AI Models

Comedian Sarah Silverman was just one of several authors who signed on to a class action lawsuit against Meta, claiming the company stole their books in order to train its LLaMA AI. The lawsuit mentions that Meta used the Books3 repository for training its AI but notes that Meta did not say which works were contained within those gigabytes of data.

In its whitepaper describing the original LLaMA language model, Meta researchers described Books3 as a “publicly available dataset for training large language models.” Meta noted that the dataset came from The Pile.

Growing AI models requires an enormous amount of information, and for close to a decade the technology’s development has depended on using protected text. Earlier versions of OpenAI’s language model from just two or three years ago were trained on datasets like BookCorpus, which contained thousands of scraped-up scraps of book text from sites like Smashwords. That dataset was only a few gigabytes of data, but researchers found that it included works that were copyrighted, or required payment to access.

OpenAI’s GPT-3 model used the Books2 training set to train its AI. Books1 and Books2 together make up close to 15% of GPT-3’s training data, though there is little to no precise information on what they contain. Some have speculated the Books2 data was scraped from Libgen, the pirate library also known as Library Genesis. There’s even less information on what’s contained in GPT-4’s 45 terabytes worth of training data.

Big tech companies are increasingly uninterested in sharing this data, knowing the more they do, the more other people can build similar AI models, or tangle them up in lawsuits. Then again, the costs for training these massive models are staggering, especially for larger models.

But while OpenAI has been revealing less of its training data over the years, we know exactly what’s gone into the Books3 repository. The dataset was derived from a copy of the Bibliotik library. Bibliotik is a so-called “shadow library” akin to other, industry-derided sources like Libgen, Z-Library, and Sci-Hub. Presser had to build scripts that managed to turn PDFs and images into usable .txt files, a very labor-intensive task.
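
Presser's actual conversion scripts aren't shown in the article, but the PDF-to-plain-text step he describes can be sketched with the pypdf library, as below. This is a guess at the general workflow rather than his code: scanned books with no embedded text would additionally need OCR, and the directory names are hypothetical.

[code]
# Rough sketch of turning PDFs into plain .txt files, the kind of step Presser
# describes; not his actual scripts. PDFs that are just scanned images would
# need OCR instead, which is not handled here. Directory names are hypothetical.
from pathlib import Path
from pypdf import PdfReader

def pdf_to_txt(pdf_path: Path, out_dir: Path) -> Path:
    """Extract the embedded text from every page of a PDF and write it to a .txt file."""
    reader = PdfReader(str(pdf_path))
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    out_path = out_dir / (pdf_path.stem + ".txt")
    out_path.write_text(text, encoding="utf-8")
    return out_path

if __name__ == "__main__":
    out_dir = Path("books_txt")                           # hypothetical output directory
    out_dir.mkdir(exist_ok=True)
    for pdf in sorted(Path("books_pdf").glob("*.pdf")):   # hypothetical input directory
        pdf_to_txt(pdf, out_dir)
[/code]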

“My goal was to make it so that anybody could [create these models.] It felt crucial that you and I could create our own ChatGPT if we wanted to,” Presser said. “Unless authors intend to somehow take ChatGPT offline, or sue them out of existence, then it’s crucial that you and I can make our own ChatGPTs, for the same reason it was crucial that anybody could make their own website back in the ‘90s.”

Fredenslund said her group is looking to “reach out” to Meta about the copyrighted content being used to train its AI. Meta is unlikely to retrain its entire AI model to placate copyright holders, and there is little worldwide regulation mandating transparency for AI models. The European Union is currently working on an AI Act that would force companies to provide some model transparency, but Fredenslund said AI developers need to be forced to share the specifics of their training data, including what precise works were used to create their AI models.

“We hope this attitude toward using illegal content will change, that they will not do that in the future,” she said. “If we want to be able to actually control the copyright in this aspect, then we actually need to know what the models are trained on.”

As noted in past forum comments, Presser actively worked with EleutherAI to add the Books3 dataset to The Pile. EleutherAI has used The Pile and other data to craft its own AI models, including one called GPT-J that was originally meant to compete with OpenAI’s GPT-3.

Meta went as far as to claim that the original LLaMA-65B model didn’t perform as well as some other, larger models like PaLM-540B because it “used a limited amount of books and academic papers” in its pre-training data. The original LLaMA was also trained on C4, a cleaned-up version of the Common Crawl web-scrape dataset. Researchers found that the C4 training set included large amounts of published work, along with content from propaganda and far-right websites. Those researchers told the Washington Post the copyright symbol appeared more than 200 million times in the C4 training set.

Since then, Meta has clammed up hard about what goes into its language models. Last month, Meta released a newer, bigger language model called Llama 2. This time, Meta worked with Microsoft and trained on 40% more data than it used for its previous model, though in its whitepaper the company was much more hesitant to state outright what its latest model was trained on. The only reference to its training data is that it is “a new mix of publicly available online data.” As the friction between AI and copyright grows hotter, companies are less and less likely to share exactly what’s contained in the morass of AI training data.
https://gizmodo.com.au/2023/08/anti-...80%B2-offline/





US Copyright Office Wants to Hear What People Think About AI and Copyright

The agency is opening the public comment period on August 30th.
Emilia David

The US Copyright Office is opening a public comment period around AI and copyright issues beginning August 30th as the agency figures out how to approach the subject.

As announced in the Federal Register, the agency wants to answer three main questions: how AI models should use copyrighted data in training; whether AI-generated material can be copyrighted even without a human involved; and how copyright liability would work with AI. It also wants comments around AI possibly violating publicity rights but noted these are not technically copyright issues. The Copyright Office said if AI does mimic voices, likenesses, or art styles, it may impact state-mandated rules around publicity and unfair competition laws.

Written comments are due on October 18th, and replies must be submitted to the Copyright Office by November 15th.

The copyright status of AI training data and the output of generative AI tools has become a hot topic for politicians, artists, authors, and even civil rights groups, making it a potential testing ground for coming AI regulation. The Copyright Office says that “over the past several years, the Office has begun to receive applications to register works containing AI-generated material.” It may use the comments to inform how it decides to grant copyright in the future.

The Copyright Office was involved in a lawsuit last year after it refused to grant Stephen Thaler rights to an image created by an AI platform. Earlier this month, a Washington, DC, court sided with the US Copyright Office in the case, stating copyright has never been handed to any work without a human involved.

Meanwhile, many large language models that power generative AI tools ingest information that’s freely available online. While it’s not often clear if copyrighted material is part of a given tool’s training dataset, several lawsuits have already been filed alleging copyright infringement. Three artists sued Stability AI (developer of the Stable Diffusion image generator), Midjourney, and the art website DeviantArt for allegedly taking their art without their consent and using it to train AI models. Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey filed their own legal action against OpenAI and Meta for allegedly using their books to help improve ChatGPT and LLaMA.

Concerns over the usage of intellectual property in AI models also prompted several news organizations to block OpenAI’s web crawler to prevent it from scraping their data.

Lawmakers have invited various AI stakeholders to discuss the best way to regulate AI. Senate Majority Leader Chuck Schumer even called on his peers to pick up the pace with rulemaking around the technology.
https://www.theverge.com/2023/8/29/2...ublic-comments





New Single-Frame Watermark Technology can Detect Piracy from just a Screenshot

A cloud-based approach to the content watermarking issue
Alfonso Maruccia

In context: Watermarks are identifying patterns hidden within a piece of paper, an image, or other types of content. Manufacturers and providers can use them to detect counterfeiting or piracy. Now, a Berlin-based company promises an even stronger watermarking system that works in the cloud.

German company castLabs recently introduced "single-frame forensic watermarking," a new, cloud-based way to easily and reliably identify piracy and IP theft. castLabs said that its novel approach allows the corporation to embed "tunable robustness level watermarks" in digital assets such as images, videos, documents, or any other type of digital file.

The system is presented as a way to protect copyrighted content in any possible scenario. Even in cases of distortion or obstruction, the new watermarking technology can seemingly work with just a single image to retrieve user IDs, IP addresses, session information, and other detailed user data.

Single-frame forensic watermarking conceals vital information within a single frame of digital media, castLabs explained. The watermark can work in conjunction with other security measures such as Digital Rights Management (DRM) protections. The technology is split into two parts: an embedder and an extractor.

castLabs' unique algorithm embeds the watermark during the encoding process through the company's cloud-based Video Toolkit platform. The watermark is embedded server-side, castLabs revealed, with unique IDs that are "strategically" hidden within video frames or other visual digital assets. Its visibility can be "precisely" regulated to serve different use cases, seemingly providing a high survival level even in low-bitrate video and single image files.

The second part of the system, a cloud-based (AWS) extractor, can scan various areas of a video frame, a document, or an image, detecting the hidden watermark with what castLabs describes as "remarkable resilience." This so-called "blind extraction" approach can retrieve the hidden pattern from watermarked content even when the original, unwatermarked asset is no longer available for comparison.

castLabs is promoting its single-frame forensic watermarking solution to companies and organizations interested in taking "swift action" against content theft, as the extracted watermark can pinpoint the source of a leak within the supply chain. Furthermore, the system can provide "renewed deterrence," as potential infringers are aware that they can now be easily tracked, and "solid evidence" of content ownership to prove unauthorized uses in court.
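
castLabs has not published its algorithm, so the sketch below is only a toy illustration of the embed-then-blind-extract workflow described above, hiding a numeric session ID in a frame's least-significant bits. A real forensic watermark is engineered to survive re-encoding, scaling, and screenshots, which this naive approach would not; the function names, file names, and ID are assumptions for illustration.

[code]
# Toy embedder/extractor illustrating a per-frame watermark workflow.
# NOT castLabs' algorithm: plain LSB embedding like this does not survive
# re-encoding or screenshots the way a production forensic watermark must.
from PIL import Image

def embed(frame_path: str, out_path: str, payload: int, bits: int = 32) -> None:
    """Hide a `bits`-wide integer ID (e.g. a session ID) in the blue-channel LSBs."""
    img = Image.open(frame_path).convert("RGB")
    px = img.load()
    for i in range(bits):
        x, y = i % img.width, i // img.width
        r, g, b = px[x, y]
        px[x, y] = (r, g, (b & ~1) | ((payload >> i) & 1))  # overwrite lowest blue bit
    img.save(out_path, "PNG")  # lossless format so the hidden bits survive

def extract(frame_path: str, bits: int = 32) -> int:
    """Blind extraction: recover the ID from a single frame, no original needed."""
    img = Image.open(frame_path).convert("RGB")
    px = img.load()
    payload = 0
    for i in range(bits):
        x, y = i % img.width, i // img.width
        payload |= (px[x, y][2] & 1) << i
    return payload

# Hypothetical usage with a made-up session ID:
# embed("master_frame.png", "delivered_frame.png", payload=0x1234ABCD)
# leaked_id = extract("leaked_frame.png")
[/code]
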
https://www.techspot.com/news/99856-...ct-piracy.html





Netflix Added 2.6M U.S. Subscribers In July, Continuing Advertising Momentum Amid Password-Sharing Crackdown, Study Finds
Dade Hayes

Netflix continued to add subscribers in the U.S. at a high rate in July after initiating a crackdown on password sharing in May, according to new data from research firm Antenna.

The streaming giant had 2.6 million gross subscriber additions in July, the latest figures show. The company also saw the highest percentage of new sign-ups going to its advertising tier since the $7-a-month offering hit the market last November. About 23% of new subscribers opted for the ad tier, a gain of four percentage points over June levels.

The overall July gains represented a 26% downturn from June’s record-breaking numbers, but they still show momentum stretching back to the May 23 introduction of paid password sharing in the U.S. From May 24 to 27, Netflix had its four biggest single days of sign-ups in the four-and-a-half years since Antenna began tracking its subscribers, outpacing even the 2020 Covid boom.

The new password scheme followed last fall’s debut of the cheaper, ad-supported subscription tier, with the combination of the two providing a potent boost. In the second quarter ending June 30, the company reported that it doubled projections by adding 5.9 million subscribers, reaching 238.3 million worldwide.

Antenna derives its subscriber numbers from online purchase receipts, credit, debit and banking data, and bill-scrape data and then factors in various demographic and behavioral elements. Customers taking advantage of free trials are not counted.

The new Netflix password-sharing plan imposes an $8 fee for anyone sharing their login credentials, an effort to recapture billions in revenue the company has forfeited in the name of subscriber growth for most of its 15-year run in streaming. Netflix’s financial performance and stock price have improved significantly since the password and advertising initiatives kicked in as Wall Street has reassessed the company after its rocky period in late-2021 and the first half of 2022.

While Netflix has not given a full breakdown of how many of its subscribers have switched to the cheaper ad tier, execs have touted the progress on passwords. Speaking on the company’s second-quarter earnings call in July, Co-CEO Greg Peters said execs “know that it’s working.”

The caliber of the customers who switch from using someone else’s subscription to paying for their own has been high, Peters added. “They are choosing plans and engaging at rates, have retention characteristics that generally look like higher-tenure members,” he said. “That’s good.”

Netflix recently discontinued its ad-free Basic subscription tier, Antenna noted in a blog post. “We’ll be interested to see how this impacts the composition of Netflix subscriptions and sign-ups overall in the coming months,” the company wrote.

Below is a chart from Antenna showing U.S. signup activity in July:
https://deadline.com/2023/08/netflix...ta-1235526018/





FCC Says “Too Bad” to ISPs Complaining that Listing Every Fee is Too Hard

Comcast and other ISPs asked FCC to ditch listing-every-fee rule. FCC says "no."
Jon Brodkin

The Federal Communications Commission yesterday rejected requests to eliminate an upcoming requirement that Internet service providers list all of their monthly fees.

Five major trade groups representing US broadband providers petitioned the FCC in January to scrap the requirement before it takes effect. In June, Comcast told the FCC that the listing-every-fee rule "impose[s] significant administrative burdens and unnecessary complexity in complying with the broadband label requirements."

The five trade groups kept up the pressure earlier this month in a meeting with FCC officials and in a filing that complained that listing every fee is too hard. The FCC refused to bend, announcing yesterday that the rules will take effect without major changes.

"Every consumer needs transparent information when making decisions about what Internet service offering makes the most sense for their family or household. No one wants to be hit with charges they didn't ask for or they did not expect," FCC Chairwoman Jessica Rosenworcel said.

Yesterday's order "largely affirms the rules... while making some revisions and clarifications such as modifying provider record-keeping requirements when directing consumers to a label on an alternative sales channel and confirming that providers may state 'taxes included' when their price already incorporates taxes," the FCC said.

ISPs don’t want to list all fees

Comcast and other ISPs objected to a requirement that ISPs "list all recurring monthly fees" including "all charges that providers impose at their discretion, i.e., charges not mandated by a government." They complained that the rule will force them "to display the pass-through of fees imposed by federal, state, or local government agencies on the consumer broadband label."

As we've previously written, ISPs could simplify billing and comply with the new broadband-labeling rules by including all costs in their advertised rates. That would give potential customers a clearer idea of how much they have to pay each month and save ISPs the trouble of listing every charge that they currently choose to break out separately.
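
To make the two approaches concrete, here is a small illustrative sketch; the fee names and dollar amounts are invented, not any ISP's real pricing. Either the label itemizes every discretionary pass-through fee on top of the base price, or the provider rolls those fees into a single all-in advertised price; the customer's monthly total is the same either way.

[code]
# Hypothetical numbers illustrating the two pricing approaches: itemizing
# discretionary pass-through fees on the label vs. rolling them into the
# advertised price. Fee names and amounts are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class BroadbandLabel:
    base_monthly_price: float
    pass_through_fees: dict = field(default_factory=dict)  # discretionary fees, itemized on the label

    def total_monthly_cost(self) -> float:
        return self.base_monthly_price + sum(self.pass_through_fees.values())

    def rolled_in(self) -> "BroadbandLabel":
        """The simpler option the FCC points to: one all-in price, nothing left to itemize."""
        return BroadbandLabel(base_monthly_price=self.total_monthly_cost())

itemized = BroadbandLabel(
    base_monthly_price=50.00,
    pass_through_fees={"Regulatory recovery fee": 3.00, "Universal service pass-through": 2.50},
)
print(itemized.total_monthly_cost())              # 55.5
print(itemized.rolled_in().base_monthly_price)    # 55.5 -- same total, no itemized fees
[/code]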

Rejecting the broadband industry's request, the FCC order yesterday said:

[W]e affirm our requirement that providers display all monthly fees with respect to broadband service on the label to provide consumers with clear and accurate information about the cost of their broadband service. We thus decline providers' request that they not disclose those fees or that they instead display an "up to" price for certain fees they choose to pass through to consumers.

Specifically, "providers must itemize the fees they add to base monthly prices, including fees related to government programs they choose to 'pass through' to consumers, such as fees related to universal service or regulatory fees," the FCC said.

The FCC was ordered by Congress to implement broadband-label rules. The FCC is requiring ISPs to display the labels to consumers at the point of sale and include information such as the monthly price, additional fees, introductory rates, data caps, charges for data overages, and performance metrics. The FCC rules aren't in force yet because they are subject to a federal Office of Management and Budget (OMB) review under the US Paperwork Reduction Act.

FCC pointedly says ISPs can simplify pricing

In its dismissal of the broadband industry's claims that itemizing fees would be too confusing for customers and too burdensome for providers, the FCC pointedly noted that ISPs are allowed to use a simpler pricing model:

We also disagree that clear disclosure of these fees "has the potential to cause significant confusion for consumers and add unnecessary complexity for providers" due to the "huge variety and quantity of fees on broadband providers." Providers must itemize the fees on consumer bills, and we see no reason why consumers cannot assess the fees at the point-of-sale any less than they can when they receive a bill. Providers are free, of course, to not pass these fees through to consumers to differentiate their pricing and simplify their Label display if they believe it will make their service more attractive to consumers and ensure that consumers are not surprised by unexpected charges.

Further, we are not persuaded that it will be burdensome for ISPs to itemize on the label those fees they opt to pass along to consumers above the monthly price, particularly since providers acknowledge being able to describe such fees to a consumer over the phone and on a consumer's bill once the consumer subscribes to service. We also find that any such burdens are far outweighed by the benefits to consumers when they are shopping for service... ISPs could alternatively roll such discretionary fees into the base monthly price, thereby eliminating the need to itemize them on the label.

Separately, the order said the FCC rejected a wireless-industry "request to include potentially complex and lengthy details about data allowances on the label, and instead affirm that providers can make those details available to consumers on a linked website." To maintain simplicity, the labels must "identify the amount of data included with the monthly price," and "disclose any charges or reductions in service for any data used in excess of the amount included in the plan," the FCC said.

The FCC granted a wireless-industry "request to clarify that wireless providers have the flexibility to state 'taxes included' or add similar language to the label template when the provider has chosen to include taxes as part of its base price."

FCC grants request to simplify record-keeping

The FCC also bent to ISPs on their objection to a record-keeping requirement for labels provided through "alternate sales channels" such as retail stores or customer service phone calls. ISPs can meet the label requirement in alternate sales channels either by providing a hard copy of the label or by "directing the consumer to the specific web page on which the label appears."

Under the FCC's original decision, ISPs that don't provide hard copies of the label to prospective customers in alternate sales channels would have had to document each instance in which they direct a consumer to a label. That's what broadband industry groups objected to, saying that documenting "every customer interaction would be highly disruptive to consumers seeking information through alternative sales channels and would impose significant burdens on providers of all sizes."

ISPs also objected on privacy grounds because the rule would require them to collect identifying information from customers or potential customers who were directed to the label.

Granting the ISPs' request, the FCC said it is "persuaded by petitioners that providers deal with millions of customers and prospective customers by phone, in retail locations, and at 'pop-up' sales outlets such as fairs or exhibitions, and that it may be challenging for providers to capture and retain such documentation when consumers are provided with access to the labels at each and every point of sale."

The FCC thus amended its rules to "clarify that the requirement to document interactions with consumers at alternate sales channels will be deemed satisfied if, instead, the provider: 1) establishes the business practices and processes it will follow in distributing the label through alternative sales channels; 2) retains training materials and related business practice documentation for two years; and 3) provides such information to the Commission upon request, within 30 days."
https://arstechnica.com/tech-policy/...e-is-too-hard/


Until next week,

- js.


Current Week In Review





Recent WiRs -

August 26th, August 19th, August 12th, August 5th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing