Peer-To-Peer News - The Week In Review - September 4th, ’21
September 4th, 2021
Movie Companies Demand that VPNs Log User Data and Disconnect Pirates
• After the ISP suing spree that went on over the years, the piracy lawsuits have expanded, with VPN providers as the main targets.
• One of the main accusations is allowing VPN subscribers to bypass the geographical restrictions of streaming services such as Netflix.
• Filmmakers also argue that some VPNs even partner up with notorious movie piracy websites, in order to promote their services.
• Besides the money, the movie companies also seek the immediate shutdown of websites such as RARBG, or the infamous Pirate Bay.
You should probably know, if you follow this kind of thing, that a group of movie companies is continuing its legal efforts to hold VPN services liable for pirating subscribers.
A new pending lawsuit lists service providers like Surfshark, VPN Unlimited, Zenmate, and ExpressVPN as defendants.
Besides damages, the filmmakers want the VPNs to block pirate sites and start logging user data. The accused companies have yet to respond in court.
As a direct result of the growing threats against online privacy and security, VPN services have become increasingly popular in recent years.
Filmmakers ask for RARBG and The Pirate Bay to be shut down
It’s already a well-known fact that millions of people use VPNs to stay secure and prevent outsiders from tracking their online activities.
However, as with regular Internet providers, a subset of these subscribers may be engaged in piracy.
Throughout the years, we’ve seen copyright holders take several ISPs to court, under charges of failing to disconnect repeat copyright infringers.
Now, these lawsuits have expanded, with VPN providers as the main targets.
Since the start of the COVID-19 pandemic, with people spending more time at home downloading and pirating movies of all sorts, the numbers have grown to staggering heights.
These lawsuits were filed by a group of independent movie companies that also took it upon themselves to go after piracy sites and apps.
Among them are the creators of blockbusters and award-winning movies such as The Hitman’s Bodyguard, Dallas Buyers Club, and London Has Fallen.
One of the main accusations is allowing VPN subscribers to bypass the geographical restrictions of streaming services such as Netflix.
The filmmakers bring to attention various examples of promotional pages where the VPN providers claim that their services can bypass blocking efforts and other restrictive measures.
In some cases, these VPN providers don’t even go to the trouble of hiding such actions, as the following announcement from VPN Unlimited shows.
Besides bypassing geographical restrictions, the movie companies also list many examples of VPN subscribers who are directly involved in sharing pirated movies via BitTorrent.
And while BitTorrent can also be used legally, the VPN companies allegedly promote their service as a tool to download copyright-infringing material anonymously.
And that’s not all! Filmmakers also argue that some VPNs even partner up with notorious movie piracy websites, in order to promote their services.
As an example, the website YTS.movie encourages the use of ExpressVPN. It’s not immediately clear whether ExpressVPN is aware of that, however.
Hold on to your chair, because the list of grave accusations continues in an unexpected direction.
The movie companies also allege that VPN customers are engaged in other types of inadmissible conduct under this privacy shield, including racist comments, child pornography, and even committing murder.
Based on these claims, as well as others, the filmmakers argue that the VPN services are liable for direct, contributory, and vicarious copyright infringement.
Besides monetary damages, the movie companies also request that the VPN services start blocking known pirate sites such as The Pirate Bay and RARBG.
‘Wonder Woman 1984’ Director Blames HBO Max Release for Paltry Box Office Returns
Patty Jenkins, director of last year’s “Wonder Woman 1984,” believes the movie’s release on HBO Max was the cause of disappointing returns at the box office for the highly anticipated flick.
Following the success of 2017’s “Wonder Woman,” which made $821 million worldwide in theaters, the 2020 sequel was poised for similar results before the COVID-19 pandemic. The movie, originally planned for an October debut, was pushed to Christmas.
But weeks before its release, Warner Bros. announced “Wonder Woman 1984” would be the first of its movies to debut on HBO Max to coincide with the theater debut, McClatchy News reported.
This, Jenkins said Thursday during a panel at CinemaCon, “was detrimental to the movie,” Deadline reported.
“I don’t think it plays the same on streaming, ever,” she said, according to Deadline. She later added, “I make movies for the big-screen experience.”
The sequel, panned by many reviewers with a 59% rating through Rotten Tomatoes’ aggregate of movie critics, made $166 million worldwide. Many movie theaters remained closed during Christmas last year as COVID-19 cases again surged across the United States.
About 17.2 million people activated their HBO Max accounts in the last quarter of 2020, coinciding with the “Wonder Woman 1984” release, Bloomberg reported. This was double the previous quarter.
Nearly half of HBO Max subscribers watched the movie on its premiere date, and it broke records for the streaming service, according to The Hollywood Reporter.
But the streaming release also led to a horde of people pirating it, allowing others to watch it for free online. TorrentFreak reported “Wonder Woman 1984” was the most torrented movie for three straight weeks and was among the top 10 most pirated through April.
It faced a similar fate as Marvel’s “Black Widow,” which premiered in theaters and Disney+ this summer. The star of that movie, Scarlett Johansson, later filed a lawsuit against Disney alleging the Disney+ release breached her contract, CNN reported.
Jenkins, who will also direct the third “Wonder Woman” movie as well as “Star Wars: Rogue Squadron,” called her movie’s release “the best choice in a bunch of bad choices” but ultimately one that was a “heartbreaking experience,” according to IGN.
Google Appeals $591M French Fine in Copyright Payment Spat
Google is appealing a 500 million euro ($591 million) fine issued by French regulators over its handling of negotiations with publishers in a dispute over copyright.
The dispute is part of a larger battle by authorities in Europe and elsewhere to force Google and other tech companies to compensate publishers for content.
“We disagree with a number of legal elements, and believe that the fine is disproportionate to our efforts to reach an agreement and comply with the new law,” Google France Vice President Sebastien Missoffe said in a press statement.
France’s antitrust watchdog levied the fine in mid-July after it found Google hadn’t negotiated in good faith with publishers over payments for their news stories. The watchdog had issued temporary orders to Google in April 2020 to hold talks within three months with news publishers, and had fined the company for breaching those orders.
“We continue to work hard to resolve this case and put deals in place. This includes expanding offers to 1200 publishers, clarifying aspects of our contracts, and we are sharing more data as requested by the French Competition Authority in their July Decision,” Missoffe said.
The antitrust watchdog also threatened fines of another 900,000 euros (around $1 million) per day if Google didn’t come up with proposals within two months on how it will pay publishers and news agencies for their content.
France was the first of the European Union’s 27 nations to adopt the bloc’s 2019 copyright directive, which lays out a way for publishers and news companies to strike licensing deals with online platforms.
Locast Shuts Down, but the Fight Over Local Channel Streaming Continues
Watching local channels just got a lot more expensive unless you can use an antenna.
Locast, a nonprofit service that used a loophole in copyright law to stream local TV channels on the cheap, delivered a blow to cord cutters this week by abruptly suspending its operations.
Locast had argued that it was allowed to stream local channels to viewers via its own antennas, using a provision of copyright law that lets nonprofit groups retransmit broadcast signals. The major TV networks—ABC, NBC, CBS, and Fox—disagreed, and sued the group nearly two years ago.
On Wednesday, a federal judge in New York sided with the networks, tossing Locast’s request for summary judgment.
While the service was ostensibly free, Locast would interrupt users’ video streams every 15 minutes unless they paid a $5 per month donation. District Judge Louis Stanton said that with this approach, Locast was soliciting payments for a service, not charitable contributions. He also took issue with the way Locast used donations in existing markets to fund expansion into others, saying the nonprofit exemption in copyright law doesn’t allow for this.
Locast initially responded on Wednesday by dropping all donation requests from its video feeds. But on Thursday, the group shut down its service outright.
“As a non-profit, Locast was designed from the very beginning to operate in accordance with the strict letter of the law, but in response to the court’s recent rulings, with which we respectfully disagree, we are hereby suspending operations, effective immediately,” the group said in an email to its donors.
The fight goes on
Mitch Stoltz, a senior staff attorney for the Electronic Frontier Foundation, which is assisting with Locast’s defense, said via email that the case will continue, “likely including an appeal,” to resolve its remaining issues.
“The problem remains: broadcasters keep using copyright law to control where and how people can access the local TV that they’re supposed to be getting for free,” Stoltz said.
Still, the group seems to be adopting a less aggressive posture than it did a couple of years ago, when Locast founder David Goodfriend told the New York Times that he would welcome a legal challenge. Goodfriend had said his idea for Locast was inspired by Aereo, an earlier attempt to retransmit broadcast channels that didn’t try to achieve nonprofit status. The Supreme Court ruled in 2014 that Aereo violated copyright law, and the company imploded shortly thereafter.
Jessica Litman, a law professor at the University of Michigan who specializes in copyright law, said via email that Locast’s decision to immediately shut down likely came down to costs. She speculates that if Locast continued to run the service while pursuing further litigation, it might have faced a preliminary injunction from the networks.
“The problem is that litigation is really, really expensive,” Litman said. “Suspending operations will probably allow them not to have to litigate a preliminary injunction motion.”
Cord cutters left stranded
Locast’s demise is a gut punch to cord cutters who want to access local channels but can’t get them with an antenna due to range or reception issues. For the most part, live local channels are still tied to big TV bundles, consisting of many other channels owned by the major broadcast networks. The cheapest streaming bundles that include all four networks are YouTube TV, Hulu + Live TV, and Fubo TV, and they all cost $65 per month.
In lieu of a big bundle or an antenna, cord cutters can subscribe to Paramount+, which includes a local feed in its $10 per month tier (but is often available for free). Primetime network shows are also available on Hulu, on NBC’s Peacock, and on free network TV apps (usually with an eight-day delay), but those channels don’t offer live feeds without a pay TV subscription. NewsOn, Vuit, and Stirr offer local news in certain markets, but no major network programming. LocalBTV streams free digital subnetworks in a small number of markets, but it also lacks major network channels.
The lack of an inexpensive streaming option for local channels could also send some folks back to cable, though it could just as likely send them in the opposite direction, away from major TV networks and channel bundles entirely.
Australian Powers to Spy on Cybercrime Suspects Given Green Light
Coalition bill to create powerful new warrants, allowing authorities to modify and delete data and even take over accounts, passes Senate
A government bill to create new police powers to spy on criminal suspects online, disrupt their data and take over their accounts has been passed with the support of Labor.
The identify and disrupt bill passed the Senate on Wednesday, despite concerns about the low bar of who can authorise a warrant, and that the government failed to implement all the safeguards recommended by the bipartisan joint committee on intelligence and security.
The bill creates three new types of warrants to enable the AFP and Australian Criminal Intelligence Commission to modify and delete data, take over accounts and spy on Australians in networks suspected of committing crimes.
Earlier in August, the parliamentary joint committee on intelligence and security (PJCIS), chaired by the Liberal senator James Paterson, made a series of recommendations to improve oversight and safeguards.
On Tuesday, the home affairs minister, Karen Andrews, introduced amendments to implement some of the proposed safeguards, including a sunset clause so the new powers would expire after five years and stronger criteria to issue warrants.
Andrews said the amendments would mean data disruption warrants would need to be “reasonably necessary and proportionate” and data disruption and account takeover warrants would need to specify the types of activities proposed to be carried out.
The media would also gain some extra protection, with the addition of a “public interest test for data disruption warrants, network activity warrants and account takeover warrants where an investigation of an unauthorised disclosure offence is in relation to a person working in a professional capacity as a journalist”, she said.
The Independent National Security Legislation Monitor will review the bill after three years and the PJCIS can review the bill after four.
The Labor MP Andrew Giles told the lower house on Tuesday the opposition supported the bill because “the cyber-capabilities of criminal networks have expanded, and we know that they are using the dark web and anonymising technology to facilitate serious crime, which is creating significant challenges for law enforcement”.
Giles noted that Labor had called for raising the bar for the types of crimes that trigger the new powers, which currently include all commonwealth offences punishable by a maximum term of three years or more.
Giles warned this meant “tax offences, trademark infringements and a range of other offences” would enliven the powers, not just the offences of “child abuse and exploitation, and terrorism” the Coalition used to justify the bill.
In August 2020 the then home affairs minister Peter Dutton claimed the new powers would target terrorists, paedophiles and drug traffickers operating online, such as on the dark web, and would apply “to those people and those people only”.
In the Senate the Greens and Rex Patrick resisted the bill, moving amendments to implement the other PJCIS recommendations, including to require that magistrates or judges would have to sign off on warrants, not just members of the administrative appeals tribunal.
The attorney general, Michaelia Cash, rejected this proposal, arguing it would be a “departure from longstanding government policy”, “likely result in operational delays” and was inconsistent with other warrant powers.
In a second reading amendment, the Greens noted the bill “rejects a core recommendation of the Richardson review” of the legal framework for the intelligence community, which had found “law enforcement agencies should not be given specific cyber-disruption powers”.
The amendments were defeated, and the bill passed easily due to Labor’s support.
The Law Council president, Jacoba Brasch, said the “[failure] to implement the committee’s recommendation that there be judicial issuing of the new, extraordinary warrants is particularly disappointing”.
“The Law Council believes the significant breadth and intrusive scope of these warrants demands consideration by judicial officers, as the PJCIS recommended.”
Kieran Pender, the senior lawyer at the Human Rights Law Centre, told Guardian Australia given the bill’s powers “are unprecedented and extraordinarily intrusive, they should have been narrowed to what is strictly necessary and subject to robust safeguards”.
Of the “significant changes” recommended by the committee, the HRLC believes about half were either rejected or only partially adopted.
“It is alarming that, instead of accepting the committee’s recommendations and allowing time for scrutiny of subsequent amendments, the Morrison government rushed these laws through parliament in less than 24 hours,” Pender said.
“While the safeguards for journalists and whistleblowers are welcome, they highlight the lack of wider entrenched safeguards for press freedom and free speech in Australia.”
Andrews said the arrest of more than 290 people in Operation Ironside “confirmed the persistent and ever evolving threat of transnational, serious and organised crime – and the reliance of these networks on the dark web and anonymising technology to conceal their offending”.
“In Operation Ironside, ingenuity and world-class capability gave our law enforcement an edge,” she said.
“This bill is just one more step the government is taking to ensure our agencies maintain that edge.”
Ads, Privacy and Confusion
Privacy is coming to the internet and cookies are going away. This is long overdue - but we don’t know what happens next, we don’t have much consensus on what online privacy actually means, and most of what’s on the table conflicts fundamentally with competition.
The consumer internet industry spent two decades building a huge, complex, chaotic pile of tools and systems to track and analyse what people do on the internet, and we’ve spent the last half-decade arguing about that, sometimes for very good reasons, and sometimes with strong doses of panic and opportunism. Now that’s mostly going to change, between unilateral decisions by some big tech platforms and waves of regulation from all around the world. But we don’t have any clarity on what that would mean, or even quite what we’re trying to achieve, and there are lots of unresolved questions. We are confused.
First, can we achieve the underlying economic aims of online advertising in a private way? Advertisers don’t necessarily want (or at least need) to know who you are as an individual. As Tim O’Reilly put it, data is sand, not oil - all this personal data actually only has value in the aggregate of millions. Advertisers don’t really want to know who you are - they want to show diaper ads to people who have babies, not to show them to people who don’t, and to have some sense of which ads drove half a million sales and which ads drove a million sales. Targeting ads per se doesn’t seem fundamentally evil, unless you think putting car ads in car magazines is also evil. But the internet became able to show car ads to people who read about cars yesterday, somewhere else - to target based on the user rather than the context. This is both exactly the same and completely different.
In practice, ‘showing car ads to people who read about cars’ led the adtech industry to build vast piles of semi-random personal data, aggregated, disaggregated, traded, passed around and sometimes just lost, partly because it could and partly because that appeared to be the only way to do it. After half a decade of backlash, there are now a bunch of projects trying to get to the same underlying advertiser aims - to show ads that are relevant, and get some measure of ad effectiveness - while keeping the private data private. This is the theory behind Google’s FLoC and Apple’s rather similar tracking and ad-targeting system - do the analysis and tracking on the device, show relevant ads but don’t give advertisers or publishers the underlying personal data. However, even if the tech works and the industry can get to some kind of consensus behind any such project (both very big questions), would this really be private? And what does it do to competition?
This takes me to a second question - what counts as ‘private’, and how can you build ‘private’ systems if we don’t know?
Apple has pursued a very clear theory that analysis and tracking is private if it happens on your device and is not private if it leaves your device or happens in the cloud. Hence, it’s built a complex system of tracking and analysis on your iPhone, but is adamant that this is private because the data stays on the device. People have seemed to accept this (so far), but acting on the same theory Apple also created a CSAM scanning system that it thought was entirely private - ‘it only happens on your device!’ - that created a huge privacy backlash, because a bunch of other people think that if your phone is scanning your photos, that isn’t ‘private’ at all. So is ‘on device’ private or not? What’s the rule? What if Apple tried the same model for ‘private’ ads in Safari? How will the public take FLoC? I don’t think we know.
On / off device is one test, but another and much broader one is the first party / third party test: that it’s OK for a website to track what you do on that website but not OK for adtech companies to track you across many different websites. This is the core of the cookie question, and sounds sensible, and indeed one might think that we do have a pretty good consensus on ‘third party cookies’ - after all, Google and Apple are getting rid of them. However, I’m puzzled by some of the implications. “1p good / 3p bad” means that it’s OK for the New York Times to know that you read ten New York Times travel pieces and show you a travel ad, but not OK for the New Yorker to know that and show you the same ad. Why, exactly, is that a policy objective? Indeed, is it ‘private’ for the New York Times to record and analyse everything a logged-in user read on that site for the last decade? What would happen to its ad revenue if it dumped your history after 24 hours? (Cynically, the answer might be ‘not much’.) Is that different to Facebook recording and analysing everything you read on Facebook?
At this point one answer is to cut across all these questions and say that what really matters is whether you disclose whatever you’re doing and get consent. Steve Jobs liked this argument. But in practice, as we've discovered, ‘get consent’ means endless cookie pop-ups full of endless incomprehensible questions that no normal consumer should be expected to understand, and that just train people to click ‘stop bothering me’. Meanwhile, Apple’s on-device tracking doesn't ask for permission, and opts you in by default, because, of course, Apple thinks that if it's on the device it's private. Perhaps ‘consent’ is not a complete solution after all.
But the bigger issue with consent is that it’s a walled garden, which takes me to a third question - competition. Most of the privacy proposals on the table are in absolute, direct conflict with most of the competition proposals on the table. If you can only analyse behaviour within one site but not across many sites, or make it much harder to do that, companies that have a big site where people spend lots of time have better targeting information and make more money from advertising. If you can only track behaviour across lots of different sites if you do it ‘privately’ on the device or in the browser, then the companies that control the device or the browser have much more control over that advertising (which is why the UK CMA is investigating FLoC). And, as an aside, if you can only target on context, not the user, then Hodinkee is fine but the Guardian’s next landmark piece on Kabul has no ad revenue. Is that what we want? What else might happen?
These are all unresolved questions, and the more questions you ask the less clear things can become. I’ve barely touched on a whole other line of enquiry - of where all the world’s $600bn of annual ad spending would be reallocated when all of this has happened (no, not to newspapers, sadly). Apple clearly thinks that scanning for CSAM on the device is more private than the cloud, but a lot of other people think the opposite. You can see the same confusion in terms like 'Facebook sells your data' or 'surveillance capitalism' - these are really just attempts to avoid the discussion by reframing it, and moving it to a place where we do know what we think, rather than engaging with the challenge and trying to work out an answer. I don’t have an answer either, of course, but that’s rather my point - I don’t think we even agree on the questions.
Amid Backlash, Apple will Change Photo-Scanning Plan but won’t Drop it Completely
Apple issues vague statement promising "improvements" but still plans to scan photos.
Apple said Friday that it will make some changes to its plan to have iPhones and other devices scan user photos for child sexual-abuse images. But Apple said it still intends to implement the system after making "improvements" to address criticisms.
Apple provided this statement to Ars and other news organizations today:
“Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material [CSAM]. Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
The statement is vague and doesn't say what kinds of changes Apple will make or even what kinds of advocacy groups and researchers it will collect input from. But given the backlash Apple has received from security researchers, privacy advocates, and customers concerned about privacy, it seems likely that Apple will try to address concerns about user privacy and the possibility that Apple could give governments broader access to customers' photos.
Privacy groups warned of government access
It isn't clear how Apple could implement the system in a way that eliminates its critics' biggest privacy concerns. Apple has claimed it would refuse government demands to expand photo-scanning beyond CSAM. But privacy and security advocates argue that once the system is deployed, Apple likely won't be able to avoid giving governments more user content.
"Once this capability is built into Apple products, the company and its competitors will face enormous pressure—and potentially legal requirements—from governments around the world to scan photos not just for CSAM, but also for other images a government finds objectionable," 90 policy groups from the US and around the world said in an open letter to Apple last month. "Those images may be of human rights abuses, political protests, images companies have tagged as 'terrorist' or violent extremist content, or even unflattering images of the very politicians who will pressure the company to scan for them. And that pressure could extend to all images stored on the device, not just those uploaded to iCloud. Thus, Apple will have laid the foundation for censorship, surveillance and persecution on a global basis."
Apple previously announced that devices with iCloud Photos enabled will scan images before they are uploaded to iCloud. Given that an iPhone uploads every photo to iCloud right after it is taken, the scanning of new photos would happen almost immediately if a user has previously turned iCloud Photos on.
Apple has said it will also add a tool to the Messages application that will "analyze image attachments and determine if a photo is sexually explicit." The system will be optional for parents, who can enable it in order to have Apple devices "warn children and their parents when receiving or sending sexually explicit photos."
Apple initially said it would roll the changes out later this year, in the US only at first, as part of updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. Apple's promise to "take additional time over the coming months to collect input and make improvements" suggests the scanning system could be implemented later than Apple intended, but the company never provided a firm release date to begin with.
Apple called system an advancement in privacy
As we've previously written, Apple says its CSAM-scanning technology "analyzes an image and converts it to a unique number specific to that image" and flags a photo when its hash is identical or nearly identical to the hash of any that appear in a database of known CSAM. An account can be reported to the National Center for Missing and Exploited Children (NCMEC) when about 30 CSAM photos are detected, a threshold Apple set to ensure that there is "less than a one in one trillion chance per year of incorrectly flagging a given account." That threshold could be changed in the future to maintain the one-in-one-trillion false-positive rate.
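The mechanics described above - hash each photo, treat a hash that is identical or nearly identical to a database entry as a match, and only flag the account once roughly 30 matches accumulate - can be sketched in a few lines. This is purely an illustration of threshold-based near-duplicate hash matching, not Apple's actual implementation (which uses NeuralHash and cryptographic threshold secret sharing); the distance cutoff and hash values here are hypothetical.

```python
# Illustrative sketch only, NOT Apple's NeuralHash/PSI system.
# MAX_DISTANCE and the example hashes are made-up parameters.

REPORT_THRESHOLD = 30   # matches needed before an account is flagged
MAX_DISTANCE = 2        # "nearly identical": allow a few differing bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions where two hashes differ."""
    return bin(a ^ b).count("1")

def is_match(photo_hash: int, known_hashes: set[int]) -> bool:
    """A photo matches if its hash is within MAX_DISTANCE of any known hash."""
    return any(hamming_distance(photo_hash, h) <= MAX_DISTANCE
               for h in known_hashes)

def should_report(photo_hashes: list[int], known_hashes: set[int]) -> bool:
    """Flag the account only once the match count crosses the threshold."""
    matches = sum(is_match(p, known_hashes) for p in photo_hashes)
    return matches >= REPORT_THRESHOLD
```

The threshold is what drives the "one in one trillion" claim: even if any single hash comparison occasionally produces a false positive, requiring ~30 independent matches before reporting makes an incorrectly flagged account astronomically unlikely.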
Apple has argued that its system is actually an advancement in privacy because it will scan photos "in the most privacy-protecting way we can imagine and in the most auditable and verifiable way possible."
"If you look at any other cloud service, they currently are scanning photos by looking at every single photo in the cloud and analyzing it. We wanted to be able to spot such photos in the cloud without looking at people's photos and came up with an architecture to do this," Craig Federighi, Apple's senior VP of software engineering, said last month. The Apple system is "much more private than anything that's been done in this area before," he said.
Changes to the system could be fought by advocacy groups that have urged Apple to scan user photos for CSAM. Apple partnered on the project with NCMEC, which dismissed privacy criticisms as coming from "the screeching voices of the minority." Apple seemingly approved of that statement, as it distributed it to employees in an internal memo that defended the photo-scanning plan the day it was announced.
Until next week,
Current Week In Review
Recent WiRs -
August 28th, August 21st, August 14th, August 7th
Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.
"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public." - Hugo Black
Thanks For Sharing