P2P-Zone  


Peer to Peer The 3rd millennium technology!

Old 17-06-15, 07:58 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Default Peer-To-Peer News - The Week In Review - June 20th, '15

Since 2002


"At a base minimum, people should be able to walk down a public street without fear that companies they’ve never heard of are tracking their every movement — and identifying them by name — using facial recognition technology. Unfortunately, we have been unable to obtain agreement even with that basic, specific premise." – ACLU, EFF


"We completely cracked the keychain service - used to store passwords and other credentials for different Apple apps." – Luyi Xing


June 20th, 2015




BitTorrent Launches Shoot, A Mobile App To Share Large Files Without The Cloud

The peer-to-peer tech company is expanding its offerings to compete with Dropbox and Google Drive.
Evie Nagy

A month after launching super-private messaging app Bleep, peer-to-peer tech company BitTorrent is further expanding its suite of mobile apps built on its cloud-free file-sharing protocol.

Shoot is the company's new app designed to share large files or collections of files—such as videos or photo batches—quickly between mobile devices, including those with different operating systems. Shoot is based on BitTorrent's new Sync technology, which the company is positioning to compete directly with services like Dropbox and Google Drive.

In its announcement, BitTorrent touts the convenience and speed of the app for spontaneous sharing of large files, like concert videos or vacation photos, directly between two mobile devices without going through the cloud. BitTorrent's peer-to-peer technology, including Sync, works by using the Internet or a local network to identify other devices with permission to share files—this means that data is shared directly instead of requiring files to be uploaded to a central server and then downloaded by the recipient. It also means that without server considerations, Shoot doesn't impose limits on file size (file sizes are only limited by users' storage).
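The direct-transfer idea described above — the sender streams bytes straight to the recipient's device instead of routing through a central server — can be sketched in a few lines of Python. This is only an illustration of the general pattern over plain sockets, not BitTorrent's actual Sync protocol; the port choice and payload are made up for the demo.

```python
# Minimal sketch of direct peer-to-peer file transfer: one peer streams
# bytes straight to the other over a socket, with no cloud middleman.
# (Illustrative only; BitTorrent Sync's real protocol is far richer.)
import socket
import threading

def send_on(srv: socket.socket, data: bytes) -> None:
    """Sending peer: accept one connection and stream the whole file."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(data)

# The sending peer listens on an ephemeral port (0 = pick a free one).
srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]

payload = b"x" * 1_000_000  # stands in for a large photo or video file
sender = threading.Thread(target=send_on, args=(srv, payload))
sender.start()

# The receiving peer connects directly and reads until the sender closes.
chunks = []
with socket.create_connection(("127.0.0.1", port)) as sock:
    while chunk := sock.recv(65536):
        chunks.append(chunk)
sender.join()
srv.close()
received = b"".join(chunks)
```

Because no server sits in the middle, the only size limit is the devices' own storage — which is exactly the property the announcement highlights.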

The tech also assigns each user a private identity that doesn't require an email and password combination or any personal information to be stored on a central server. As with Bleep, BitTorrent is promoting the extreme privacy of the app based on the fact that no information is sent to a hackable cloud.
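A serverless identity of the kind described above typically works by keeping a secret on the device and exposing only a derived fingerprint to peers. The sketch below shows the general shape of that pattern; it is an assumption-laden illustration, not BitTorrent's actual identity scheme (a real design would derive the fingerprint from a public key, not by hashing the secret directly as done here for brevity).

```python
# Hedged sketch of a serverless identity: no email/password on any server;
# the device keeps a random secret and peers see only a short fingerprint.
# For brevity the fingerprint is a hash of the secret itself; a real system
# would hash the public half of an asymmetric keypair instead.
import hashlib
import secrets

def new_identity() -> tuple[bytes, str]:
    """Generate a device-local secret and the public fingerprint peers see."""
    secret = secrets.token_bytes(32)                      # never leaves the device
    fingerprint = hashlib.sha256(secret).hexdigest()[:16]  # shared with peers
    return secret, fingerprint
```

The key property is that nothing sent to other peers is sufficient to impersonate the user, and no central database of credentials exists to be breached.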

After downloading Shoot, users will get three free shares, after which they'll be prompted to purchase the app for a one-time fee of $1.99.

While the name BitTorrent is associated with illegal content sharing because its open source technology is a favorite of pirates, the focus of the company itself is on legitimate B-to-B and consumer-facing products like Bundle for multimedia distribution and the new family of Sync services and apps. According to an announcement last fall, in addition to mobile large file sharing, Sync is set to include a powerful file replication product.
https://www.fastcompany.com/3047481/...hout-the-cloud





Pirates Shatter Filesharing Record With Game Of Thrones Season 5 Finale
Paul Lilly

HBO's wildly popular Game of Thrones just recently concluded its fifth season, which ranks as the most popular in the series to date in terms of viewership, both through official channels and via piracy. Speaking of the latter, the season finale set a new record in piracy with 1.5 million downloads in just eight hours. That number is expected to balloon to 10 million in the coming days.

It's not really a surprise that the finale attracted so many illegal downloads -- while this represents more than any previous season, breaking unsavory records like this one is nothing unusual when it comes to Game of Thrones. The series has built up a rabid fan base in its five seasons, and piracy is something that HBO has been dealing with since day one.

There was a time when HBO took this sort of thing in stride.

"I probably shouldn’t be saying this, but it is a compliment of sorts," HBO programming president Michael Lombardo said back in 2013. "The demand is there. And it certainly didn’t negatively impact the DVD sales. [Piracy is] something that comes along with having a wildly successful show on a subscription network."

That statement was in response to Game of Thrones breaking the record for the largest BitTorrent swarm ever, which has now been broken. However, it seems HBO is growing increasingly agitated at pirates plundering its content. Two months ago, HBO sent DMCA takedown notices to those who illegally viewed Game of Thrones. In that same month, it even threatened paying customers outside the U.S. who were using VPNs to access HBO Now.

Morality aside, one downside to pirating Game of Thrones is the picture quality. The majority of torrents out there are 480p copies.
http://hothardware.com/news/pirates-...-season-finale





The Only Internet Most Cubans Know Fits in a Pocket and Moves by Bus

Meet El Packete. OK, it’s a thumb drive. But to many residents of the Net’s lost island, it’s all they’ve got.
Susan Crawford

Last week I wrote about the dismal Internet access, or lack of it, in Cuba, where I recently visited. But due to a combination of resourcefulness and desperation, Cubans have devised a system whereby commercial content is easily available. By way of an informal but extraordinarily lucrative distribution chain — one guy told me the system generates $5 million in payments a month — anyone in Cuba who can pay can watch telenovelas, first-run Hollywood movies, brand-new episodes of Game of Thrones, and even search for a romantic partner. It’s called El Packete, and it arrives weekly in the form of thumb drives loaded with enormous digital files. Those drives make their way across the island from hand to hand, by bus, and by 1957 Chevy, their contents copied and the drive handed on.

In a sense, El Packete is a very slow high-capacity Internet access connection; someone (no one knows who) loads up those drives with online glitz and gets them to Cuban shores. As in the Hollywood system, there are distribution windows. If you can wait to watch your favorite show, you’ll pay less.

El Packete plays to Cuban strengths and needs: Cubans, several people told me, are great at sharing. And being paid to be part of the thumb-drive supply chain is a respectable job in an economy that is desperately short on employment opportunities.

So the reason for its popularity is no mystery. The real riddle is why this rogue system can operate under the tight governing regime. The Cuban government has to know that this underground operation impinges on its monopoly on information. The secret police call people in all the time to find out what’s going on. But for some reason El Packete isn’t a problem, while actual Internet access is.

Why?

For a possible answer, consider what’s happening in another control-crazy country — China. The Asian giant unveiled an alarming “national security law” earlier this month. As the New York Times reported, the new law doesn’t say much about “traditional security matters such as military power, counterespionage or defending the nation’s borders.” Instead, it’s focused on centralizing and consolidating the power of the state. The real threat to the Chinese government is an organized, energetic civil society, influenced by Western nonprofits, that might undermine the survival of the Communist Party. And so the law calls for all of these organizations to be officially sponsored, registered and regulated, and for all foreign companies to essentially agree to be surveilled at all times.

What China wants is for its people to be commercially active — building an enormous, consuming middle class — but politically passive. That may well be the thinking of the current Cuban leadership as it implicitly allows El Packete to circulate.

True, access to telenovelas and HBO series might make the Cuban people long for the air conditioners and dishwashers they see in the backgrounds of the dramas on the screen. But it won’t make them get up from their chairs and do anything to change the country. And so the breathtaking inequality of Cuba can continue, changing only incrementally and only at the pace with which the Cuban government is comfortable.

Cuban hotels, all government-owned, are places that Cubans can now visit and for which they can work; the bellman carrying your bags can make twenty times more in a single night than his wife, a dentist, can make in a month. That’s because he is paid in the touristic currency (itself worth many times more than the ordinary Cuban peso) and gets tips, while his wife must work within the government-controlled system. Living completely outside that system is possible, but suspect; you’ll be called in for interrogation.

Encouraging passive consumption: that’s the model of El Packete and the robotic ideal of the Cuban citizen now facilitated by the current regime.

All of this came home for me when I interviewed a young Cuban documentarian, a woman who had gone briefly to Colombia for a graduate program and now feels her mission in life is to help her country. She was both soft-voiced and determined; she began to cry as she told me that she has realized that the lack of Internet access is not only a problem for her generation but also for the whole country, because Cubans cannot participate as citizens through the Web. She said that even though everyone told her that she was going to get in trouble she needed to make a movie to tell this story.

She did make that movie, she called it “Offline,” and she handed me a copy. It’s like El Packete, going in the other direction, and this time with meaningful content: I brought it back with me to the U.S. I hope you will watch it today.
https://medium.com/backchannel/the-o...s-c96b7e82f7aa





Restaurateur Won’t Face the Music After Losing Copyright Suit
Kathianne Boniello

A New Jersey restaurateur is singing a defiant tune after getting sued for playing Elton John and Rolling Stones songs without permission.

Broadcast Music Inc., an artists-rights group that enforces copyright laws for more than 8.5 million songs, claims Amici III in Linden didn’t have a license when it played four tunes in its eatery one night last year, including the beloved “Bennie and the Jets” and “Brown Sugar.”

The company sued Amici III in New Jersey federal court, winning a $24,000 judgment earlier this year, as well as more than $8,200 in attorney’s fees.

But Amici owner Giovanni Lavorato simply won’t face the music.

“It’s not fair,” he told The Post. “They look to me for free money. I think they should stay away from restaurants that are trying to make a living.”

Lavorato, who has been in business for 25 years, says he has a license from the city of Linden that allows him to have live music in his establishment. The DJ brought into the eatery by his son also paid a fee to play tunes, Lavorato believes.

“It’s ridiculous for me to pay somebody also,” he said. “This is not a nightclub. This is not a disco joint . . . How many times do they want to get paid for the stupid music?”

Lavorato is doing it his way.

“I don’t talk to the judges. I don’t talk to anybody. I just don’t want to talk to any of these people, because it’s illegal to try and take money from people,” he insists.

“I’m in the restaurant business, not the entertainment business. They should stay away from me.”

BMI regularly sues eateries, bars and other businesses for playing music without coughing up licensing fees, which range from $357 annually for a jukebox to $5.85 per audience member for a week’s worth of live performances.

Some places can end up paying more than $10,000 for the right to play music in public.

“They’re very aggressive about policing their intellectual-property rights,” said attorney Dante Rohr.

BMI went after a number of Jersey Shore venues recently, including Bobby Dee’s Rock ’N Chair in Avalon, NJ, last year, alleging the restaurant played seven songs, including the Stones’ “Honky Tonk Woman,” without first paying BMI.

Rohr, who represented Bobby Dee’s, said it makes sense for businesses to be in harmony with licensing groups like BMI. “It’s actually foolish not to do it,” he said, noting the licensing cost is less than a legal proceeding.

That Bobby Dee’s case was settled for an undisclosed sum. Flip Flopz Beach Bar and Grill in North Wildwood, which was sued by BMI for playing songs like the Red Hot Chili Peppers’ “Otherside,” also settled its case.

“Most of the business people have no clue that these songs are copyrighted and they need to have licenses to play,” attorney Keith Bonchi, who represented Flip Flopz, said.

“Businesses have to deal with so much regulation, this just becomes another expense for them,” he said.

Musicians rely on the money from BMI’s licenses, and lawsuits against businesses that don’t comply are a tool of last resort for the organization, said a spokeswoman, who added that BMI had been reaching out to Amici III for several years about obtaining a music license before taking it to court.
http://nypost.com/2015/06/15/sean-le...arents-tossed/





U.K. Newspaper Tries to Silence Glenn Greenwald Criticism with Copyright Claim
Kevin Collier

Accused of publishing government propaganda against NSA whistleblower Edward Snowden, the Sunday Times is using copyright to hit back at its strongest critic.

In a paywalled feature published Sunday, titled “British spies betrayed to Russians and Chinese,” three authors, citing anonymous government sources, claim that “Russia and China have cracked the top-secret cache of files stolen by the fugitive U.S. whistleblower Edward Snowden.” In turn, the Times’s sources say, the U.K. had to relocate special agents around the world who were allegedly in harm’s way.

In an extremely critical takedown post, The Intercept’s Glenn Greenwald, the journalist Snowden first met with after fleeing the U.S., denied many of the details in the Times story. In particular, the Times claimed that Greenwald’s partner, David Miranda, met with Snowden in Moscow to receive more documents—a claim that’s since been deleted from the Times article.

Greenwald’s post also includes a screengrab of the Times’s layout—and that’s what the Times used to pounce on their high-profile critic. In a legal notice sent Monday, the paper cites the Digital Millennium Copyright Act (DMCA) and claims the Intercept is violating the Times’s copyright of “the typographical arrangement of the front page.”

“If Greenwald were selling a book of Great Covers of the Sunday Times, they'd have a case,” Parker Higgins, an activist at the Electronic Frontier Foundation who specializes in intellectual property, told the Daily Dot. “But this is grasping at straws and attempting to use the strictest takedown law available—copyright—just to silence criticism.”

There’s a long history of people accused of using online copyright law to censor critics; a recent smattering includes California mayors, lawyers, Drake’s label, and Ecuador. The Times didn’t respond to the Daily Dot’s question of just how frequently it issues those claims to other news outlets.

It’s not likely to have much effect on the Intercept’s story, though. When the Daily Dot asked Greenwald if he would abide the DMCA takedown, he simply responded “No.”
http://www.dailydot.com/politics/sno...dmca-takedown/





The Sunday Times’ Snowden Story is Journalism at its Worst — and Filled with Falsehoods
Glenn Greenwald

Western journalists claim that the big lesson they learned from their key role in selling the Iraq War to the public is that it’s hideous, corrupt and often dangerous journalism to give anonymity to government officials to let them propagandize the public, then uncritically accept those anonymously voiced claims as Truth. But they’ve learned no such lesson. That tactic continues to be the staple of how major US and British media outlets “report,” especially in the national security area. And journalists who read such reports continue to treat self-serving decrees by unnamed, unseen officials – laundered through their media – as gospel, no matter how dubious are the claims or factually false is the reporting.

We now have one of the purest examples of this dynamic. Last night, the Murdoch-owned Sunday Times published their lead front-page Sunday article, headlined “British Spies Betrayed to Russians and Chinese.” Just as the conventional media narrative was shifting to pro-Snowden sentiment in the wake of a key court ruling and a new surveillance law, the article (behind a paywall: full text here) claims in the first paragraph that these two adversaries “have cracked the top-secret cache of files stolen by the fugitive US whistleblower Edward Snowden, forcing MI6 to pull agents out of live operations in hostile countries, according to senior officials in Downing Street, the Home Office and the security services.” It continues:

“Western intelligence agencies say they have been forced into the rescue operations after Moscow gained access to more than 1m classified files held by the former American security contractor, who fled to seek protection from Vladimir Putin, the Russian president, after mounting one of the largest leaks in US history.

Senior government sources confirmed that China had also cracked the encrypted documents, which contain details of secret intelligence techniques and information that could allow British and American spies to be identified.

One senior Home Office official accused Snowden of having “blood on his hands”, although Downing Street said there was “no evidence of anyone being harmed”.

Aside from the serious retraction-worthy fabrications on which this article depends – more on those in a minute – the entire report is a self-negating joke. It reads like a parody I might quickly whip up in order to illustrate the core sickness of western journalism.

Unless he cooked an extra-juicy steak, how does Snowden “have blood on his hands” if there is “no evidence of anyone being harmed?” As one observer put it last night in describing the government instructions these Sunday Times journalists appear to have obeyed: “There’s no evidence anyone’s been harmed but we’d like the phrase ‘blood on his hands’ somewhere in the piece.”

The whole article does literally nothing other than quote anonymous British officials. It gives voice to banal but inflammatory accusations that are made about every whistleblower from Daniel Ellsberg to Chelsea Manning. It offers zero evidence or confirmation for any of its claims. The “journalists” who wrote it neither questioned any of the official assertions nor even quoted anyone who denies them. It’s pure stenography of the worst kind: some government officials whispered these inflammatory claims in our ears and told us to print them, but not reveal who they are, and we’re obeying. Breaking!

Stephen Colbert captured this exact pathology with untoppable precision in his 2006 White House Correspondents speech, when he mocked American journalism to the faces of those who practice it:

“But, listen, let’s review the rules. Here’s how it works.The President makes decisions. He’s the decider. The press secretary announces those decisions, and you people of the press type those decisions down. Make, announce, type. Just put ’em through a spell check and go home. Get to know your family again. Make love to your wife. Write that novel you got kicking around in your head. You know, the one about the intrepid Washington reporter with the courage to stand up to the administration? You know, fiction!”

The Sunday Times article is even worse because it protects the officials they’re serving with anonymity. The beauty of this tactic is that the accusations can’t be challenged. The official accusers are being hidden by the journalists so nobody can confront them or hold them accountable when it turns out to be false. The evidence can’t be analyzed or dissected because there literally is none: they just make the accusation and, because they’re state officials, their media-servants will publish it with no evidence needed. And as is always true, there is no way to prove the negative. It’s like being smeared by a ghost with a substance that you can’t touch.

This is the very opposite of journalism. Ponder how dumb someone has to be at this point to read an anonymous government accusation, made with zero evidence, and accept it as true.

But it works. Other news agencies mindlessly repeated the Sunday Times claims far and wide. I watched last night as American and British journalists of all kinds reacted to the report on Twitter: by questioning none of it. They did the opposite: they immediately assumed it to be true, then spent hours engaged in somber, self-serious discussions with one another over what the geopolitical implications are, how the breach happened, what it means for Snowden, etc. This is the formula that shapes their brains: anonymous self-serving government assertions = Truth.

By definition, authoritarians reflexively believe official claims – no matter how dubious or obviously self-serving, even when made while hiding behind anonymity – because that’s how their submission functions. Journalists who practice this sort of primitive reporting – I uncritically print what government officials tell me, and give them anonymity so they have no accountability for any of it – do so out of a similar authoritarianism, or uber-nationalism, or laziness, or careerism. Whatever the motives, the results are the same: government officials know they can propagandize the public at any time because subservient journalists will give them anonymity to do so and will uncritically disseminate and accept their claims.

At this point, it’s hard to avoid the conclusion that journalists want it this way. It’s impossible that they don’t know better. The exact kinds of accusations laundered in the Sunday Times today are made – and then disproven – in every case where someone leaks unflattering information about government officials.

In the early 1970s, Nixon officials such as John Ehrlichman and Henry Kissinger planted accusations in the U.S. media that Daniel Ellsberg had secretly given the Pentagon Papers and other key documents to the Soviet Union; everyone now knows this was a lie, but at the time, American journalists repeated it constantly, helping to smear Ellsberg. That’s why Ellsberg has constantly defended Snowden and Chelsea Manning from the start: because the same tactics were used to smear him.

The same thing happened with Chelsea Manning. When WikiLeaks first began publishing the Afghan War logs, U.S. officials screamed that they – all together now – had “blood on their hands.” But when some journalists decided to scrutinize rather than mindlessly repeat the official accusation (i.e., some decided to do journalism), they found it was a fabrication.

Writing under the headline “US officials privately say WikiLeaks damage limited,” Reuters’ Mark Hosenball reported that “internal U.S. government reviews have determined that a mass leak of diplomatic cables caused only limited damage to U.S. interests abroad, despite the Obama administration’s public statements to the contrary.”

An AP report was headlined “AP review finds no WikiLeaks sources threatened,” and explained that “an Associated Press review of those sources raises doubts about the scope of the danger posed by WikiLeaks’ disclosures and the Obama administration’s angry claims, going back more than a year, that the revelations are life-threatening.” Months earlier, McClatchy’s Nancy Youssef wrote an article headlined “Officials may be overstating the dangers from WikiLeaks,” and she noted that “despite similar warnings ahead of the previous two massive releases of classified U.S. intelligence reports by the website, U.S. officials concede that they have no evidence to date that the documents led to anyone’s death.”

Now we have exactly the same thing here. There’s an anonymously made claim that Russia and China “cracked the top-secret cache of files” from Snowden, but there is literally zero evidence for that claim. These hidden officials also claim that American and British agents were unmasked and had to be rescued, but not a single one is identified. There is speculation that Russia and China learned things from obtaining the Snowden files, but how could these officials possibly know that, particularly since other government officials are constantly accusing both countries of successfully hacking sensitive government databases?

What kind of person would read evidence-free accusations of this sort from anonymous government officials – designed to smear a whistleblower they hate – and believe them? That’s a particularly compelling question given that Vice’s Jason Leopold just last week obtained and published previously secret documents revealing a coordinated smear campaign in Washington to malign Snowden. Describing those documents, he reported: “A bipartisan group of Washington lawmakers solicited details from Pentagon officials that they could use to ‘damage’ former NSA contractor Edward Snowden’s ‘credibility in the press and the court of public opinion.'”

Manifestly then, the “journalism” in this Sunday Times article is as shoddy and unreliable as it gets. Worse, its key accusations depend on retraction-level lies.

The government accusers behind this story have a big obstacle to overcome: namely, Snowden has said unequivocally that when he left Hong Kong, he took no files with him, having given them to the journalists with whom he worked, and then destroying his copy precisely so that it wouldn’t be vulnerable as he traveled. How, then, could Russia have obtained Snowden’s files as the story claims – “his documents were encrypted but they weren’t completely secure” – if he did not even have physical possession of them?

The only way this smear works is if they claim Snowden lied, and that he did in fact have files with him after he left Hong Kong. The Sunday Times journalists thus include a paragraph that is designed to prove Snowden lied about this, that he did possess these files while living in Moscow:

“It is not clear whether Russia and China stole Snowden’s data, or whether he voluntarily handed over his secret documents in order to remain at liberty in Hong Kong and Moscow.

David Miranda, the boyfriend of the Guardian journalist Glenn Greenwald, was seized at Heathrow in 2013 in possession of 58,000 “highly classified” intelligence documents after visiting Snowden in Moscow.”

What’s the problem with that Sunday Times passage? It’s an utter lie. David did not visit Snowden in Moscow before being detained. As of the time he was detained in Heathrow, David had never been to Moscow and had never met Snowden. The only city David visited on that trip before being detained was Berlin, where he stayed in the apartment of Laura Poitras.

The Sunday Times “journalists” printed an outright fabrication in order to support their key point: that Snowden had files with him in Moscow. This is the only “fact” included in their story that suggests Snowden had files with him when he left Hong Kong, and it’s completely, demonstrably false (and just by the way: it’s 2015, not 1971, so referring to gay men in a 10-year spousal relationship with the belittling term “boyfriends” is just gross).

Then there’s the Sunday Times claim that “Snowden, a former contractor at the CIA and National Security Agency (NSA), downloaded 1.7m secret documents from western intelligence agencies in 2013.” Even the NSA admits this claim is a lie. The NSA has repeatedly said that it has no idea how many documents Snowden downloaded and has no way to find out. As the NSA itself admits, the 1.7 million number is not the number the NSA claims Snowden downloaded – they admit they don’t and can’t know that number – but merely the amount of documents he interacted with in his years of working at NSA. Here’s then-NSA chief Keith Alexander explaining exactly that in a 2014 interview with the Australian Financial Review:

“AFR: Can you now quantify the number of documents [Snowden] stole?

Gen. Alexander: Well, I don’t think anybody really knows what he actually took with him, because the way he did it, we don’t have an accurate way of counting. What we do have an accurate way of counting is what he touched, what he may have downloaded, and that was more than a million documents.”

Let’s repeat that: “I don’t think anybody really knows what he actually took with him, because the way he did it, we don’t have an accurate way of counting.” Yet someone whispered to the Sunday Times reporters that Snowden downloaded 1.7 million documents, so like the liars and propagandists that they are, they mindlessly printed it as fact. That’s what this whole article is.

Then there’s the claim that the Russian and Chinese governments learned the names of covert agents by cracking the Snowden file, “forcing MI6 to pull agents out of live operations in hostile countries.” This appears quite clearly to be a fabrication by the Sunday Times for purposes of sensationalism, because if you read the actual anonymous quotes they include, not even the anonymous officials claim that Russia and China hacked the entire archive, instead offering only vague assertions that Russia and China “have information.”

Beyond that, how could these hidden British officials possibly know that China and Russia learned things from the Snowden files as opposed to all the other hacking and spying those countries do? Moreover, as pointed out last night by my colleague Ryan Gallagher – who has worked for well over a year with the full Snowden archive – “I’ve reviewed the Snowden documents and I’ve never seen anything in there naming active MI6 agents.” He also said: “I’ve seen nothing in the region of 1m documents in the Snowden archive, so I don’t know where that number has come from.”

Finally, none of what’s in the Sunday Times is remotely new. US and UK government officials and their favorite journalists have tried for two years to smear Snowden with these same claims. In June, 2013, the New York Times gave anonymity to “two Western intelligence experts, who worked for major government spy agencies” who “said they believed that the Chinese government had managed to drain the contents of the four laptops that Mr. Snowden said he brought to Hong Kong.” The NYT‘s Public Editor chided the paper for printing that garbage, and as I reported in my book, then-editor-in-chief Jill Abramson told the Guardian‘s Janine Gibson that they should not have printed that, calling it “irresponsible.” (And that’s to say nothing of the woefully ignorant notion that Snowden – or anyone else these days – stores massive amounts of data on “four laptops” as opposed to tiny thumb drives).

The GOP’s right-wing extremist Congressman Mike Rogers constantly did the same thing. He once announced with no evidence that “Snowden is working with Russia” – a claim even former CIA Deputy Director Michael Morell denies – and also argued that Snowden should “be charged with murder” for causing unknown deaths. My personal favorite example of this genre of reckless, desperate smears is the Op-Ed which the Wall Street Journal published in May, 2014, by neocon Edward Jay Epstein, which had this still-hilarious paragraph:

“A former member of President Obama’s cabinet went even further, suggesting to me off the record in March this year that there are only three possible explanations for the Snowden heist: 1) It was a Russian espionage operation; 2) It was a Chinese espionage operation, or 3) It was a joint Sino-Russian operation.”

It must be one of those, an anonymous official told me! It must be! Either Russia did it. Or China did it. Or they did it together! That is American journalism.

The Sunday Times today merely recycled the same evidence-free smears that have been used by government officials for years – not only against Snowden, but all whistleblowers – and added a dose of sensationalism and then baked it with demonstrable lies. That’s just how western journalism works, and it’s the opposite of surprising. But what is surprising, and grotesque, is how many people (including other journalists) continue to be so plagued by some combination of stupidity and gullibility, so that no matter how many times this trick is revealed, they keep falling for it. If some anonymous government officials said it, and journalists repeat it while hiding who they are, I guess it must be true.



UPDATE: The Sunday Times has now quietly deleted one of the central, glaring lies in its story: that David Miranda had just met with Snowden in Moscow when he was detained at Heathrow carrying classified documents. By “quietly deleted,” I mean just that: they just removed it from their story without any indication or note to their readers that they’ve done so (though it remains in the print edition and thus requires a retraction). That’s indicative of the standard of “journalism” for the article itself. Multiple other falsehoods, and all sorts of shoddy journalistic practices, remain thus far unchanged.
https://firstlook.org/theintercept/2...ed-falsehoods/





Huge Loss For Free Speech In Europe: Human Rights Court Says Sites Liable For User Comments
Mike Masnick

Last year we wrote about a very dangerous case going to the European Court of Human Rights: Delfi AS v. Estonia, which threatened free expression across Europe. Today, the ruling came out and it's a disaster. In short, websites can be declared liable for things people post in comments. As we explained last year, the details of the case were absolutely crazy. The court had found that even if a website took down comments after people complained, it could still be held liable because it should have anticipated bad comments in the first place. Seriously. In this case, the website had published what everyone agrees was a "balanced" article about "a matter of public interest," but the court held that the publisher should have known that people would post nasty comments, and therefore, even though it had an automated system to remove comments that people complained about, it was still liable for them.

The European Court of Human Rights agreed to rehear the case, and we hoped for a better outcome this time around -- but those hopes have been dashed. The ruling is terrible through and through. First off, it insists that the comments on the news story were clearly "hate speech" and that, as such, "did not require any linguistic or legal analysis since the remarks were on their face manifestly unlawful." To the court, this means that it's obvious such comments should have been censored straight out. That's troubling for a whole host of reasons at the outset, and highlights the problematic views of expressive freedom in Europe. Even worse, however, the Court then notes that freedom of expression is "interfered with" by this ruling, but it doesn't seem to care -- saying that it is deemed "necessary in a democratic society."

Think about that for a second.

The Court tries to play down the impact of this ruling, by saying it doesn't apply to any open forum, but does apply here because Delfi was a giant news portal, and thus (1) had the ability to check with lawyers about this and (2) was publishing the story and opening it up for comments.

The rest of the ruling is... horrific. It keeps going back to this "hate speech" v. "free speech" dichotomy as if it's obvious, and even tries to balance the "right to protection of reputation" against the right of freedom of expression. In other words, it's the kind of ridiculous ruling that will make true free expression advocates scream.

When examining whether there is a need for an interference with freedom of expression in a democratic society in the interests of the “protection of the reputation or rights of others”, the Court may be required to ascertain whether the domestic authorities have struck a fair balance when protecting two values guaranteed by the Convention which may come into conflict with each other in certain cases, namely on the one hand freedom of expression protected by Article 10, and on the other the right to respect for private life enshrined in Article 8

And the court insists that the two things -- reputation protection and free speech "deserve equal respect." That's bullshit, frankly. The whole concept of a right to a reputation makes no sense at all. Your reputation is based on what people think of you. You have no control over what other people think. You can certainly control your own actions, but what people think of you?

The court sets up a series of areas to explore in determining if Delfi should be held liable for those comments. In the US, thanks to Section 230 of the CDA, we already know the answer here would be "hell no." But without a Section 230 in Europe -- and with the bizarre ideas mentioned above -- things get tricky quickly. So even though the court readily agrees that the article Delfi published "was a balanced one, contained no offensive language and gave rise to no arguments about unlawful statements" it still puts the liability on Delfi. Because the site wanted comments. It actually argues that because Delfi is a professional site and thus comments convey economic advantage, Delfi is liable:

As regards the context of the comments, the Court accepts that the news article about the ferry company, published on the Delfi news portal, was a balanced one, contained no offensive language and gave rise to no arguments about unlawful statements in the domestic proceedings. The Court is aware that even such a balanced article on a seemingly neutral topic may provoke fierce discussions on the Internet. Furthermore, it attaches particular weight, in this context, to the nature of the Delfi news portal. It reiterates that Delfi was a professionally managed Internet news portal run on a commercial basis which sought to attract a large number of comments on news articles published by it. The Court observes that the Supreme Court explicitly referred to the fact that the applicant company had integrated the comment environment into its news portal, inviting visitors to the website to complement the news with their own judgments and opinions (comments). According to the findings of the Supreme Court, in the comment environment, the applicant company actively called for comments on the news items appearing on the portal. The number of visits to the applicant company’s portal depended on the number of comments; the revenue earned from advertisements published on the portal, in turn, depended on the number of visits. Thus, the Supreme Court concluded that the applicant company had an economic interest in the posting of comments. In the view of the Supreme Court, the fact that the applicant company was not the writer of the comments did not mean that it had no control over the comment environment...

Also? Having "rules" posted for comments somehow increases the site's liability, rather than lessens it as any sane person would expect:

The Court also notes in this regard that the “Rules of comment” on the Delfi website stated that the applicant company prohibited the posting of comments that were without substance and/or off-topic, were contrary to good practice, contained threats, insults, obscene expressions or vulgarities, or incited hostility, violence or illegal activities. Such comments could be removed and their authors’ ability to post comments could be restricted. Furthermore, the actual authors of the comments could not modify or delete their comments once they were posted on the applicant company’s news portal – only the applicant company had the technical means to do this. In the light of the above and the Supreme Court’s reasoning, the Court agrees with the Chamber’s finding that the applicant company must be considered to have exercised a substantial degree of control over the comments published on its portal.

Yes, that's right. They get in more trouble for posting rules saying behave. It's incredible.

The next key finding: because commenters are anonymous and anonymity is important -- and because it's difficult to identify anonymous commenters -- well, fuck it, just put the liability on the site instead. That really does seem to be the reasoning:

According to the Supreme Court’s judgment in the present case, the injured person had the choice of bringing a claim against the applicant company or the authors of the comments. The Court considers that the uncertain effectiveness of measures allowing the identity of the authors of the comments to be established, coupled with the lack of instruments put in place by the applicant company for the same purpose with a view to making it possible for a victim of hate speech to effectively bring a claim against the authors of the comments, are factors that support a finding that the Supreme Court based its judgment on relevant and sufficient grounds. The Court also refers, in this context, to the Krone Verlag (no. 4) judgment, where it found that shifting the risk of the defamed person obtaining redress in defamation proceedings to the media company, which was usually in a better financial position than the defamer, was not as such a disproportionate interference with the media company’s right to freedom of expression....

Further on the question of liability, the court finds that because Delfi's filter wasn't good enough, that exposes it to more liability. I wish I were making this up.

Thus, the Court notes that the applicant company cannot be said to have wholly neglected its duty to avoid causing harm to third parties. Nevertheless, and more importantly, the automatic word-based filter used by the applicant company failed to filter out odious hate speech and speech inciting violence posted by readers and thus limited its ability to expeditiously remove the offending comments. The Court reiterates that the majority of the words and expressions in question did not include sophisticated metaphors or contain hidden meanings or subtle threats. They were manifest expressions of hatred and blatant threats to the physical integrity of L. Thus, even if the automatic word-based filter may have been useful in some instances, the facts of the present case demonstrate that it was insufficient for detecting comments whose content did not constitute protected speech under Article 10 of the Convention.... The Court notes that as a consequence of this failure of the filtering mechanism, such clearly unlawful comments remained online for six weeks....

Then the court says that because the "victims" of "hate speech" can't police the interwebs, clearly it should be the big companies' responsibility instead:

Moreover, depending on the circumstances, there may be no identifiable individual victim, for example in some cases of hate speech directed against a group of persons or speech directly inciting violence of the type manifested in several of the comments in the present case. In cases where an individual victim exists, he or she may be prevented from notifying an Internet service provider of the alleged violation of his or her rights. The Court attaches weight to the consideration that the ability of a potential victim of hate speech to continuously monitor the Internet is more limited than the ability of a large commercial Internet news portal to prevent or rapidly remove such comments.

Finally, the court says that since the company has stayed in business and is still publishing, despite the earlier ruling, it proves that this ruling is no big deal for free speech.

The Court also observes that it does not appear that the applicant company had to change its business model as a result of the domestic proceedings. According to the information available, the Delfi news portal has continued to be one of Estonia’s largest Internet publications and by far the most popular for posting comments, the number of which has continued to increase. Anonymous comments – now existing alongside the possibility of posting registered comments, which are displayed to readers first – are still predominant and the applicant company has set up a team of moderators carrying out follow-up moderation of comments posted on the portal (see paragraphs 32 and 83 above). In these circumstances, the Court cannot conclude that the interference with the applicant company’s freedom of expression was disproportionate on that account either.

The ruling is about as bad as you can imagine. It is absolutely going to chill free expression across Europe. Things are a bit confusing because the EU Court of Justice has actually been much more concerned about issues of intermediary liability, and this ruling contradicts some of those rulings, but since the two courts are separate and not even part of the same system, it's not clear what jurisdiction prevails. It is quite likely, however, that many will seize upon this European Court of Human Rights ruling to go after many websites that allow comments and free expression in an attempt to block it. It is going to force many sites to either shut down open comments, curtail forums or moderate them much more seriously.

For a Europe that is supposedly trying to build up a bigger internet industry, this ruling is a complete disaster, considering just how much internet innovation is based on enabling and allowing free expression.

There is a dissenting opinion from two judges on the court, who note the "collateral censorship" that is likely to occur out of all of this.

In this judgment the Court has approved a liability system that imposes a requirement of constructive knowledge on active Internet intermediaries (that is, hosts who provide their own content and open their intermediary services for third parties to comment on that content). We find the potential consequences of this standard troubling. The consequences are easy to foresee. For the sake of preventing defamation of all kinds, and perhaps all “illegal” activities, all comments will have to be monitored from the moment they are posted. As a consequence, active intermediaries and blog operators will have considerable incentives to discontinue offering a comments feature, and the fear of liability may lead to additional self-censorship by operators. This is an invitation to self-censorship at its worst.

It further notes how this works -- in such a simple manner it's disturbing that the court didn't get it:

Governments may not always be directly censoring expression, but by putting pressure and imposing liability on those who control the technological infrastructure (ISPs, etc.), they create an environment in which collateral or private-party censorship is the inevitable result. Collateral censorship “occurs when the state holds one private party A liable for the speech of another private party B, and A has the power to block, censor, or otherwise control access to B’s speech”. Because A is liable for someone else’s speech, A has strong incentives to over-censor, to limit access, and to deny B’s ability to communicate using the platform that A controls. In effect, the fear of liability causes A to impose prior restraints on B’s speech and to stifle even protected speech. “What looks like a problem from the standpoint of free expression ... may look like an opportunity from the standpoint of governments that cannot easily locate anonymous speakers and want to ensure that harmful or illegal speech does not propagate.” These technological tools for reviewing content before it is communicated online lead (among other things) to: deliberate overbreadth; limited procedural protections (the action is taken outside the context of a trial); and shifting of the burden of error costs (the entity in charge of filtering will err on the side of protecting its own liability, rather than protecting freedom of expression).

It's disappointing they were unable to convince their colleagues on this issue. This ruling is going to cause serious problems in Europe.
https://www.techdirt.com/articles/20...comments.shtml





South Korea Provokes Teenage Smartphone Privacy Row
Stephen Evans

How much control should parents have over teenagers' web browsing?

That's the debate raging in South Korea at the moment.

Because the government has ruled that people under 19 who buy a smartphone must install an app that monitors their web activity.

Parents will be able to see what their kids are up to online and block access to "undesirable" sites.

Failure to install such an app means the phone won't work.

Is it a triumph of good sense or a paternalistic government going too far, especially when you consider that many of these youngsters are old enough to vote in other countries or serve in the military?

'Harmful content'

The government has developed its own monitoring app called Smart Sheriff, but there are more than a dozen alternatives on the market.

Phone stores now have posters at the entrance saying: "Young smartphone users, you must install apps that block harmful content."

Apple loophole

There is no opt-out.

But there are loopholes, including the fact that the Communications Commission assumes that in the land of Samsung everybody prefers Android to Apple, so, according to the critics, those with iPhones can get round the rule.

The government argument is simple but powerful: there is a pit of nastiness on the web and young people should be protected from it.

The opponents' argument is also simple and powerful: it's about personal freedom.

Children have to be allowed to roam in cyberspace - just like in physical space - to learn how to cope with life's difficulties, as well as enjoy life's pleasures.

And even if parts of the internet should be closed off to children, it's for parents to decide where the barriers should be, not the government.

Blocking access to a list of forbidden sites through a smartphone app is a step too far, they argue.

Some of the apps monitor particular words and phrases, then alert parents when these triggers are put into search engines.

Examples include "threat", "run away from home", "pregnancy", and "crazy". There are many, many more.
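The trigger mechanism the article describes can be sketched in a few lines; this is a toy illustration, not code from Smart Sheriff or any real monitoring app, and the trigger list here is just the examples quoted above.

```python
# Toy sketch of the keyword-trigger mechanism described above.
# Real apps reportedly use far longer lists; these are the article's examples.
TRIGGER_WORDS = {"threat", "run away from home", "pregnancy", "crazy"}

def check_search_query(query: str) -> list[str]:
    """Return any trigger phrases found in a search query (case-insensitive)."""
    q = query.lower()
    return sorted(w for w in TRIGGER_WORDS if w in q)

# A flagged query alerts the parent; an ordinary query passes silently.
assert check_search_query("how to run away from home") == ["run away from home"]
assert check_search_query("weather tomorrow") == []
```

Even this simple substring matching shows why critics worry: the same mechanism that flags "threat" will flag any query containing it, with no sense of context.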

Kim Kha Yeun, a lawyer at the Open Net Korea organisation, which is trying to get the compulsory installation of the app blocked, said: "It is the same as installing a surveillance camera on teenagers' smartphones."

Open Net Korea also fears that the list of banned sites could expand at the behest of politicians for political reasons.

Smartphone rules

There is a tradition of paternalism in Korea.

South Korea was industrialised at the direction of the president, so it may be that what is tolerated in hi-tech Korea, where 8-out-of-10 teenagers own a smartphone, would not be tolerated elsewhere.

There have already been attempts to control the way citizens use technology.

For example, a default shutter-click sound has been introduced to smartphone cameras to discourage perverts from taking surreptitious, voyeuristic photos of people on trains, in changing rooms, or other public places.

But the small number of convictions for such an offence would indicate that the truly determined are managing to switch off this sound effect anyway.

In the shadow of dictatorship

South Korea is a vibrant democracy.

It's had free and fair elections since 1987. But paternalism doesn't have the bad name it might have in some other democracies.

That is partly because the track record of strong government is good, in the eyes of many Koreans.

The country was modernised rapidly under the firm leadership of a paternalistic president.

Major-General Park Chung-hee took power in a coup in 1961.

He was a strongman who utilised brutal methods - but he also dictated that industries be created.

Under his direction, the South Korean economic phenomenon was born.

Koreans know that.

And the current president knows that. She should do - Park Geun-hye is the dictator's daughter.

'Nasty stuff'

When the BBC talked to teenagers aged 18 and under, they resented being made to install Smart Sheriff or its alternatives.

At Seoul Global High School, Won June-Lee, Yerim Jin and Minjun Kim were studying 1984 - the George Orwell novel in which Big Brother first appears - when the BBC visited.

Their opinions all followed the same line: parents are right to have fears about what children are doing on the internet, but the kids are also entitled to challenge and negotiate what they are allowed to see.

And learning to control what kinds of media are encountered on the net is now a part of growing up, they argued.

Modern South Korea is struggling to come to terms with its past.

It is a country seemingly addicted to technology, but also accepting of paternalistic government; a vibrant democracy built on economic foundations laid by a despot.

Big Brother may have been tolerated in the past, but now he has to argue his case.
http://www.bbc.com/news/technology-3...lflow_facebook





'600 Million' Samsung Mobiles Vulnerable To Keyboard Cracking Attack
Thomas Fox-Brewster

Given everything that’s occurred over the last two years, Android phone owners would be forgiven for thinking major manufacturers had their backs when it came to security, especially encryption. But a serious issue affecting a default keyboard in as many as 600 million Samsung mobiles highlights just how wrong that assumption can be.

The problem, uncovered by Ryan Welton from mobile security specialists NowSecure, was a blatant one: the SwiftKey keyboard pre-installed on Samsung phones looked for language pack updates over unencrypted lines, in plain text. That meant it was possible for Welton to create a spoof proxy server and send malicious security updates to affected devices, along with some validating data to ensure the bad code remained on the device. This gave him a hook from which to find ways to escalate his attack and exploit the device without the users’ knowledge.
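The root cause described above is that updates arrived over plain HTTP with no integrity check, so anything an on-path proxy injects is accepted. A minimal sketch of the missing defence (the payloads and pinned hash here are purely illustrative, not SwiftKey's actual update format):

```python
import hashlib

# Over plain HTTP, an on-path attacker can rewrite any response, so a client
# must verify integrity itself -- e.g. against a hash (or signature) obtained
# over a trusted channel. SwiftKey's updater did no such check.

PINNED_SHA256 = hashlib.sha256(b"legitimate language pack v1").hexdigest()

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Accept an update only if its hash matches the pinned value."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# The genuine payload passes the check...
assert verify_update(b"legitimate language pack v1", PINNED_SHA256)
# ...while a payload swapped in by a spoofed proxy (as in Welton's attack) fails.
assert not verify_update(b"malicious code injected in transit", PINNED_SHA256)
```

With no such verification on the device, the spoofed proxy's "update" was installed and executed as trusted code.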

In more malicious hands, the exploit could be used to give an attacker system-user privileges, allowing them to siphon off contact data, text messages, bank logins and most information the victim would have considered private. It could also be used to monitor users from afar.

Having been alerted to the issue back in November 2014, Samsung told NowSecure it was working on a patch and eventually delivered one to carrier networks in late March for Android 4.2 and above, according to NowSecure CEO Andrew Hoog. But the company believes current devices are still vulnerable.

Welton, who today detailed the exploit at the Blackhat Security Summit in London, tested a Samsung Galaxy S6 running on Verizon and claimed to have replicated the attack. “We can confirm that we have found the flaw still unpatched on the Galaxy S6 for the Verizon and Sprint networks, in off the shelf tests we did over the past couple of days,” a NowSecure spokesperson confirmed. Hoog said the flaw likely affected the majority of Samsung Android devices, including the S3, S4, S5, and Galaxy Note 3 and 4.

FORBES has contacted Verizon and Sprint about the issue. Verizon had not responded at the time of publication; Sprint declined to comment.

A SwiftKey spokesperson said: “We’ve seen reports of a security issue related to the Samsung keyboard. We can confirm that the SwiftKey Keyboard apps available via Google Play or the Apple App Store are not affected by this vulnerability. We take reports of this manner very seriously and are currently investigating further.”

NowSecure noted this does not mean users can simply download a fresh version of SwiftKey from either of the two stores. They still require a carrier upgrade for the vulnerability to be removed.

Users have been left in the lurch somewhat, as the keyboard can’t be uninstalled and even when it’s not the default keyboard, it can still be exploited, said Welton. Until patches are ready, users of Samsung phones should be careful about what networks they’re using and ask their carrier if a patch for the vulnerability is available.

Samsung had not responded to a request for comment at the time of publication. One saving grace for the South Korean manufacturer is that an attacker has to find a way onto the same network as a user before exploiting the bug, though identifying Samsung Galaxy S6 phones should be trivial for seasoned hackers sitting on the same Wi-Fi as their targets. Fully remote attacks are also feasible by hijacking the Domain Name System (DNS), the network layer that directs user traffic to the right website after they ask to visit a particular URL, or by compromising a router or internet service provider from afar, Welton said.
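The remote DNS-hijack variant Welton describes works because a plain-HTTP client trusts whatever address the resolver hands back. A stripped-down sketch (hostnames and addresses are hypothetical):

```python
# Why DNS hijacking enables the same attack remotely: with no TLS certificate
# to check, the client cannot tell that the resolver has pointed the update
# host at an attacker's machine instead of the real server.

LEGIT_DNS = {"updates.example-keyboard.com": "203.0.113.10"}      # real server
POISONED_DNS = {"updates.example-keyboard.com": "198.51.100.66"}  # attacker's box

def resolve_update_host(resolver: dict, hostname: str) -> str:
    """Return whichever IP the (possibly compromised) resolver provides."""
    return resolver[hostname]

assert resolve_update_host(LEGIT_DNS, "updates.example-keyboard.com") == "203.0.113.10"
# A poisoned resolver silently redirects the plaintext update fetch.
assert resolve_update_host(POISONED_DNS, "updates.example-keyboard.com") == "198.51.100.66"
```

HTTPS would close this hole: the attacker's server could receive the connection but could not present a valid certificate for the update host.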

Hoog told FORBES that it was the users who carried the majority of the risk when it came to flaws such as this. He believes the mobile security industry has been focused on the “wrong problem”, namely malware. “What we’re finding is that the real problem is leaky apps.”

Welton noted the same issue could be used to exploit the hugely popular Talking Tom app, allowing attackers to install other apps and escalate the compromise further.
http://www.forbes.com/sites/thomasbr...acking-attack/





Apple CORED: Boffins Reveal Password-Killer 0-Days for iOS and OS X

Keychains raided, sandboxes busted, passwords p0wned, but Apple silent for six months
Darren Pauli

Six university researchers have revealed deadly zero-day flaws in Apple's iOS and OS X, claiming it is possible to crack Apple's password-storing keychain, break app sandboxes, and bypass its App Store security checks.

Attackers can steal passwords from installed apps, including the native email client, without being detected, by exploiting these bugs.

The team was able to upload malware to the Apple app store, passing the vetting process without triggering alerts. That malware, when installed on a victim's device, raided the keychain to steal passwords for services including iCloud and the Mail app, and all those stored within Google Chrome.

Lead researcher Luyi Xing told El Reg he and his team complied with Apple's request to withhold publication of the research for six months, but had not heard back as of the time of writing.

They say the holes are still present in Apple's software, meaning the research will likely be seized upon by attackers looking to weaponize it.

Apple was not available for immediate comment.

The Indiana University boffins Xing; Xiaolong Bai; XiaoFeng Wang; and Kai Chen joined Tongxin Li, of Peking University, and Xiaojing Liao, of Georgia Institute of Technology, to develop the research, which is detailed in a paper titled Unauthorized Cross-App Resource Access on Mac OS X and iOS.

"Recently we discovered a set of surprising security vulnerabilities in Apple's Mac OS and iOS that allows a malicious app to gain unauthorised access to other apps' sensitive data such as passwords and tokens for iCloud, Mail app and all web passwords stored by Google Chrome," Xing told The Register's security desk.

"Our malicious apps successfully went through Apple’s vetting process and was published on Apple’s Mac app store and iOS app store.

"We completely cracked the keychain service - used to store passwords and other credentials for different Apple apps - and sandbox containers on OS X, and also identified new weaknesses within the inter-app communication mechanisms on OS X and iOS which can be used to steal confidential data from Evernote, Facebook and other high-profile apps."

The team was able to raid banking credentials from Google Chrome on the latest OS X 10.10.3, using a sandboxed app to steal the system's keychain data and secret iCloud tokens, and passwords from password vaults.

Photos were stolen from WeChat, and the token for popular cloud service Evernote was nabbed, allowing it to be fully compromised.

"The consequences are dire," the team wrote in the paper.

Some 88.6 per cent of 1,612 OS X and 200 iOS apps were found "completely exposed" to unauthorized cross-app resource access (XARA) attacks allowing malicious apps to steal otherwise secure data.

Xing says he reported the flaws to Apple in October 2014.

Apple security bods responded to the researchers in emails seen by El Reg expressing understanding for the gravity of the attacks, and asked for at least six months to fix the problems. In February, the Cupertino staffers requested an advance copy of the research paper.

Google's Chromium security team was more responsive, and removed keychain integration for Chrome, noting that it could likely not be solved at the application level.

AgileBits, owner of popular software 1Password, said it could not find a way to ward off the attacks nor make the malware "work harder" some four months after it was warned of the vulnerabilities. ("Neither we nor Luyi Xing and his team have been able to figure out a completely reliable way to solve this problem," said AgileBits's Jeffrey Goldberg in a blog post today.)

The team's work into XARA attacks is the first of its kind; Apple's app isolation mechanisms are supposed to stop malicious apps from raiding each other. The researchers found "security-critical vulnerabilities" in cross-app resource-sharing mechanisms and communication channels such as the keychain, WebSocket and Scheme.

"Note that not only does our attack code circumvent the OS-level protection but it can also get through the restrictive app vetting process of the Apple Stores, completely defeating its multi-layer defense," the researchers wrote in the paper.

They say almost all XARA flaws arise from Apple's cross-app resource sharing and communication mechanisms such as keychain for sharing passwords, BID based separation, and URL scheme for app invocation, which is different from how the Android system works.
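The keychain weakness the paper describes is essentially a pre-registration attack: a malicious app claims a keychain item before the victim app creates it, putting itself on the item's access list. The toy model below illustrates the idea only; it is not Apple's keychain API, and all names are made up for the sketch.

```python
# Toy model of the keychain "pre-registration" weakness: if secrets are keyed
# only by service name and the item's creator sets its access list, a malicious
# app can claim an item first and later read the secret a victim stores there.

class ToyKeychain:
    def __init__(self):
        self._items = {}  # service name -> {"secret": ..., "acl": set of app ids}

    def create(self, service, creator, acl):
        # First app to claim a service name controls the access list.
        if service not in self._items:
            self._items[service] = {"secret": None, "acl": set(acl) | {creator}}

    def set_secret(self, service, app, secret):
        # The victim updates the existing item without checking who created it
        # or who else is on the access list -- the crux of the attack.
        self._items[service]["secret"] = secret

    def get_secret(self, service, app):
        item = self._items[service]
        if app not in item["acl"]:
            raise PermissionError(app)
        return item["secret"]

kc = ToyKeychain()
# The malicious app registers the iCloud item *before* the victim does.
kc.create("icloud-token", creator="mallory", acl={"mallory"})
# The victim app later stores its token into the pre-registered item...
kc.set_secret("icloud-token", app="icloud", secret="s3cret-token")
# ...and the malicious app, already on the access list, reads it out.
assert kc.get_secret("icloud-token", app="mallory") == "s3cret-token"
```

The fix the researchers argue for is for the OS to verify item ownership, so an app's secrets cannot land in a container another app pre-created.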

Such research had previously been confined to Android; the team says its work will open a new line of inquiry for the security community into how these vulnerabilities affect Apple and other platforms.

Here's the boffins' description of their work:

“Our study brings to light a series of unexpected, security-critical flaws that can be exploited to circumvent Apple's isolation protection and its App Store's security vetting. The consequences of such attacks are devastating, leading to complete disclosure of the most sensitive user information (e.g., passwords) to a malicious app even when it is sandboxed.

Such findings, which we believe are just a tip of the iceberg, will certainly inspire the follow-up research on other XARA hazards across platforms. Most importantly, the new understanding about the fundamental cause of the problem is invaluable to the development of better app isolation protection for future OSes.”

In-depth technical details are available in the aforementioned paper.
http://www.theregister.co.uk/2015/06...ch_blitzkrieg/





Chinese Hackers Circumvent Popular Web Privacy Tools
Nicole Perlroth

Chinese hackers have found a way around widely used privacy technology to target the creators and readers of web content that state censors have deemed hostile, according to new research.

The hackers were able to circumvent two of the most trusted privacy tools on the Internet: virtual private networks, or VPNs, and Tor, the anonymity software that masks a computer’s true whereabouts by routing its Internet connection through various points around the globe, according to findings by Jaime Blasco, a security researcher at AlienVault, a Silicon Valley security company.

Both tools are used by Chinese businesses and by millions of citizens to bypass China’s censorship technology, often called the Great Firewall, and to make their web activities unreadable to state snoopers.

The attackers compromised websites frequented by Chinese journalists as well as China’s Muslim Uighur ethnic minority, Mr. Blasco discovered last week.

As long as visitors to those websites were also logged into one of 15 Chinese Internet portals — including those run by Baidu, Alibaba and RenRen — the hackers were able to steal names, addresses, sex, birth dates, email addresses, phone numbers and even the so-called Internet cookies that track other websites viewed by a user.

To get around the Tor and VPN technology, the attackers relied on a server software vulnerability that China’s top companies apparently didn’t patch, Mr. Blasco said.

While Mr. Blasco and others have not been able to pinpoint the identity of the hackers, the list of targets and the sophistication of the attacks suggest they may have been directed by the Chinese government.

“Who else could be potentially interested in this information and go to such lengths? Who else would want to know who was visiting Uighur websites and reporters’ websites inside China?” Mr. Blasco said in an interview. “There’s no financial gain from targeting these sites.”

Since taking power in late 2012, President Xi Jinping has shown a personal interest in how the Internet is managed, by creating and leading a committee responsible for Internet governance.

He has also given broad powers to the newly formed Cyberspace Administration of China, which has in turn targeted Internet celebrities who influence online opinion, increased blocks on foreign websites and sought to project China’s influence over the Internet internationally.

In the last few months, the Chinese government has blocked sales of VPN services and disabled their protocols. It also hijacked Internet traffic flowing to Baidu, China’s biggest Internet company, using it to overwhelm and knock down websites like GitHub that carry content China’s censors deem hostile, including content from The New York Times.

Activists and security experts advised Chinese Internet users to protect themselves from state-sponsored surveillance by using Tor and VPNs, and foreigners inside China have long done so. But Mr. Blasco’s discovery suggests that Beijing’s Internet censors have found a way to render those tools useless.

“There’s a growing sense within China that widely used VPN services that were once considered untouchable are now being touched,” said Nathan Freitas, a fellow at the Berkman Center for Internet and Society at Harvard and technical adviser to the Tibet Action Institute.

The Cyberspace Administration of China did not return requests for comment.

Mr. Blasco said the Uighur and press-related sites had been compromised with a “watering hole attack” in which attackers find a way to hide malicious code in websites frequented by their targets and then wait for their victims to come to them. Once people visit those sites, that code gets injected into their web browsers.

The technique has been used by governments and hackers for surveillance and to steal passwords.

What made the attacks particularly serious, Mr. Blasco said, was that as long as the victims were logged into China’s 15 top web services — including major portals like Baidu, Taobao, QQ, Sina, Sohu, Ctrip and RenRen — the attackers could identify them and siphon off their personal digital information, even if the victims were using Tor or a VPN.

They did this with the aid of a particularly serious vulnerability that 15 web services in China apparently never patched.

The vulnerability, which involves a data-sharing technique known as JSONP (JSON with Padding), is not new. It was publicized in a Chinese security and web forum around 2013, about the same time that forensic evidence suggests that attackers used it to target Muslim Uighur websites and nongovernmental organizations’ sites, Mr. Blasco said.

By not patching this hole, Mr. Blasco said, major web portals like Baidu and Taobao, a subsidiary of Alibaba, effectively neutered the only privacy protections available to web users inside China.
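
Mechanically, a JSONP endpoint returns executable JavaScript: it wraps the logged-in user's data in whatever callback name the requester supplies. Here is a server-side sketch in Python — endpoint and field names invented for illustration — of why that leaks identity to any page the victim visits:

```python
import json

def jsonp_response(callback: str, user: dict) -> str:
    """What a vulnerable JSONP endpoint returns: the caller names the
    function, and the portal wraps the *current session's* profile in it."""
    return f"{callback}({json.dumps(user)})"

# An attacker's watering-hole page injects something like
#   <script src="https://portal.example/profile?callback=leak"></script>
# Script tags carry the victim's portal cookies, so the portal answers
# with the victim's own profile -- and the attacker's `leak` function
# runs with that data, regardless of any Tor or VPN tunnel in between.
body = jsonp_response("leak", {"name": "victim", "email": "v@example.com"})
print(body)  # leak({"name": "victim", "email": "v@example.com"})
```

The tunnel hides the victim's IP address, but the portal cookies ride along with every script request, which is why Tor and VPNs offered no protection here.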

“The equivalent would be if law enforcement was able to exploit a serious vulnerability in Facebook to deanonymize users of Tor and VPNs in the United States,” Mr. Blasco said. “You would assume Facebook would fix that pretty fast.”

It is not clear, given the severity of the vulnerability and its discovery some two years ago, why so many of China’s top web portals did not fix it.

A Baidu spokesman said the company did try to deal with the problem.

“To the best of our knowledge, our earlier efforts were successful in preventing any serious leak of personal use data. But in light of this further information, we have decided to implement a more aggressive and thorough fix across Baidu for the JSONP vulnerability,” the spokesman said.

A spokesman for Alibaba also said the company was now moving to deal with the problem. “Alibaba Group takes data security seriously and we do everything possible to protect our users,” said Robert Christie, vice president of international media at Alibaba.

“Many companies in our space have faced this issue, and once we discovered this issue, we moved swiftly to address it. We have found no evidence that any user information has been compromised,” he said.

Researchers say the complexity of the attack and the lack of digital fingerprints indicate that someone with significant influence had to have been directing it. Otherwise, “there must be a cybercriminal out there with pretty significant access to China’s Internet infrastructure,” said Mr. Freitas.

Paul Mozur contributed reporting from Hong Kong.
http://www.nytimes.com/2015/06/13/te...acy-tools.html





Sex, Lies and Debt Potentially Exposed by U.S. Data Hack
Arshad Mohammed and Joseph Menn

When a retired 51-year-old military man disclosed in a U.S. security clearance application that he had a 20-year affair with his former college roommate's wife, it was supposed to remain a secret between him and the government.

The disclosure last week that hackers had penetrated a database containing such intimate and possibly damaging facts about millions of government and private employees has shaken Washington.

The hacking of the U.S. Office of Personnel Management (OPM), the federal government's personnel agency, could provide a treasure trove for foreign spies.

The military man's affair, divulged when he got a job with a defense contractor and applied to upgrade his clearance, is just one example of the extensive potential for disruption, embarrassment and even blackmail arising from the hacking.

The man had kept the affair secret from his wife for two decades before disclosing it on the government's innocuously named Standard Form 86 (SF 86), filled out by millions of Americans seeking security clearances.

His case is described in a judge's ruling, published on the Pentagon website, that he should keep his security clearance because he told the government about the affair. His name is not given in the administrative judge's decision.

The disclosure that OPM's data had been hacked sent shivers down the spines of current and former U.S. government officials as they realized their secrets about sex, drugs and money could be in the hands of a foreign government.

The data that may be compromised by the incident, which was first reported by the Associated Press, included the detailed personal information on the SF 86 "QUESTIONNAIRE FOR NATIONAL SECURITY POSITIONS," according to U.S. officials.

U.S. SUSPECTS LINK TO CHINA

As with another cyberattack on OPM disclosed earlier this month, U.S. officials suspect it was linked to China, though they have less confidence about the origins of the second attack than about the first.

China denies any involvement in hacking U.S. databases.

While the Central Intelligence Agency does its own clearance investigations, agencies such as the State Department, Defense Department and National Security Agency, which eavesdrops on the world, all use OPM's services to some degree.

Intelligence veterans said the breach may prove disastrous because China could use it to find relatives of U.S. officials abroad as well as evidence of love affairs or drug use which could be used to blackmail or influence U.S. officials.

An even worse scenario would be the mass unmasking of covert operatives in the field, they said.

"The potential loss here is truly staggering and, by the way, these records are a legitimate foreign intelligence target," said retired Gen. Michael Hayden, a former CIA and NSA director. "This isn't shame on China. This is shame on us."

The SF 86 form, which is 127 pages long, is extraordinarily comprehensive and intrusive.

Among other things, applicants must list where they have lived; contacts with foreign citizens and travel abroad; the names and personal details of relatives; illegal drug use and mental health counseling except in limited circumstances.

A review of appeals of security denials published on the web shows the variety of information now in possession of the hackers, including financial troubles, infidelities, psychiatric diagnoses, substance abuse, health issues and arrests.

"It's kind of scary that somebody could know that much about us," said a former senior U.S. diplomat, pointing out the ability to use such data to impersonate an American official online, obtain passwords and plunder bank accounts.

SOME AGENCIES LESS VULNERABLE

A U.S. official familiar with security procedures, but who declined to be identified, said some agencies do not use OPM for clearances, meaning their employees' data was at first glance less likely to have been compromised.

However, the former senior diplomat said someone with access to a complete set of SF 86 forms and to the names of officials at U.S. embassies, which are usually public, could compare the two and make educated guesses about who might be a spy.

"Negative information is an indicator just as much as a positive information," said the former diplomat.

The case of the 51-year-old former military man who told the government, but not his wife, about his 20-year affair came to light when he filed an appeal because his effort to upgrade his security clearance ran into trouble.

According to a May 13 decision by an administrative judge who heard his case, the man revealed the affair in the "Additional Comments" section of SF 86 in January 2012, ended the affair in 2013, and told his wife about it in 2014.

"DOD (Department of Defense) is aware of the affair because Applicant disclosed it on his SF 86; the affair is over; and the key people in Applicant’s life are aware of it," the judge wrote, according to a Defense Office of Hearings and Appeals document posted online.

His access to classified information was approved.

(Reporting by Arshad Mohammed in Washington and Joseph Menn in San Francisco; Additional reporting by Mark Hosenball; Editing by David Storey and Sue Horton)
http://uk.reuters.com/article/2015/0...0OV0CC20150615





FBI, While Hating On Encryption, Starts Encrypting All Visits To Its Website
Mike Masnick

Last week, the Wikimedia Foundation announced that it was moving to encrypting access to all Wikipedia sites via HTTPS. This was really big news, and a long time coming. Wikipedia had been trying to move in this direction for years with fairly slow progress -- in part because some in the Wikimedia community had an irrational dislike of HTTPS. Thankfully, the Wikimedia Foundation pushed forward anyway, recognizing that the privacy of what you're browsing can be quite important.

And yet, I don't think that was the most significant website shift to HTTPS-by-default in the last week. Instead, that honor has to go to... [drumroll please]... FBI.gov. No, seriously. This may surprise you. After all, this is the very same FBI that just a couple of weeks ago had its assistant director Michael Steinbach tell Congress that companies needed to "prevent encryption above all else." Really. And it's the same FBI whose director has been deliberately scaremongering about the evils of encryption. The same director who insisted the world's foremost cybersecurity experts didn't understand when they told him that his plan to backdoor encryption was bonkers. The very same FBI who used to recommend mobile encryption to keep your data safe, but quietly deleted that page (the FBI claims it was moved to another site, but...).

But that very same FBI that has spent the past few months disparaging encryption at every opportunity apparently went over to Cloudflare and had the company help it get HTTPS set up. No joke.
The FBI.gov site now automatically pushes you to an encrypted connection. Because, no matter what the FBI says, encryption is good. And the FBI's techies know that.
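
The "push" to an encrypted connection is a standard pattern: answer any plain-HTTP request with a permanent redirect to the HTTPS URL, plus an HSTS header so browsers skip the insecure hop next time. A minimal sketch with a hypothetical hostname — not the FBI's or Cloudflare's actual configuration:

```python
def redirect_to_https(host: str, path: str) -> tuple:
    """Return the status code and headers an HTTP listener would send
    to push a visitor onto the encrypted version of the same URL."""
    return 301, {
        "Location": f"https://{host}{path}",
        # HSTS tells returning browsers to go straight to HTTPS:
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    }

status, headers = redirect_to_https("www.example.gov", "/news")
print(status, headers["Location"])  # 301 https://www.example.gov/news
```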

Remember how, just last week, the US CIO announced that all federal government websites would be moving to HTTPS? Well, thankfully, the CIO's office is also tracking how well each site is doing. As of yesterday, its dashboard still flagged FBI.gov as not enforcing HTTPS; now it lists the site as enforcing HTTPS by default. (If you're interested, you can see the pull request at Github that has the change as well.)

Either way, kudos to the FBI for letting us encrypt our connections. Now, please don't get in the way of us encrypting our data as well.
https://www.techdirt.com/articles/20...-website.shtml





Encryption “Would Not Have Helped” at OPM, Says DHS Official

Attackers had valid user credentials and the run of the network, bypassing security.
Sean Gallagher

During testimony today in a grueling two-hour hearing before the House Oversight and Government Reform Committee, Office of Personnel Management (OPM) Director Katherine Archuleta claimed that she had recognized huge problems with the agency's computer security when she assumed her post 18 months ago. But when pressed on why systems had not been protected with encryption prior to the recent discovery of an intrusion that gave attackers access to sensitive data on millions of government employees and government contractors, she said, "It is not feasible to implement on networks that are too old." She added that the agency is now working to encrypt data within its networks.

But even if the systems had been encrypted, it likely wouldn't have mattered. Department of Homeland Security Assistant Secretary for Cybersecurity Dr. Andy Ozment testified that encryption would "not have helped in this case" because the attackers had gained valid user credentials to the systems that they attacked—likely through social engineering. And because of the lack of multifactor authentication on these systems, the attackers would have been able to use those credentials at will to access systems from within and potentially even from outside the network.
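
Multifactor authentication blunts stolen credentials because the second factor rotates every few seconds. As an illustration of the principle — this is the standard TOTP scheme from RFC 6238, not a description of OPM's actual systems — a server can verify a rotating code that an attacker holding only the password cannot produce:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the current 30s time step."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the MAC.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32); at t=59s the
# expected 6-digit code is 287082. A stolen password alone can't derive it.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # 287082
```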

House Oversight Chairman Jason Chaffetz (R-Utah) told Archuleta and OPM Chief Information Officer Donna Seymour, "You failed utterly and totally." He referred to OPM's own inspector general reports and hammered Seymour in particular for the 11 major systems out of 47 that had not been properly certified as secure—which were not contractor systems but systems operated by OPM's own IT department. "They were in your office, which is a horrible example to be setting," Chaffetz told Seymour. In total, 65 percent of OPM's data was stored on those uncertified systems.

Chaffetz pointed out in his opening statement that for the past eight years, according to OPM's own Inspector General reports, "OPM's data security posture was akin to leaving all your doors and windows unlocked and hoping nobody would walk in and take the information."

When Chaffetz asked Archuleta directly about the number of people who had been affected by the breach of OPM's systems and whether it included contractor information as well as that of federal employees, Archuleta replied repeatedly, "I would be glad to discuss that in a classified setting." That was Archuleta's response to nearly all of the committee members' questions over the course of the hearing this morning.

At least we found it

Archuleta told the committee that the breach was found only because she had been pushing forward with an aggressive plan to update OPM's security, centralizing the oversight of IT security under the chief information officer and implementing "numerous tools and capabilities." She claimed that it was during the process of updating tools that the breach was discovered. "But for the fact that OPM implemented new, more stringent security tools in its environment, we would have never known that malicious activity had previously existed on the network and would not have been able to share that information for the protection of the rest of the federal government," she read from her prepared statement.

Inertia, a lack of internal expertise, and a decade of neglect at OPM led to the breach.

Dr. Ozment reiterated that when the malware activity behind the breach was discovered, "we loaded that information into Einstein (DHS' government-wide intrusion detection system) immediately. We also put it into Einstein 3 (the intrusion prevention system currently being rolled out) so that agencies protected by it would be protected from it going forward."

But nearly every question of substance about the breach—which systems were affected, how many individuals' data was exposed, what type of data was accessed, and the potential security implications of that data—was deferred by Archuleta on the grounds that the information was classified. What wasn't classified was OPM's horrible track record on security, which dates back at least to the George W. Bush administration—if not further.

A history of neglect

During his opening statement, Chaffetz read verbatim from a 2009 OPM inspector general report that noted, "The continuing weakness in OPM information security program results directly from inadequate governance. Most if not all of the [information security] exceptions we noted this year result from a lack of leadership, policy, and guidance." Similar statements were read from 2010 and 2012 reports, each more dire than the last. The OPM Office of the Inspector General only began upgrading its assessment of the agency's security posture in its fiscal year 2014 report—filed just before news of a breach at a second OPM background investigation contractor surfaced.

Rep. Will Hurd (R-Texas), a freshman member of Congress, told the OPM executives and the other witnesses—DHS' Ozment, Interior Department CIO Sylvia Burns, the new US CIO Tony Scott, and OPM Assistant Inspector General Michael Esser— that "the execution on security has been horrific. Good intentions are not good enough." He asked Seymour pointedly about the legacy systems that had not been adequately protected or upgraded. Seymour replied that some of them were over 20 years old and written in COBOL, and they could not easily be upgraded or replaced. These systems would be difficult to update to include encryption or multi-factor authentication because of their aging code base, and they would require a full rewrite.

Personnel systems have often been treated with less sensitivity about security by government agencies. Even health systems have had issues, such as the Department of Veterans' Affairs national telehealth program, which was breached in December of 2014. And there have been two previous breaches of OPM background investigation data through contractors—first the now-defunct USIS in August of last year, and then KeyPoint Government Solutions less than four months later. Those breaches included data about both government employees and contractors working for the government.

But some of the security issues at OPM fall on Congress' shoulders—the breaches of contractors in particular. Until recently, federal agents carried out background investigations for OPM. Then Congress cut the budget for investigations, and they were outsourced to USIS, which, as one person familiar with OPM's investigation process told Ars, was essentially a company made up of "some OPM people who quit the agency and started up USIS on a shoestring." When USIS was breached and most of its data (if not all of it) was stolen, the company lost its government contracts and was replaced by KeyPoint—"a bunch of people on an even thinner shoestring. Now if you get investigated, it's by a person with a personal Gmail account because the company that does the investigation literally has no IT infrastructure. And this Gmail account is not one of those where a company contracts with Google for business services. It is a personal Gmail account."

Some of the contractors that have helped OPM with managing internal data have had security issues of their own—including potentially giving foreign governments direct access to data long before the recent reported breaches. A consultant who did some work with a company contracted by OPM to manage personnel records for a number of agencies told Ars that he found the Unix systems administrator for the project "was in Argentina and his co-worker was physically located in the [People's Republic of China]. Both had direct access to every row of data in every database: they were root. Another team that worked with these databases had at its head two team members with PRC passports. I know that because I challenged them personally and revoked their privileges. From my perspective, OPM compromised this information more than three years ago and my take on the current breach is 'so what's new?'"

Given the scope and duration of the data breaches, it may be impossible for the US government to get a handle on the exact extent of the damage done just by the latest attack on OPM's systems. If anything is clear, it is that the aging infrastructure of many civilian agencies in Washington magnifies the problems the government faces in securing its networks, and OPM's data breach may just be the biggest one that the government knows about to date.
http://arstechnica.com/security/2015...-dhs-official/





'Anonymous' Says it Cyberattacked Federal Government to Protest Bill C-51

No personal or sensitive government information compromised, public safety minister says

The online hacker group Anonymous has claimed responsibility for a cyberattack on federal government websites, in protest against the recent passing of the government's anti-terror Bill C-51.

"Today, Anons around the world took a stand for your rights," the group wrote Wednesday afternoon in an online post

"Do we trade our privacy for security? Do we bow down and obey what has become totalitarian rule? Don't fool [yourselves]. The Harper regime does not listen to the people, it acts only in [its] best interests."

A number of federal government websites appear to be back online after the brief blackout, including websites for the Senate, the Justice Department and Canada's spy agencies, CSEC and CSIS.

However, it's unclear whether the attacks have stopped, as government websites seem to be flashing on and offline intermittently.

Public Safety Minister Steven Blaney said at no point was personal information or sensitive government information compromised.

Attackers have to face 'full force of the law'

"We are increasing our resources and polices to be better equipped to face cyberattacks, whether they are coming from hackers from a group, potentially, that has said they did it today, [or] state-sponsored or terrorist entities."

"Let's be clear. We are living in a democracy and there are many ways you can express your views in the country," Blaney said, addressing Anonymous's claim of responsibility.

"There are no excuses to justify an attack to public property and those that have committed those attacks will be prosecuted and will have to face the full force of the law."

Government employees have also reportedly had problems accessing email.

Internet access and "information technology assets" were also affected, according to a statement by Dave Adamson, acting chief information officer at the Treasury Board.

Denial of service attack

The government's servers were hit with a denial of service attack, the statement reads.

Treasury Board president Tony Clement earlier urged users to call 1-800-OCanada for help until full service is restored.

When CBC News phoned this number, the service operator was unaware of the interruption.

"Public Safety and of course Shared Services Canada are working to restore service," said Clement. "But in the meantime, we're working very diligently to restore services as soon as possible and to find out the origination of the attack."

A CBC News query to Shared Services Canada was not immediately answered.
http://www.cbc.ca/news/politics/anon...c-51-1.3117360





DuckDuckGo on CNBC: We’ve Grown 600% Since NSA Surveillance News Broke

The privacy-minded search engine is now doing three billion searches a year.
Juliana Reyes

DuckDuckGo has exploded in popularity since the federal government’s surveillance program came to light two years ago. Remember the privacy-minded search engine’s best week ever?

The service has grown 600 percent since then, DuckDuckGo CEO Gabe Weinberg said on CNBC.

“We’re doing about three billion searches a year,” Weinberg said, “so we’re already pretty mainstream.” (Tell ’em, Gabe.)

The Firefox and Safari browsers also made DuckDuckGo available as a built-in search option last year.

Watch the CNBC clip below. The news anchor just can’t resist a little jab about DuckDuckGo’s location choice.
https://technical.ly/philly/2015/06/16/duckduckgo-cnbc/





Consumer Groups Back Out of Federal Talks on Face Recognition
Natasha Singer

A central component of President Obama’s effort to give consumers more control over how companies collect and share their most sensitive personal details has run aground.

Nine civil liberties and consumer advocate groups announced early Tuesday morning that they were withdrawing from talks with trade associations over how to write guidelines for the fair commercial use of face recognition technology for consumers.

In the last 16 months, the two sides had been meeting periodically under the auspices of the National Telecommunications & Information Administration, a division of the Commerce Department. But the privacy advocates said they were giving up on talks because they could not achieve what they consider minimum rights for consumers — the idea that companies should seek and obtain permission before employing face recognition to identify individual people on the street.

“At a base minimum, people should be able to walk down a public street without fear that companies they’ve never heard of are tracking their every movement — and identifying them by name — using facial recognition technology,” the privacy and consumer groups said in a statement. “Unfortunately, we have been unable to obtain agreement even with that basic, specific premise.”

The advocates included: American Civil Liberties Union; Center for Democracy & Technology; Center for Digital Democracy; Alvaro M. Bedoya, the executive director of the Center on Privacy & Technology at Georgetown University Law Center; Consumer Action; Consumer Federation of America; Consumer Watchdog; Common Sense Media; and Electronic Frontier Foundation.

Juliana Gruenwald, an N.T.I.A. spokeswoman, said the telecommunications agency was disappointed that some participants had pulled out of the face recognition discussions.

“The process is the strongest when all interested parties participate and are willing to engage on all issues,” Ms. Gruenwald wrote in an email. The agency, she added, “will continue to facilitate meetings on this topic for those stakeholders who want to participate.”

With or without the consumer advocates, the participants intend to continue trying to develop a workable code of conduct for facial recognition privacy, said Carl Szabo, policy counsel for NetChoice, an e-commerce trade association.

“We think we can reach consensus on transparency, notice, data security and giving users meaningful control over the sharing of their facial recognition information with anyone who otherwise would not have access,” Mr. Szabo said in an email.

In 2012, the Obama administration published a plan for a consumer privacy bill of rights. Among other things, the report called for the Commerce Department to convene a series of “multi-stakeholder processes” in which trade and advocacy groups were to create industry codes of conduct for the use of drones, data-mining by mobile apps and other consumer-tracking technologies.

The withdrawal on Tuesday puts the viability of these multiparticipant negotiations into question.

“I would say that no one’s privacy is better off as a result,” Mr. Bedoya of the Center on Privacy & Technology said.

Face recognition is a subset of biometrics, a technology that involves recording and analyzing people’s unique physiological characteristics, like their fingerprint ridges or facial features, to learn or confirm their identities. Face recognition technology works by scanning a photo or video still of an unknown face and comparing its unique topography against a facial-scan database of people whose names are already known.
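
That comparison step amounts to nearest-neighbor matching over numeric face templates. A toy illustration — the names, vectors, and threshold below are made up, standing in for real facial-scan embeddings produced by a recognition model:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two template vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def identify(probe, gallery, threshold=0.8):
    """Return the enrolled name whose template best matches the probe,
    or None if no match clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Tiny stand-in "database" of enrolled facial templates:
gallery = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.95, 0.3]}
print(identify([0.88, 0.12, 0.19], gallery))  # alice
```

The privacy concern follows directly from the structure: anyone holding a large enough gallery of named templates can run `identify` on a face captured covertly.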

Because the technology can be used covertly, civil liberties advocates say its popularization has the potential to undermine people’s ability to conduct their personal business anonymously in stores, hotels and other public spaces. That is one reason that Texas and Illinois have passed state laws requiring companies to notify people and obtain their permission before taking facial scans or sharing their biometric information.

Mr. Bedoya said consumer advocates were troubled by the possibility that the federally convened face recognition discussions could end up endorsing an industry code of conduct that undermined those state laws.

“The message sent is clear,” he said in an email. “If you are a consumer, and you want better privacy laws, you should call your state legislator and head to your state capitol. Just don’t come to Washington, D.C.”
http://bits.blogs.nytimes.com/2015/0...e-recognition/





Remote Mass. Towns Welcome Broadband’s Arrival
Jack Newsham

When people think of the Massachusetts tech economy, they probably don’t think of Leverett, a town of 1,900 built into the woods about 10 miles north of Amherst.

That could change.

Since April, this Western Massachusetts community has steadily connected homes, businesses, and town offices to a municipal fiber-optic network offering broadband services that have long bypassed this part of the state.

Today, software developer Al Nutile can video-conference and write code simultaneously with a project team scattered across three continents from his hillside home.

Carter Wall, who lives on a dirt road, can download big data files from the solar power installations she monitors. At the local cafe, people can bring iPads and get blazing-fast Wi-Fi instead of waiting their turn to surf the net at a dusty, old desktop with a satellite connection that cuts out in bad weather.

By the end of this month, Leverett will have linked every home in town to broadband. Nearby communities are not far behind in bringing broadband to their residents; they see high-speed Internet as an economic boon akin to rural electrification in the 1930s, one that could bring higher home values, better business climates, and easier access to the modern economy.

These new connections are the culmination of an eight-year, $90 million effort by the state to build a “backbone” of fiber-optic data transmission lines across Western Massachusetts.

The network, financed with state and federal stimulus money, will extend broadband to 45 isolated towns where 40 percent of homes have no Internet access and the rest are relegated to dial-up, DSL, and satellite connections operating at a fraction of speeds available in Eastern Massachusetts.

State and federal economic development officials view access to high-speed Internet as a way to boost rural economies, where traditional industries such as farming, forestry, and paper making have declined, and connect them to 21st-century services, such as online education and telemedicine.

With private telecommunication companies unable to profitably extend their networks to sparsely populated areas, state, federal, and local governments have stepped in.

In Leverett, for example, voters in 2012 approved borrowing $3.6 million — nearly $1,900 per resident — for the town to lay fiber lines to 800 premises and connect to the main trunk built by the state.

“More and more communities understand that high-speed wired Internet access represents critical infrastructure right up there with telephone and roads,” said David Talbot, a fellow at the Berkman Center for Internet & Society at Harvard University. “Community networks are often seen as a way to advance economic development, attract high-tech businesses, cut municipal costs, and bring competition to the market.”

The Massachusetts Broadband Institute, a quasi-public agency overseeing the Western Massachusetts project, estimates that completing the so-called last mile of fiber that connects end users will cost an additional $112 million. About $50 million will be funded by the state, and the rest from local funds.

About 32 towns have banded together in a group called Wired West, which is working with the broadband institute to build and operate fiber lines to individual buildings.

In recent months, about 19 of Wired West’s towns have approved borrowing a total of $30 million, according to Monica Webb, chairwoman of the collaborative. About five more will vote in a few months, she said. All the local funding must be approved by June 2016 to qualify for state money.

Before construction starts in each town, 40 percent of its residents must sign up for service, which starts at $49 a month for speeds similar to Comcast’s basic offering in Boston and rises to $109 for a version that is 40 times faster.

When the fiber networks are completed in the next two to five years, the towns will own them and Wired West, a municipally owned entity, will operate them.

Leverett has contracted a private company to provide Internet service, which will cost subscribers $65 a month. That’s about the same as Comcast and Verizon FiOS customers pay in Greater Boston, but the speeds in Leverett are about 10 times faster.

Residents already are witnessing the economic potential of broadband.

Now that her Leverett home is hooked up to the fiber network, Wall has sold her house in Medford, where she lived half the time because her job as a solar energy consultant often required higher speeds than she could get in Leverett.

She’s taking the $170,000 gain she made from the sale and splitting it between her retirement fund and a nonprofit she recently cofounded, the Future Face of American Energy, which will match women and people of color with internships in the energy industry.

In New Marlborough, a Wired West town, filmmaker Douglas Trumbull has already hooked up to the fiber backbone, paying the cost of laying fiber from his studio.

Trumbull is developing a high-resolution, high-frame-rate viewing technology called Magi. Because Internet connections were too slow, he has had to build his own server farm to process the images and mail hard drives to studios and other clients.

He now expects to cut his costs by using cloud computing services to produce and share video.

Gerald Jones, the owner of Jones Group Realtors in Amherst, sells houses in the communities targeted by the state’s broadband initiative.

He said he has given financial support to the broadband committee in Shutesbury and his agents have advised clients in Shutesbury and elsewhere to vote in support of building the networks.

He said a growing number of homebuyers refuse to consider properties that can’t get high-speed Internet.

“It’s gotten more and more important as time has gone on,” Jones said. “It’s as if you had a town with no school system and you were trying to sell a house to someone with kids.”

The completion of the broadband “backbone” may also spur private investment.

Adam Chait, the owner of a small Internet service provider in Monterey, in the southwest corner of the state, said he is raising money to build a network connecting 1,000 homes in Berkshire County as a test to show that private companies can profitably serve remote areas.

He declined to estimate how much he’ll need.

Local officials admit the work of building broadband networks has been complicated, and some benefits, such as new businesses, could take years to appear.

Leverett has created Web pages and information packets to help educate residents about key parts of their new system.

Eric Nakajima, the head of the Massachusetts Broadband Institute, said his group’s priority is making sure that towns’ broadband systems are well-maintained and don’t run out of money.

“It is definitely early days,” Nakajima said. “There isn’t really a test case right now in the region or in the state where we can look and say, ‘Let’s evaluate.’ ”
http://www.betaboston.com/news/2015/...bands-arrival/





TWC ‘Well-Positioned’ to Bring 1-Gig Across L.A.

City Seeking Partners to Deliver 1-Gig, WiFi, Free Internet Tier
Jeff Baumgartner

Time Warner Cable said it is “well-positioned” to bring speeds of 1 Gbps across its Los Angeles footprint in the wake of a request for participants (RFP) issued by the city last week.

TWC said it will be able to hit those speeds across the city, rather than just in individual neighborhoods, as DOCSIS 3.1 technology begins to mature. D3.1, which the cable industry is promoting under the “Gigasphere” consumer brand, will enable cable operators to deliver multi-gigabit speeds on their hybrid fiber/coax networks.

“As Gigasphere technology is introduced, we are well-positioned to deliver residential Internet speeds of up to 1 Gigabit per second throughout our entire LA footprint—not just in a few neighborhoods—just as we said we would when we participated in the City’s RFI process.”

In July 2014, TWC said it was participating in a request for information about bringing 1-gig to L.A. residences, businesses and city government facilities.

Last week the Los Angeles City Council approved a request for participants that seeks to identify one or more providers to commit to deploying wireline and WiFi networks that can provide speeds of 1 Gbps, and complete the job within the next five years. That project, called CityLinkLA, is allowing interested participants to bid on one or more quadrants.

In addition to providing 1-Gig wireline speeds, CityLinkLA’s requirements also call for WiFi that delivers 5 Mbps to every connected device (with sufficient backhaul for 200 simultaneous users at 5 Mbps down by 1 Mbps up), and a free wireline service that delivers at least 5 Mbps/1 Mbps.
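Those per-user figures imply roughly a gigabit of aggregate downstream backhaul per hotspot. A quick back-of-envelope sketch, using only the numbers quoted from the RFP (variable names are illustrative):

```python
# Back-of-envelope check of the CityLinkLA WiFi backhaul requirement:
# 200 simultaneous users, each guaranteed 5 Mbps down / 1 Mbps up.
users = 200
down_mbps_per_user = 5
up_mbps_per_user = 1

backhaul_down = users * down_mbps_per_user  # aggregate downstream, in Mbps
backhaul_up = users * up_mbps_per_user      # aggregate upstream, in Mbps

print(backhaul_down, backhaul_up)  # 1000 200
```

That works out to about 1 Gbps down and 200 Mbps up per fully loaded hotspot, before any over-subscription a provider might assume.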

In exchange, the city said it will help to accelerate the pace of the buildouts with incentives that include expedited handling of applications for construction, space on some city property for hubs and central offices, and “favorable bulk rates” for access to city street light standards that can be used for WiFi access points.

TWC, which is in the process of being acquired by Charter Communications, noted that it completed its all-digital “TWC Maxx” upgrade in L.A. last year, enabling faster Internet speeds, including a new 300 Mbps (downstream) service and that it continues to expand its community WiFi network.

“As a result, Los Angeles now has one of the fastest and most advanced Internet infrastructures of any city in the nation,” TWC said.

“Of course, we are eager to work with the City to take advantage of new faster permitting processes and to gain access to the city resources that we need to further accelerate our path to gigabit Internet and add even more WiFi hotspots in the places where L.A. people work and play. We look forward to learning more about the process, which we can only assume will be fair and provide a level playing field for all service providers.”

The city, which will entertain demand-based proposals, has set a response date of Nov. 12, 2015.
http://www.multichannel.com/news/tec...ross-la/391400





TWC is About to be Sued for Violating Net Neutrality Rules and Holding Traffic for Ransom
Zach Epstein

The Federal Communications Commission’s new consumer-friendly net neutrality rules just took effect last week, and it looks like our first big lawsuit is already on the verge of being filed. That’s not surprising at all. What is surprising, perhaps, is that the target of the lawsuit isn’t Comcast or Verizon.

According to a new report, Time Warner Cable is about to be slapped with a lawsuit alleging that the ISP charged a video content provider exorbitantly high rates to avoid being throttled.

The FCC and its former cable lobbyist chairman Tom Wheeler shocked us all by proposing new net neutrality rules that were surprisingly fair and pro-consumer. Sure, there may be a loophole or two, but the bottom line is that the new rules prevent the three biggest threats to a fair and open Internet: Paid traffic prioritization, data blocking and bandwidth throttling.

Now, it looks like TWC is about to be accused of holding Internet traffic for ransom, violating the new rules where at least two of those faux pas are concerned.

San Diego-based Commercial Network Services (CNS) owns and operates SunDiegoLive, a website that serves live video streams from webcams. According to a report from The Washington Post, the company is days away from filing a lawsuit alleging that Time Warner Cable held its traffic — and therefore its business — hostage by charging unreasonably high rates to avoid being throttled.

Of course, if a webcam site is unable to deliver smooth video feeds, its business will undoubtedly suffer.

CNS boss Barry Bahrami says TWC’s policies are a “blatant violation” of the FCC’s new net neutrality rules, which took effect this past Friday. “This is not traffic we’re pushing to Time Warner; this is traffic that their paying Internet access subscribers are asking for from us,” he told The Post.

Time Warner Cable responded to the story by stating that it has done nothing wrong, and that CNS’s complaint falls under the much-debated topic of peering.

“TWC’s interconnection practices are not only ‘just and reasonable’ as required by the FCC, but consistent with the practices of all major ISPs and well-established industry standards,” TWC said. “We are confident that the FCC will reject any complaint that is premised on the notion that every edge provider around the globe is entitled to enter into a settlement-free peering arrangement.”
https://bgr.com/2015/06/17/net-neutr...wsuit-twc-cns/





AT&T Just Got Hit with a $100 Million Fine after Slowing Down its ‘Unlimited’ Data
Brian Fung

The Federal Communications Commission slapped AT&T with a $100 million fine Wednesday, accusing the country's second-largest cellular carrier of improperly slowing down Internet speeds for customers who had signed up for "unlimited" data plans.

The FCC found that when customers used up a certain amount of data watching movies or browsing the Web, AT&T "throttled" their Internet speeds so that they were much slower than normal. Millions of AT&T customers were affected by the practice, according to the FCC.

The fine, which AT&T says it will fight, is the largest ever levied by the agency.

AT&T implemented the practice in 2011, prompting thousands of customers to complain to the FCC, according to an agency statement.

By not properly disclosing the policy to consumers who thought they were getting "unlimited" data, the company violated the FCC's rules on corporate transparency, FCC Chairman Tom Wheeler said in a statement.

"Consumers deserve to get what they pay for,” Wheeler said. “Broadband providers must be upfront and transparent about the services they provide. The FCC will not stand idly by while consumers are deceived by misleading marketing materials and insufficient disclosure.”

Many of AT&T’s unlimited customers have 4G LTE service, which typically provides mobile Internet speeds of more than 30 megabits per second. That's roughly 60 times faster than the speeds experienced when AT&T throttled subscribers, who were slowed to speeds equivalent to dial-up, according to a senior FCC official.

But consumers are unlikely to receive any money from the fine, which will go instead to the U.S. Treasury, said the agency official.

AT&T disputed the charges. “The FCC has specifically identified this practice as a legitimate and reasonable way to manage network resources for the benefit of all customers, and has known for years that all of the major carriers use it," the company said in a statement.

"We have been fully transparent with our customers, providing notice in multiple ways and going well beyond the FCC’s disclosure requirements.”

This isn't the first time AT&T's unlimited data policy has landed the company in hot water. The Federal Trade Commission sued the telecom company in October, alleging that 3.5 million users had their Internet service slowed to dial-up speeds an average of 12 days every month.

This comes at a tricky time for AT&T, which is trying to convince regulators to approve its $49 billion acquisition of the nation’s largest satellite TV provider, DirecTV.
http://www.washingtonpost.com/blogs/...nlimited-data/





Verizon Ordered to Finish Fiber Build that it Promised But Didn’t Deliver

NYC says Verizon failed to extend FiOS to all households.
Jon Brodkin

New York City officials today ordered Verizon to complete fiber builds that the company was supposed to finish a year ago. If Verizon doesn't comply, the city can seek financial damages.

"In a 2008 agreement with New York City, Verizon committed to extend its FiOS network to every household across the five boroughs by June 30, 2014," said the announcement of an audit released today by the city's Department of Information Technology and Telecommunications (DoITT).

Verizon's FiOS fiber network delivers Internet, TV, and phone service to areas traditionally served by Verizon's copper landlines and DSL Internet.

“Through a thorough and comprehensive audit, we have determined that Verizon substantially failed to meet its commitment to the people of New York City,” Mayor Bill de Blasio said. “As I’ve said time and again, Verizon must deliver on its obligation to the City of New York and we will hold them accountable.”

The agreement, which gave Verizon a cable television franchise, says NYC may "seek and/or pursue money damages" from Verizon if it fails to deliver on its promises.

Verizon also failed to meet broadband promises in Pennsylvania and New Jersey, but those states let the company off the hook.

Verizon is disputing New York City's findings. Verizon met the requirement to pass all households with fiber, though not all residents can actually buy fiber service, the company says. Verizon last year blamed landlords for delays. It also blamed Hurricane Sandy from October 2012, even though Verizon was still claiming to be "ahead of schedule" in April 2013.

"We indeed have met the requirement to install fiber optics through all five boroughs," a Verizon spokesperson told Ars. "Our $3.5 billion investment and the 15,000 miles of fiber we have built have given New Yorkers added choices and a robust set of advanced, reliable, and resilient services. The challenge we have is gaining access to properties which of course would expand availability. We look forward to working with the City to seek solutions to this issue."

Verizon further said that "it is important to note that it’s not a mere coincidence that the report is made public today, and labor negotiations with our largest union begin on Monday. It’s well known the union has ties to the city administration, and things like this are a familiar union tactic we have seen before." The Communications Workers of America union has blamed Verizon's fiber shortcomings on job cuts.

Verizon has also called complaints about its landline maintenance "meaningless rhetoric and hyperbole from the unions."

The city's audit report said refusal of access by landlords cannot explain the full extent of Verizon's failure to bring fiber to all residents. Property managers interviewed by the city said Verizon has refused to extend service to buildings unless the company was granted exclusive agreements that would shut out other providers.

The city audit report said "Verizon must build facilities on every residential block in the City to comply with its households passed obligations."

"Because Verizon claims to have passed 100 percent of residential premises in the City, Verizon must no longer indicate that cable television service is 'unavailable' at any premises," the report also said. "Instead, Verizon must inform all prospective subscribers that they can place NSIs." The acronym refers to requests for "non-standard installation."

"Verizon must ensure that sufficient staff and resources are deployed in order to complete NSIs not related to refusal of access by landlords of multiple dwellings within the six-month and 12-month deadlines," the audit said.

The city summarized its findings as follows:

Quote:
Verizon has not run fiber throughout enough of the City’s residential neighborhoods to deliver on its commitments. DoITT field inspections confirm that blocks claimed by Verizon as completed in fact lack the equipment necessary to deliver service.

Verizon’s own records indicate that service is “unavailable” at certain residential addresses, despite company claims that it can deliver service to all New Yorkers who want FiOS. In fact, there is evidence of callers being told by Verizon that the company has no plans to bring FiOS to their address. And for prospective customers, details about current and future FiOS availability are unavailable from either Verizon’s customer service representatives or the company’s website.

Verizon has failed to consistently document service requests. Verizon staff admitted to DoITT that they did not record or track inquiries from prospective customers who requested service before fall 2014. This is in direct violation of the franchise agreement, which requires Verizon to track requests for cable service.

Where Verizon has accepted requests for service, it has consistently failed to respond to service requests within the required six- and 12-month timeframes. DoITT’s audit reveals that 75 percent of the more than 40,000 non-standard requests—i.e. requests from buildings that had not previously been wired for FiOS service—that were labeled outstanding as of December 31, 2014, had been outstanding for over a year.

Despite clear requirements in the franchise agreement, Verizon has only tracked complaints from actual subscribers and has not tracked complaints and inquiries from prospective customers. The franchise agreement requires Verizon to keep records of all complaints—with no distinction between current and prospective customers—for six years. However, Verizon’s own complaint procedures, glossary, and interviews reveal that the company only records and tracks complaints of actual paying subscribers, rather than potential subscribers who request service in their neighborhoods.

Verizon failed to cooperate with the City’s audit of FiOS rollout, in violation of its franchise agreement. Verizon initially failed to provide access to the systems used in calculating the status of network build, with access granted five months after the initial request. Throughout the course of the audit, and in violation of its franchise agreement, the company significantly delayed or failed to provide access to various other records, reports, and contracts requested by the City to conduct a full assessment of FiOS implementation.
In a response published as an addendum to the audit, Verizon said it did cooperate with the investigation. "DoITT undertook a freewheeling approach to its investigation—one that demanded direct, unmediated access to databases and systems that hold confidential customer information relating to Non-Cable Services, as well as numerous categories of collateral information that were unrelated to the obligations imposed by the Agreement and therefore outside the scope of verifying Verizon’s compliance with its contractual obligations," Verizon wrote.

The city acknowledged Verizon's response to the audit but said the reply did not materially change its findings.
http://arstechnica.com/business/2015...didnt-deliver/





An Early Net-Neutrality Win: Rules Prompt Sprint to Stop Throttling

FCC’s new net-neutrality rules went into effect Friday
Thomas Gryta

The Federal Communications Commission’s new net-neutrality rules are already having an effect.

Sprint, the third-largest U.S. wireless carrier, had been intermittently choking off data speeds for its heaviest wireless Internet users when its network was clogged. But it stopped on Friday, when the government’s new net-neutrality rules went into effect.

The rules, unlike prior attempts by the commission to ensure Internet traffic isn’t blocked or slowed, cover wireless networks like Sprint’s for the first time. That raises the stakes for carriers, whose past policies could in theory run afoul of newly vigilant regulators.

Sprint said it believes its policy would have been allowed under the rules, but dropped it just in case.

“Sprint doesn’t expect users to notice any significant difference in their services now that we no longer engage in the process,” a Sprint spokesman said.

The company also had reserved the right to prioritize data traffic depending on a subscriber’s plan. It had never done so, but has now decided the policy isn’t needed.

Sprint’s changes came days before the FCC said Wednesday that it plans to fine AT&T Inc. $100 million for allegedly misleading customers about unlimited wireless data plans. The FCC alleges AT&T sold consumers data plans advertised as unlimited, then capped data speeds after they used 5 gigabytes of data in a billing period.

The carrier says it did nothing wrong and will vigorously dispute the allegations.

AT&T and Verizon Communications Inc. stopped offering unlimited plans to new subscribers years ago, but T-Mobile and Sprint still sell them. T-Mobile doesn’t have a policy of throttling customers other than in extreme circumstances for network management, a spokeswoman said.

Verizon moved to throttle some unlimited users last year as well, but dropped the effort under pressure from the FCC.
http://www.wsj.com/article_email/an-...MTE0ODcxMzgzWj





Congress Just Got One Step Closer to Blocking Net Neutrality
Eric Geller

The House committee that writes the federal budget approved a provision on Wednesday that would freeze the FCC's net neutrality rules.

The provision of the spending bill written by the House Appropriations Subcommittee on Financial Services and General Government suspends the rules until a federal court rules on the legal challenge currently facing them. It also requires the FCC to publish the details of any rule it proposes at least 21 days before voting on it—a step that agency advocates oppose because it contravenes the typical rulemaking process—and significantly cuts the commission's funding.

The full House Appropriations Committee voted 30-20 to approve the bill, which becomes part of the overall spending bill that will eventually go to the House floor. The anti-net-neutrality provision is not present in the Senate appropriations bill.

The White House wrote to the committee to oppose the bill, attacking both the net-neutrality section and the reduction in the FCC's budget. "These cuts unnecessarily force the FCC to scale back important work on public safety, wireless spectrum, and universal service, while increasing overall costs for taxpayers," wrote Shaun Donovan, director of the White House's Office of Management and Budget.

Internet-freedom groups sent a letter to the leaders of the Appropriations Committee on Tuesday railing against the subcommittee's attack on net neutrality, which they said "would gut the Open Internet Order, leaving the American people and economy vulnerable to blocking, discrimination, and other unreasonable practices of gatekeeper broadband providers."

"By eliminating the FCC’s ability to protect net neutrality," the open-Internet groups' letter read, "this appropriations bill would have a chilling effect on our First Amendment rights and our economy."

Rep. Jose Serrano (D-N.Y.) introduced a rider to the spending bill that would have reversed the anti-net-neutrality language, but it was defeated. Joshua Stager, policy counsel for New America’s Open Technology Institute, said his group was disappointed at the result.

"As currently drafted, this bill jeopardizes our digital economy by tilting the Internet’s level playing field in favor of entrenched cable and telephone companies," Stager said in a statement.

After the vote, Chris Lewis, vice president of government affairs at open-Internet advocacy group Public Knowledge, slammed the committee's decision to leave the anti-FCC provisions intact.

"This partisan effort to legislate FCC action through appropriations riders is a sledgehammer to the important work to ensure open access to broadband and communications networks," Lewis said in a statement. "The Chairman of the Appropriations Committee made it clear that he wants to punish the FCC for its action on net neutrality and now his colleagues are making good on that threat."
http://www.dailydot.com/politics/net...use-committee/





Academic Publishers Reap Huge Profits as Libraries Go Broke

5 companies publish more than 50 per cent of research papers, study finds

A student leafs through bound journals at a library at the University of Toronto. A new study shows that the five largest, for-profit academic publishers now publish 53 per cent of scientific papers in the natural and medical sciences – up from 20 per cent in 1973. (Adrian Wyld/Canadian Press)

Think it's hard to make money in publishing in the digital age? Well, huge profits are still to be had – if you're a publisher of academic research journals.

While traditional book and magazine publishers struggle to stay afloat, research publishing houses have typical profit margins of nearly 40 per cent, says Vincent Larivière, a researcher at the University of Montreal's School of Library and Information Science.

Researchers rely on journals to keep up with the developments in their field. Most of the time, they access the journals online through subscriptions purchased by university libraries. But universities are having a hard time affording the soaring subscriptions, which are bundled so that universities effectively must pay for hundreds of journals they don't want in order to get the ones they do.

Larivière says the cost of the University of Montreal's journal subscriptions is now more than $7 million a year – ultimately paid for by the taxpayers and students who fund most of the university's budget. Unable to afford the annual increases, the university has started cutting subscriptions, angering researchers.

"The big problem is that libraries or institutions that produce knowledge don't have the budget anymore to pay for [access to] what they produce," Larivière said.

"They could have closed one library a year to continue to pay for the journals, but then in twenty-something years, we would have had no libraries anymore, and we would still be stuck with having to pay the annual increase in subscriptions."

Given the situation, he wanted to track what proportion of papers was being published by these large academic publishers compared with the past (and how big a deal it would be to cut some of those subscriptions).

'Oligarchy' of publishers

What he and his collaborators found was that the five largest, for-profit academic publishers now publish 53 per cent of scientific papers in the natural and medical sciences – up from 20 per cent in 1973. In the social sciences, the top five publishers publish 70 per cent of papers.

Essentially, they've become an oligarchy, Larivière and co-authors Stefanie Haustein and Philippe Mongeon say in a paper published last week in the open access, non-profit journal PLOS ONE.

"The control that they now have over the scientific output of researchers I would say is way too high," he said. "So that's why they can come up with annual increases that are between five, six, seven, even 10 per cent."

A look at a history of the journals showed how that happened. Traditionally, most journals were published by non-profit scientific societies. But when journals shifted from print to online digital formats, those societies couldn't afford the cost of the equipment needed to make the switch. Instead, they sold their journals to large, for-profit publishers, Larivière said.

Authors, reviewers unpaid

Aside from the costs of switching itself, the digital age has made publishing even cheaper for scientific journals, which already have a business model that sounds too good to be true. Unlike other authors, researchers don't get paid for the papers they write, and peer reviewers don't get paid either.
"The quality control is free, the raw material is free, and then you charge very, very high amounts – of course you come up with very high profit margins."

This model originally existed because it was necessary for sharing research in the age of print. It's no longer a practical necessity in the digital age.

But it continues to exist because researchers' funding and career advancement are tied to the number of papers they publish in top journals.

"We need journals because of their prestige," Larivière said. "Journals give discoveries and researchers a hierarchy."

He said part of the problem is that university libraries and not researchers pay the subscription fees, so many researchers aren't even aware that access to the journals costs money.

New open access policy

He thinks the scientific community needs to come up with a solution.

Change is already happening. Physics researchers have been publishing publicly accessible preprints of their papers on a site called arxiv.org since 1991. Most are posted there before being submitted to a journal and some are never sent to journals at all. Other disciplines could do that too.

Meanwhile, Canada's biggest granting agencies announced a new "open access" policy, effective May 1. It requires all recipients of grants from the Natural Sciences and Engineering Research Council, the Social Sciences and Humanities Research Council and the Canadian Institutes of Health Research (which already had a similar policy) to make their research results publicly available within one year of publication either via an online repository or an open access journal.

Larivière hopes changes like those will eventually reduce the dependence that universities have on their library subscriptions.
http://www.cbc.ca/news/technology/ac...roke-1.3111535

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

June 13th, June 6th, May 30th, May 23rd

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black