Peer-To-Peer News - The Week In Review - April 26th, '14
JackSpratts

Since 2002

"This is the Sony Betamax of this century." – Chet Kanojia


"The cloud computing industry is freaked out about this case." – David C. Frederick


"This capitulation will represent Washington at its worst. Americans were promised, and deserve, an Internet that is free of toll roads, fast lanes and censorship." – Todd O’Boyle






































April 26th, 2014




Google Asked to Censor Two Million Pirate Bay URLs
Ernesto

The Pirate Bay reached a dubious milestone today, as copyright holders have now asked Google to remove two million of the site's URLs from its search results. According to Google this means that between one and five percent of all Pirate Bay links are no longer discoverable in its search engine.

In an effort to make it difficult for the public to find pirated content, copyright holders send millions of takedown notices to Google every week.

One of the top domains listed in these notices is thepiratebay.se. Since the notorious torrent site doesn’t accept takedown requests itself, copyright holders have to turn to Google to do something about the appearance of their work on the infamous torrent site.

This week the number of thepiratebay.se URLs submitted to Google reached the two million mark. Nearly all of these links have been removed and can no longer be accessed through search results.

The chart below shows the number of links that have been submitted per week. There is a sharp decline towards the end of 2013 when The Pirate Bay used another domain name. The requests increased again in December when the torrent site switched back.

In total, the two million URLs were submitted in 93,070 separate takedown notices, averaging more than 20 links per takedown request. A staggering number, but one that pales in comparison to other sites.

Looking at the list of domains that received the most URL removal requests, The Pirate Bay ends up in 29th place. The top spot goes to filestube.com with more than 11 million URLs, followed by dilandau.eu, rapidgator.net, zippyshare.com and 4shared.com. Torrentz.eu, the first torrent site in the list, comes in 8th with 4.4 million URLs.

The million dollar question is of course whether all these takedown requests have had a significant impact on the availability of pirated content.

According to Google, the two million URLs represent between one and five percent of all links that are indexed, so it’s safe to say that there’s still plenty of Pirate Bay content available via Google. Similarly, removing the search results doesn’t hinder people from going to the notorious torrent site directly.
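For readers who want to check the arithmetic, a quick back-of-the-envelope calculation in Python (the index-size bounds are implied by Google's one-to-five-percent figure, not reported directly):

    urls, notices = 2_000_000, 93_070
    print(urls / notices)            # ~21.5 links per takedown notice
    # If 2M removed URLs are 1-5% of all indexed Pirate Bay links,
    # Google's index held somewhere between 40M and 200M of them:
    print(urls / 0.05, urls / 0.01)  # 40,000,000.0  200,000,000.0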

The Pirate Bay itself isn’t particularly concerned about this development. The site’s traffic has increased steadily over the past few years, and so has the number of files being uploaded to the site.

In common with the many ISP blockades of The Pirate Bay, it’s safe to conclude that people can find plenty of alternative routes to end up where they want to be…
http://torrentfreak.com/google-asked...y-urls-140420/





Popular UK Sports File-Sharing Site Shuttered
David Vranicar

The Sports Torrent Network, a brazenly named file-sharing site, shut down after UK police threatened to put its operators behind bars for up to 10 years.

TSTN was a hotbed for illicit broadcasts of European soccer, the National Hockey League, Formula 1 races and more. The site reportedly had about 20,000 members, making it, in the words of TorrentFreak, "possibly the largest site of its type."

TSTN did not itself host pirated content but instead facilitated users' quests to find said content on peer-to-peer networks.

Last year, the City of London Police launched the Police Intellectual Property Crime Unit, which acts on tips from industry groups to take down sites like TSTN.

The U.S. has embarked on its own crusade against sites that enable unauthorized sports streams and downloads. Authorities seized numerous sites just ahead of the Super Bowl in both 2011 and 2012. The 2012 pre-Super Bowl campaign boasted the cringe-worthy moniker "Operation Fake Sweep."
http://www.technewsworld.com/story/P...red-80337.html





Justices Void $3.4 Million Award to Child Pornography Victim
Adam Liptak

The Supreme Court on Wednesday set aside a $3.4 million award to a victim of child pornography who had sought restitution from a man convicted of viewing images of her. That figure was too much, Justice Anthony M. Kennedy wrote for a five-justice majority, returning the case to the lower courts to apply a new and vague legal standard to find a lower amount that was neither nominal nor too severe.

The victim in the case said the majority’s approach was arbitrary and confusing.

The two dissents to the majority opinion would have taken more categorical approaches. Chief Justice John G. Roberts Jr., joined by Justices Antonin Scalia and Clarence Thomas, said that restitution was a worthy goal, but that the federal law at issue did not allow awards when many people had viewed the images.

Justice Sonia Sotomayor took the opposite view, saying that each viewer could be held liable for the full amount of the victim’s losses.

The case arose from the prosecution of Doyle R. Paroline, who was convicted in 2009 of possessing 280 images of child pornography. Two of them were of a woman known in court papers as Amy.

Images of Amy being sexually assaulted by her uncle as a child have been widely circulated and have figured in thousands of criminal cases. Amy has often sought restitution for her losses under a 1994 federal law. Every viewing of child pornography, Congress found, “represents a renewed violation of the privacy of the victims and repetition of their abuse.”

Amy’s lawyers say her losses — for lost income, therapy and legal fees — amount to $3.4 million. She has been granted restitution in about 180 cases and has recovered about 40 percent of what she seeks.

The 1994 law allows victims of child pornography to seek the “full amount” of their losses from people convicted of producing, distributing or possessing it, and Amy asked the United States District Court in Tyler, Tex., to order Mr. Paroline to pay her the full $3.4 million.

Mr. Paroline said he owed Amy nothing, arguing that her problems did not stem from learning that he had looked at images of her. Amy’s uncle, who was sentenced to 12 years in prison for his crimes, bore the brunt of the blame, Mr. Paroline said, but was ordered to pay Amy just $6,325.

Mr. Paroline was sentenced to two years in prison, but the trial judge said Amy was not entitled to restitution, saying the link between Amy’s losses and what Mr. Paroline did was too remote.

The United States Court of Appeals for the Fifth Circuit, in New Orleans, disagreed and awarded Amy the $3.4 million she sought. Mr. Paroline should pay what he could and seek contributions from his fellow wrongdoers if he thought it was too much, the court said, relying on the legal doctrine of “joint and several” liability.

The Supreme Court adopted neither of the lower courts’ approaches. Acknowledging that he was employing “a kind of legal fiction,” Justice Kennedy said the only sensible method of apportionment was for courts to require “reasonable and circumscribed” restitution “in an amount that comports with the defendant’s relative role.”

“This cannot be a precise mathematical inquiry and involves the use of discretion and sound judgment,” Justice Kennedy wrote. Justices Ruth Bader Ginsburg, Stephen G. Breyer, Samuel A. Alito Jr. and Elena Kagan joined the majority opinion.

Chief Justice Roberts said the majority’s approach was arbitrary and impossible to square with the words of the 1994 law. “The statute as written allows no recovery,” he said. “We ought to say so, and give Congress a chance to fix it.”

Justice Sotomayor, in turn, was critical of the chief justice’s dissent, saying it “would result in no restitution in cases like this for the perverse reason that a child has been victimized by too many.”

Of the majority’s approach, she said that “the injuries caused by child pornography possessors are impossible to apportion in any practical sense.” She said she would award the full amount of Amy’s losses but let offenders pay them off over time until she was made whole.

In a statement issued through her lawyers, Amy said the Supreme Court’s decision, in Paroline v. United States, No. 12-8561, had left her “surprised and confused.”

“I really don’t understand where this leaves me and other victims who now have to live with trying to get restitution probably for the rest of our lives,” she said. “It’s crazy that people keep committing this crime year after year and now victims like me have to keep reliving it year after year.”

In another case, concerning the death penalty, the court split 6 to 3 over whether its precedents had established that capital defendants are entitled to a jury instruction that their failure to testify at sentencing hearings should not be held against them.

The case, White v. Woodall, No. 12-794, involved Robert K. Woodall, a Kentucky man who pleaded guilty to the 1997 rape, mutilation and drowning of Sarah Hansen, a 16-year-old high school student. He did not testify at his sentencing hearing in state court, and the judge declined to give the requested instruction. Mr. Woodall was sentenced to death.

Justice Scalia, writing for the majority, said Mr. Woodall’s challenge to his conviction in federal court must fail because the Supreme Court had not squarely ruled on whether defendants have a right to the instruction. In dissent, Justice Breyer said the right was clearly established.
http://www.nytimes.com/2014/04/24/us...hy-victim.html





Companies Built on Sharing Balk When It Comes to Regulators
David Streitfeld

In the newfangled sharing economy, questions about safety, taxes and regulation have tended to be an afterthought. That has helped propel companies like Uber, Airbnb and Lyft into the stratosphere.

But regulators as well as some elected officials across the country are increasingly questioning the presumptions and tactics of these start-ups, especially the notion that laws do not apply to them.

The companies are fighting back by rallying their impassioned and growing customer base. And they are stocking up on lawyers and lobbyists.

The latest confrontation comes Tuesday as Airbnb, the largest housing rental company, goes to court in Albany. It is fighting an effort by New York regulators to collect the names of Airbnb hosts who are breaking the law by renting out multiple properties for short periods.

Airbnb, which is now estimated to be worth $10 billion, is framing the dispute as a case of government scooping up more data than it needs for purposes that are vague.

“He claims to be targeting a small number of bad actors,” said David Hantman, Airbnb’s head of global public policy, referring to the New York attorney general, Eric T. Schneiderman. “But he is asking for data on thousands of regular New Yorkers.”

In a blog post, Airbnb said the regulators’ plan was to accuse Airbnb hosts in court “of being bad neighbors and bad citizens.” It added: “They’ll call us slumlords and tax cheats. They might even say we all faked the moon landing.”

It is against the law in New York City to rent out an apartment for less than a month, a 2010 measure meant to curb unregulated hotels. Mr. Schneiderman says 60 percent of Airbnb rentals in New York are illegal.

Existing laws, Airbnb executives say, do not fit the sharing economy.

“There are laws for people and there are laws for business, but you are a new category, a third category, people as businesses,” Brian Chesky, Airbnb’s chief executive, told an audience last fall. “As hosts, you are microentrepreneurs, and there are no laws written for microentrepreneurs.”

Micah Lasher, Mr. Schneiderman’s chief of staff, fired back that “being innovative is not a defense to breaking the law.”

Calling consumers to arms is a defense that has worked well for the businesses of the sharing economy. Last week, ride-sharing companies collected enough signatures in Seattle to overturn a new City Council measure that limited their growth. This week, they are trying to fight an effort by Arizona legislators to introduce some modest controls on their business.

Attempts at old-fashioned regulation are being cheered on by the taxi and hotel industries, which feel their livelihoods are being threatened. Affordable-housing advocates are now joining the fray, saying companies like Airbnb are worsening a crisis.

“You are throwing gasoline on a fire that the rest of us are trying to put out,” a coalition of housing groups, Real Affordability for All, asserted Monday in an open letter to Airbnb.

Until recently, start-ups generally avoided areas that were heavily regulated, like transporting people and renting rooms. Even a well-funded start-up did not have the money, time or patience to wrestle approval from bureaucrats.

Then, a few years ago, Silicon Valley had a collective insight: Better to ask forgiveness than permission.

At a recent valley conference on the sharing economy, Kevin Laws of AngelList, a site that unites start-ups and investors, explained that “the approach almost all start-ups take is to see if they can be successful fast enough so they can have enough money to work with the regulators.”

Many start-ups define “working with” regulators as simply accusing them of holding back innovation. But Mr. Laws said regulators were trying to balance many competing interests. “This assumption that they’re always bad in my experience has been almost 100 percent wrong,” he said.

The New York regulators are seeking through a subpoena the names of landlords they think are breaking the law.

On Jan. 31, there were 19,522 listings for New York City properties on Airbnb from 15,677 hosts, according to data the attorney general submitted to the court. But nearly a third of the listings were from only 12 percent of the hosts.
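In absolute terms (our arithmetic, derived from the reported figures rather than from the filing itself), that concentration works out roughly as follows:

    listings, hosts = 19_522, 15_677
    print(round(listings / 3))    # ~6,507 listings ("nearly a third")
    print(round(0.12 * hosts))    # ~1,881 hosts ("only 12 percent")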

One Airbnb landlord had 127 listings in Manhattan on a single weekend last fall. Sixteen other landlords had at least 15 listings each.

Even as Airbnb was increasing the volume of its attack on regulators on Monday, it was deleting numerous New York hosts from its site.

“When we examined our community in New York, we found that some property managers weren’t providing a quality, local experience to guests,” the company said in a blog post. The removed landlords controlled a total of 2,000 rooms.

The regulators took Airbnb’s action as proof that people are indeed abusing the site, and that Airbnb knows it.

“The publicly available data suggests that a disproportionate share of Airbnb’s business comes not from struggling artists and grandmothers but rather large commercial enterprises,” said Mr. Lasher, the chief of staff. He added that the “string of attacks” on the attorney general was “a transparent effort to distract from our investigation.”

On Tuesday morning, Airbnb intends to put forth a fresh attack on Mr. Schneiderman, releasing a memo that plays up those struggling small stakeholders.

“Kimberly is an Airbnb host on the Lower East Side,” the memo says. “She has a chronic illness that prevents her from working.” Kimberly is quoted directly: “My husband and I spent countless nights wondering if and when we would lose our home, or if we would have to stop treatment to keep a roof over our heads.” She concludes, “Airbnb saved us.”

What both sides seem to agree on is that being a New York landlord on Airbnb can be lucrative.

Two years ago, Airbnb hosts in New York were making an average of $21,000 a year, the company said at the time, and some as much as $100,000.

Of the top 40 highest-grossing Airbnb hosts in New York, each took in at least $400,000 over the last three years, Mr. Schneiderman said. Collectively, they have grossed more than $35 million over the last three years.
http://www.nytimes.com/2014/04/22/bu...egulators.html





Digital Public Library of America to Add Millions of Records to its Archive

At one year old, DPLA makes some big partnerships and looks back on its growth.
Megan Geuss

Today marks the Digital Public Library of America's one-year anniversary. To celebrate the occasion, the non-profit library network announced six new partnerships with major archives, including the US Government Printing Office and the J. Paul Getty Trust.

The DPLA is best described as a platform that connects the online archives of many libraries around the nation into a single network. You can search all of these archives through the digital library's website, and developers can build apps around the DPLA's metadata collection using the publicly available API.

It's easy to find historical documents, public domain works, and vintage photos online through a search on the DPLA's website. "To participate in the DPLA, all institutions have to donate their metadata under a CC0 license, send us a thumbnail, and host a publicly viewable full version of the item," DPLA Executive Director Dan Cohen told Ars.
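For the curious, here is a minimal sketch of what a query against the DPLA's public API might look like in Python. The v2 items endpoint and the api_key parameter reflect the public documentation of the period, but treat the details as assumptions; the key itself is a placeholder you would request from the DPLA:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; DPLA issues keys on request

    resp = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": "civil war maps", "page_size": 5, "api_key": API_KEY},
    )
    resp.raise_for_status()
    for doc in resp.json().get("docs", []):
        # each record carries the donated metadata under "sourceResource"
        print(doc.get("sourceResource", {}).get("title"))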

The fledgling library said today that one of its early partners, the New York Public Library, agreed to expand access to its digital collections in the coming year. It will increase from the initial 14,000 digitized items it contributed to the DPLA catalog to over 1 million such records.

In addition, the DPLA announced partnerships with the California Digital Library, the Connecticut Digital Archive, the J. Paul Getty Trust, the US Government Printing Office, Indiana Memory, and the Montana Memory Project.

One could argue that the DPLA's most important partnership is with the US Government Printing Office, which will provide access to a “growing collection” of government documents through the DPLA. “Examples include: the Federal Budget, laws such as the Patient Protection and Affordable Care Act, Federal regulations and Congressional hearings, reports and documents,” a DPLA press release notes.

Making good on 2013's promises

Back when Ars spoke to Cohen in 2013, he talked about some of his plans for the digital library, calling it a “multi-decade effort.” In its first year, one of the DPLA's most important goals has been to catalog and connect to as many digital works as possible. Ars caught up with Cohen again this year, and he told us that, with a grant from the Bill and Melinda Gates Foundation, the organization has made it a priority to help public libraries digitize their works and to advocate for putting resources online. “We are helping to train public librarians with the digital skills they will need for the twenty-first century and to participate in a large-scale digital project like DPLA,” Cohen wrote in an e-mail.

Cohen also told Ars last year that he hoped to increase the number of contemporary books to which the DPLA could direct its visitors. Given the state of copyright law in America, that's a tall order for a nascent non-profit, but the members of the DPLA have for some time been mulling either supporting or creating an alternative form of licensing for authors who want to use it (Creative Commons and Library License were discussed last year as potential standards to adopt). One year later, though, licensing is still a thorny issue at the DPLA. “We have continued to think about how to expand the realm of openly available e-books and indeed have made significant progress on this behind the scenes over the last year by talking to individuals and organizations interested in making [it a] common cause,” Cohen told Ars. He added that the DPLA would “have some announcements on this front in the near future.”

The DPLA received over $2 million in grants and donations in its first year. It says that it has amassed more than seven million digitized items in its archives to date, and last year it attracted more than one million unique hits to its website. But the more impressive numbers come from the fact that the digital library made its metadata available to anyone. It reported today that over the year it received nine million hits to its API.

Some of the apps that developers have made with the database include “a smartphone app called OpenPics that shows materials from DPLA related to the location where you are standing; a Pinterest-style app called Culture Collage that shows thumbnails of images related to a particular search on an endlessly scrolling page; and an app called FindDPLA that helps Wikipedia editors locate helpful primary sources to cite in their articles.” With third-party apps, the DPLA isn't just a public library; it lets anyone build their own public library to suit their needs.
http://arstechnica.com/business/2014...o-its-archive/





Netflix Researching “Large-Scale Peer-To-Peer Technology” for Streaming

Job ad says Netflix wants to "integrate P2P as an additional delivery mechanism."
Jon Brodkin

When we wrote about the possibility of Netflix using a peer-to-peer architecture for streaming earlier today, it seemed like more of a thought experiment than a real possibility.

But it turns out Netflix is looking for an engineer to research this very type of system. By searching Netflix job postings we found an opening for a senior software engineer who would work on Netflix's Open Connect content delivery network while researching how P2P technology could be used for streaming.

"Netflix seeks a seasoned Senior Software Engineer with a special focus in peer-to-peer networks," the listing says. Responsibilities include:

• Research and architecture of large-scale peer-to-peer network technology as applicable to Netflix streaming.
• Liaise with internal client and toolkit teams to integrate P2P as an additional delivery mechanism.
• Design and develop tools for the operation of peer-to-peer enabled clients in a production environment.

The successful applicant is required to have "At least five years of relevant experience with development and testing of large-scale peer-to-peer systems." Preferred qualifications include "Knowledge of and proven experience with P2P, CDN/HTTP cache/proxy technology."

The job posting appears to be at least a month old. When asked whether the company intends to stream video using P2P, a Netflix spokesperson replied only that "the best way to see it is that we look at all kinds of routes."

Our story this morning was spurred by a blog post written by BitTorrent, Inc. CEO Eric Klinker, who argued that a peer-to-peer architecture would help Netflix deliver its traffic without having to pay Internet service providers. We spoke with Klinker this afternoon, and he expanded on his thoughts.

"Netflix has a hard time getting traffic onto these networks. It's because they are in a hub-and-spoke model where the traffic flows in only one direction, from Netflix to the consumer," Klinker told Ars.

Netflix is paying Comcast for a direct connection to its network, even though it claims it should be eligible for "settlement-free peering," an exchange of traffic without money changing hands. Comcast (and other ISPs) say Netflix should pay up, since the video streaming company sends more traffic to consumers than vice versa.

"The foundation for settlement-free peering is you need something resembling a balance [in traffic]," Klinker said. "If you could make Netflix traffic look more like that, then you would have an opportunity to ride the same settlement-free economics that all the Tier 1 ISPs use to connect with each other."

Netflix CEO Reed Hastings himself has made the same point, writing that the company has asked ISPs "if we too would qualify for no-fee interconnect if we changed our service to upload as much data as we download—thus filling their upstream networks and nearly doubling our total traffic."

Given Netflix's job posting, that may be something the company is seriously considering.

Netflix, via torrents

Klinker stressed that his own idea is just a thought experiment and that he hasn't talked to Netflix on this topic. However, we asked him to describe how the system he envisions would work.

"I think you could use torrent technology in ways that provided a different user experience than torrents do today," Klinker said. "You could integrate torrents into something that looks exactly like Netflix today. It could even be a hybrid solution… [where] they would begin to stream from servers, but over time if the content were particularly popular most of that load could be picked up by peers around their customer base."

This wouldn't be entirely unprecedented. The Vudu video service used peer-to-peer technology when it launched in 2007.

Movies would be downloaded by customers if Netflix adopted such a model, Klinker said. Naturally, that would raise concerns about copyright and piracy. "I would imagine the rights holders for the content would insist on some basic locking technologies," Klinker said. Movies or TV shows could be downloaded by consumers even before they became available to the general public, so it could be "living on my storage ready to go once that release date comes and the technology unlocks it," he said.

This wouldn't necessarily make it easier to copy and redistribute Netflix movies to non-subscribers, he argued, noting that shows like House of Cards are already pirated. "Is there a foolproof way to lock content? No there isn't, and even Netflix isn't one," he said.

Klinker pointed to a technology that his company developed a few years ago called BitTorrent DNA as a possible model for Netflix.

"For content companies that wanted to integrate a hybrid hub-and-spoke and peer-to-peer solution, the consumers would access the content on the servers first and begin to get the streams flowing from Netflix servers," he said. "But quickly the peer-to-peer components would kick in and determine how available the underlying content was throughout the peer network and then the clients would regulate how much came from the servers and how much came from other peers based on that availability. There's a lot of intelligence that was built into the edge of this network [in BitTorrent DNA] that did all that balancing and regulation and filling the buffer and keeping the playback smooth."

Consumers would need to have control over whether they participate and how much they download, in part because they might need to be mindful of data caps imposed by their ISPs.

We asked Klinker why consumers would want to help a multi-billion-dollar company like Netflix reduce its costs, and he noted that Netflix could pass on benefits with lower prices and better quality. "The consumer could see value just in a better experience or Netflix could share in the value of this solution to consumers. In other words it could be a discount on their subscription," he said.

The system would have to adapt to all the different devices consumers own. While a desktop or laptop and some gaming consoles could store video, plenty of other devices that connect to Netflix cannot.

"Over time you could integrate storage into the Rokus and the Apple TVs," Klinker said. "I would expect Netflix to take a very broad approach to the market where they would simply have the technology and make it available for devices that have storage."

While Netflix could sell its own hardware to solve the storage problem, Klinker said, "I wouldn't expect they would do that. I think their approach to the consumer electronics market is the right one, where they integrate and license their technology to all players and that way they get broad market adoption."

Theoretically, BitTorrent could help Netflix with building a peer-to-peer system, but Klinker said there haven't been any discussions along those lines. "We have done licensing deals in the past. It's not our core business, which today is largely focused on enterprise applications like BitTorrent Sync and our consumer clients," he said.
http://arstechnica.com/information-t...for-streaming/





OpenSSL and Linux: A Tale of Two Open-Source Projects
Nicole Perlroth

The Heartbleed bug has cast a bright and not entirely flattering light on the open-source movement’s incentive model.

When a crucial and ubiquitous piece of security code like OpenSSL — left vulnerable for two years by the Heartbleed flaw — can be accessed by all the world’s programming muscle, but only has one full-time developer and generates less than $2,000 in donations a year, clearly something is amiss.

But then there’s Linux.

Linux, arguably the world’s most emblematic open-source project, provides a counterpoint to OpenSSL’s problems. Volunteers all over the world submit an average of seven changes to Linux every hour, and millions of lines of code improvements and fixes are voluntarily added to the software every year. Over 180 major companies, including Hewlett-Packard, Oracle, IBM and Samsung, contribute around half a million dollars a year to the Linux Foundation, the nonprofit that supports the Linux system.

So what explains the discrepancy between the inattention to OpenSSL and the great fortune of Linux? Good old lack of awareness, experts say.

Open-source advocates and participants say Linux has simply had the benefit of strong brand ambassadors and better name recognition than OpenSSL.

Thousands of televisions sold every day are controlled by Linux. Ninety-two percent of the world’s high-performance computing systems run on Linux. It runs financial systems, Internet and air traffic control systems, Android smartphones and Kindles.

Linux may be invisible to most consumers, but because it is used so widely and in so many vital systems, experts say, companies are acutely aware that their livelihood depends on Linux’s health and are more than happy to contribute financially and in the form of programmers’ time and energy.

The fact that OpenSSL escaped such awareness was “a screw-up,” said Jim Zemlin, the executive director of the Linux Foundation.

“I don’t believe there was some nefarious free-rider problem going on here, or that this was a case of perverse incentives,” he said. “It was a screw-up. With Heartbleed and OpenSSL, most people were looking around and saying ‘Oh yeah, what ever happened to those guys?’”

Part of the problem, some open-source advocates say, is that OpenSSL is in dire need of someone like Linus Torvalds, the Finnish programmer who developed the Linux operating system, jump-started the open-source movement and is actively involved with the project today.

“You do need an ambassador,” Mr. Zemlin said. “We regularly employ Linus Torvalds and Greg Kroah-Hartman,” another well-known Linux developer.

Mr. Zemlin regularly sits on conference panels, does press interviews, and is often on the TED conference circuit reminding people of Linux’s humble origins and its current importance.

Some open-source advocates say Mr. Zemlin is exactly what OpenSSL is missing. “OpenSSL simply hasn’t had the benefit of a leader like Jim around it,” said Brian Behlendorf, a board member at the Electronic Frontier Foundation and the Mozilla Foundation, another open-source project, which runs the Firefox browser.

“There’s a new world order where developers have become the new king-leaders and I’m just their lobbyist,” Mr. Zemlin said. “You have to fund these projects in a way that is on a level with their importance to our society.”

Until the Heartbleed bug was disclosed last week, not many had heard of Dr. Stephen Henson, the developer who is the only person to work full time on OpenSSL, or the OpenSSL Software Foundation and its director, Steve Marquess.

The good news, open-source advocates say, is that in the last week that has changed.

Now Mr. Zemlin and other open-source advocates are working to fix what has been coined the “OpenSSL problem,” which affects many other open-source projects that play a crucial role in the digital age but are maintained by a small, strained cadre of volunteers.

“OpenSSL is far from unique,” said Eric Steven Raymond, author of the book “The Cathedral and the Bazaar,” an open-source manifesto of sorts. “There are lots of critical libraries maintained by volunteers that are not given enough attention.”

Mr. Raymond pointed to other crucial systems like the Domain Name System, a kind of switchboard for the Internet that is held together by volunteers at nonprofits and some corporations, and the Internet Time Service protocol, a crucial feature that syncs the clocks of computers over the Internet. Major financial exchanges depend on the protocol’s health, but it’s currently being managed by one volunteer programmer in Maryland.

“The leadership is aware this is a serious incentive problem here and we are going to fix it,” Mr. Raymond said.
http://bits.blogs.nytimes.com/2014/0...urce-projects/





One Week of OpenSSL Cleanup
weerd

After the news of heartbleed broke early last week, the OpenBSD team dove in and started axing it up into shape. Leading this effort are Ted Unangst (tedu@) and Miod Vallat (miod@), who are head-to-head on a pure commit count basis with both having around 50 commits in this part of the tree in the week since Ted's first commit in this area. They are followed closely by Joel Sing (jsing@) who is systematically going through every nook and cranny and applying some basic KNF. Next in line are Theo de Raadt (deraadt@) and Bob Beck (beck@) who've been both doing a lot of cleanup, ripping out weird layers of abstraction for standard system or library calls.

Then Jonathan Gray (jsg@) and Reyk Flöter (reyk@) come next, followed by a group of late starters. Also, an honorable mention for Christian Weisgerber (naddy@), who has been fixing issues in ports related to this work.

All combined, there've been over 250 commits cleaning up OpenSSL. In one week. Some of these are simple or small changes, while other commits carry more weight. Of course, mistakes occasionally get made, but these are quickly fixed again; the general direction is clear: move the tree forward towards a better, more readable, less buggy crypto library.
http://undeadly.org/cgi?action=artic...20140418063443





Big Tech Companies Offer Millions to Help with Heartbleed Crisis

The world's biggest technology companies have agreed to donate millions of dollars to set up a group that will fund improvements in open source programs like OpenSSL, the software whose "Heartbleed" bug has sent the computer industry into turmoil.

Amazon.com Inc, Cisco Systems Inc, Facebook Inc, Google Inc, IBM, Intel Corp and Microsoft Corp are among a dozen companies that have agreed to be founders of the group, known as Core Infrastructure Initiative. Each will donate $300,000 to the venture.

The non-profit Linux Foundation announced formation of the group on Thursday.

It will support development of open source software that makes up critical parts of the world's technology infrastructure, but whose developers do not necessarily have adequate funding to support their work, said Jim Zemlin, executive director of the Linux Foundation.

(Reporting by Jim Finkle; Editing by Chizu Nomiyama)
http://www.reuters.com/article/2014/...A3N13E20140424





The Plot to Kill the Password

The world’s most powerful companies want you to log in with fingerprints and eyescans
Russell Brandom

Last Friday, Samsung's new Galaxy S5 arrived with an unexpected and underhyped feature. Like the iPhone 5S, it came with a fingerprint reader, but this reader plugs directly into PayPal, which in turn connects you to dozens of different payment systems. It’s a clever trick: instead of a password, all you need is a fingerprint, carrying you through the entire web. If it catches on, soon you won’t need a password at all.

Of course, the S5’s fingerprint scanner might fail — by all accounts, it’s far from perfect — but that won’t be our only chance. Google is working on USB keyfobs that would log users into their Google accounts; they’re being tested internally, and should roll out before the end of the year. Microsoft wouldn’t name specifics, but also teased an "alternative to passwords" that’s based on the same spec.

It seems like good luck, all the companies arriving at the same place at the same time, but it’s just the opposite. This moment was carefully planned, built on top of a delicate standard that’s taken two years to construct. Since 2012, a group called the FIDO Alliance has been working on that standard, building a bridge between hardware projects like Samsung’s fingerprint reader and the online services they’re connecting to. The work has been helped along by some of the most powerful names in tech and finance, Google and Microsoft together with Valley outsiders like Bank of America and MasterCard. The nature of the spec makes it easy to plug into, so if Samsung strikes out with its fingerprint scanner, it can try again next year with an iris scanner or an NFC token.

It’s a plot to kill the password, one that’s taken years and millions of dollars to set in motion. And with the launch of the Galaxy S5, the first major phone to embrace the FIDO spec, the plot is underway.

THE PASSWORD'S BILLION-DOLLAR PROBLEM

The problems with the password are obvious. The login system was first designed for time-sharing computers in the ’60s, working on mainframes that took up an entire lab. To use the computer, you tapped in your login name and password, which told the computer who was sitting at the terminal and which files to make available. Stealing someone’s password was good for a practical joke, but not much else: there was only one computer where you could use it, and not much personal information on display once you’d broken in.

Fifty years later, the right password can get you almost anything. You can read emails, order a new TV, or hijack cloud-storage accounts until you’ve accessed or deleted every trace of a person’s digital life. You can do it from anywhere with an internet connection — effectively anywhere in the world — and it’s easy to hide where you’re doing it from. You can get the password from a data breach (most people still use the same password in multiple services) or just socially engineer a customer-service rep. It happens all the time. These hacks are personally devastating, and cost businesses billions of dollars every year. Two-factor authentication helps, splitting the password between two different systems and devices, but it's far from perfect; in the end, it just means attackers have to crack two codes instead of one. No matter how you try to fix it, you run into the basic insecurity of the password at the root of it all.
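The familiar six-digit two-factor codes are commonly generated with the time-based one-time password algorithm (TOTP, RFC 6238). A minimal Python sketch, included only to make the "two systems, one shared secret" idea concrete:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, step=30, digits=6):
        """Time-based one-time password (RFC 6238)."""
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # server and phone share the secret once, then derive matching codes:
    print(totp("JBSWY3DPEHPK3PXP"))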

Around 2010, PayPal decided to do something about it. It started with a conversation between PayPal's head of security Michael Barrett, fingerprint security entrepreneur Ramesh Kesanupalli, and Taher Elgamal, the father of SSL and one of the most renowned cryptographers in the world. Kesanupalli wanted a new standard for fingerprinting, something that would let his print readers be used without an expensive database. Barrett wanted a stronger, easier way for PayPal to log in, and Elgamal, with his legendary cryptography background, was clearly the man to build it. Two years later, the group launched the FIDO Alliance, an open group trying to wean companies off passwords for good, funded by companies who thought they would benefit. The group launched in 2012 with PayPal and five hardware companies, but grew fast. Google signed on in April of last year; Microsoft followed suit in December.

ZERO-KNOWLEDGE PROOF

The alliance is built on a simple, powerful idea. If web-goers logged into their computers with local fingerprint readers, sites could log them in automatically using a technique called Zero-Knowledge Proof. It’s a protocol for proving that a successful ID has been made, like a fingerprint or iris scan, without giving away any details of the fingerprint or iris in question. (It means that, in a Heartbleed scenario, attackers wouldn't get access to your actual fingerprint.) Using that protocol, a single local device could authenticate you to the entire web. In the age of the mobile web, you might not even need a new gadget. "Users have very high device affinity, and they tend to have devices with them a lot," says Barrett. "I’m looking around my office, and I’ve got within 5 feet of me, a smartphone, two PCs, and a tablet."
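The article doesn't say which protocol underlies FIDO's zero-knowledge approach, but Schnorr identification gives the flavor of how a prover can demonstrate knowledge of a secret without revealing it. A toy round in Python, with deliberately tiny parameters (illustration only, not production cryptography):

    import secrets

    p, q = 283, 47               # q divides p - 1 (282 = 2 * 3 * 47)
    g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

    x = secrets.randbelow(q - 1) + 1  # prover's secret, never transmitted
    y = pow(g, x, p)                  # public key registered with the service

    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                  # 1. prover commits
    c = secrets.randbelow(q)          # 2. verifier sends a random challenge
    s = (r + c * x) % q               # 3. prover responds

    # verifier accepts iff g^s == t * y^c (mod p), learning nothing about x
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("identity proven; the secret never left the device")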

This is what we’ve seen on the S5: you’re not just using your fingerprint to log in, but a combination of the right fingerprint and the right phone. You’ve always got a finger and a phone, so logging in isn’t a problem, but the combination makes the security much, much harder to break. Either one can be duped individually (your phone could be stolen, your fingerprint could be copied), but duping both at once would be incredibly difficult.

And using Zero-Knowledge Proof, that authentication can be shared with any service you want to log into, whether it’s using a remote code or something more direct like NFC. It’s a line of thinking that’s also taken hold at Google. Mayank Upadhyay, the Googler directing the company’s authentication efforts, sees the keyfob as just the first step, moving towards a time when every login happens on a mobile device. "We believe longterm that it needs to be built into the thing that you're already carrying, which is your smartphone."

THE TOUCH ID PROBLEM

While FIDO has some powerful supporters, there’s one name that’s noticeably absent: Apple. The iPhone 5S’ Touch ID is still the most usable mobile fingerprint scanner we’ve seen, and it’s kept its distance from FIDO. The company behind the hardware, AuthenTec, dropped out of the consortium as soon as it was acquired by Apple, and since then, Apple and FIDO have developed their tech separately. While FIDO has kept their spec open, Apple has taken the opposite approach, keeping Touch ID closed off even from iOS developers. The current version of Touch ID can only be used to unlock the phone and log into iTunes, and it’s unclear how or when it will open up further. It’s a walled garden, and with the full force of the iPhone behind it, it could be a serious roadblock to FIDO’s plans.

But even if FIDO loses the battle for fingerprints, it could still win the larger war of authentication. The open standard makes FIDO easy to plug into, so if Samsung decides it wants to shift from a FIDO-compliant fingerprint scanner to a FIDO-compliant iris scanner, it would be as easy as swapping out the hardware. On the service side, PayPal never needs to know the difference. While the iPhone is locked into fingerprint-scanning for the next few generations, the rest of the industry can use whatever works. At the moment, that means eye scanners and USB keys, but it also means making room for tech that hasn’t been invented yet, like DNA scans or biorhythm markers. As long as the standard is open, it can accommodate anything.

They could still be wrong. The new generation of ID systems could flop, leading to a mass retreat and another 15 years of leaky logins. Consumers might find fingerprints and eye-scans creepy, or push back against the idea that they can’t log in from a friend’s computer. Like most ambitious schemes, it’s a bet — and there are dozens of reasons it might not pan out.

In the end, Barrett’s bet is that the new systems will just be too easy to pass up. What’s a fingerprint or a USB key, weighed against 30 passwords? Who could turn down an easier way to log in? FIDO might want something safer, but customers just want what’s easy. "Like water, they flow downhill," he says. "They flow to the point of lowest friction."
http://www.theverge.com/2014/4/15/56...l-the-password





Electronics Owners: Get $10 in a DRAM Class Action Settlement
Dave Greenbaum

If you owned any electronic device that had DRAM in it, a price fixing class action settlement could pay you at least $10. Here's how to find out if you qualify.

The settlement includes any individual or business that bought a device with DRAM between 1998 and 2002. The products included are not just computers but other electronics such as graphics cards, DVD players and MP3 players. Filing a claim does not require proof of purchase, but "large claims will likely be required to supply proof of purchases" and "the Claims Administrator may request it at a later time, so save any documentation/proof that you may still have."

The details of how much the settlement pays out are tricky. The FAQ indicates the minimum payment is $10. Larger claims could get a larger settlement. Alternatively, if the number of small claimants exceeds 5 million, smaller claims may not pay out (see Question 11). The claim process is simple: tell them how many devices you had between 1998 and 2002, provide your contact info, affirm you are being honest, and click submit. A quick and easy potential $10 or more (or nothing at all).

The deadline for filing a claim is August 1, 2014. If you would rather opt out of the settlement so you can file suit directly, you only have until May 5, 2014.

DRAM - $310 Million Settlement | DRAMClaims.com
http://lifehacker.com/electronics-ow...ett-1567697819





Toshiba’s U3 MicroSD Cards are Fast Enough to Handle 4K Video, Still Tiny and Easy to Lose
Les Shu

MicroSD cards have become so ubiquitous that even some digital cameras now use them. That’s fine and dandy if speed isn’t a concern, but if you’re snapping lots of photos and you need your camera to keep up, MicroSD pales in comparison to its larger SD brethren. But that’s about to change, as Toshiba has announced the world’s fastest MicroSD cards, giving a boost to smartphones, tablets, cameras, and anything else that could utilize fast data transfers. Just how fast? Fast enough to handle 4K video.

The new cards, available in 32GB and 64GB capacities, are rated UHS Speed Class 3 (U3) and are the first MicroSD cards to meet the UHS-II interface standard. This means that the cards can support “high-quality 4K video capture at constant minimum write speeds of 30MB/s,” Toshiba says. “This means that 4K2K video, live broadcast, and content can be recorded on high-performance cameras.” It also means that future products like cameras and smartphones could record and store high-res media without having to significantly change their form factors to accommodate larger or specialty cards – we know how much we all love our gadgets to be svelte.

The 64GB card has a max read speed of 260MB per second and a write speed of 240MB/s; the 32GB handles 145MB/s and 130MB/s, respectively. Toshiba says it “represents an 8x write speed improvement and 2.7x read speed improvement when compared to Toshiba’s current 32GB MicroSD UHS-I card.” Besides being able to support high-res photography in burst mode, it reduces the transfer time when accessing or downloading content to your smart devices, such as music, photos, and videos. And that’s just the consumer market; these cards could also benefit commercial products like security cams. The nice part is that you can use them in standard SD slots via an adapter, although we assume there’d be a price premium for the smaller card.
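Working backwards from Toshiba's multipliers gives a sense of the jump (assuming the comparison is against the 64GB card's figures; the implied numbers are our arithmetic, not Toshiba's):

    new_write, new_read = 240, 260   # MB/s, the new UHS-II card
    print(new_write / 8)             # ~30 MB/s write on the current UHS-I card
    print(round(new_read / 2.7))     # ~96 MB/s read on the current UHS-I card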

Price and availability have not been determined, but Toshiba has reportedly already given these cards to manufacturers. This means we could see upcoming devices supporting the cards, able to better support more demanding applications like ultra-high-res photography and video. We’re sure manufacturers like SanDisk won’t be far behind.
http://www.digitaltrends.com/photogr...iny-easy-lose/





Cord Cutting on The Rise, Especially Among the Young
Karl Bode

Cord cutting is on the rise, according to a new study by Experian Marketing Services. According to the report, the number of cord cutters (defined by Experian as people who have broadband but have never had cable or have just dropped it) has risen 44 percent in the last three years. The firm says 6.5 percent of U.S. households were cord cutters in 2013, up from 4.5 percent in 2010. The number increases for users who subscribe to Netflix or Hulu: a fifth of Americans who use those services refuse to subscribe to cable TV.
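The headline growth figure follows directly from the household shares, as a quick check shows:

    share_2013, share_2010 = 6.5, 4.5            # percent of U.S. households
    print(f"{share_2013 / share_2010 - 1:.0%}")  # 44%, the reported rise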

Experian notes, however, that it's among the young where the real sea change is occurring. Almost a quarter of young adults between 18 and 34 who subscribe to Netflix or Hulu don't pay for TV.

"The young millennials who are just getting started on their own may never pay for television," said John Fetto, a senior analyst at Experian Marketing Services. "Pay TV is definitely declining."
http://www.dslreports.com/shownews/C...e-Young-128605





Comcast Profit Rises as NBC Revenue Grows
AP

Comcast Corp. said Tuesday that its first-quarter net income rose by 30 percent as ad revenue surged at broadcast network NBC, helped by the Winter Olympics in Sochi and Jimmy Fallon's elevation as host of "The Tonight Show."

The results beat Wall Street estimates and its shares edged up in premarket trading.

Comcast is the largest cable company in the country, with 22 million video customers and 19.8 million Internet customers. It is in the midst of an expected yearlong review of its $45 billion acquisition of No. 2 rival Time Warner Cable Inc.

Regulators are examining whether the combination would give it undue pricing power over customers and too much leverage with programmers.

Its net income in the quarter through March rose to $1.87 billion, or 71 cents per share, from $1.44 billion, or 54 cents per share, a year ago.

Excluding one-time items, adjusted earnings came to 68 cents per share, beating the 64 cents expected by analysts polled by FactSet.

Revenue grew 14 percent to $17.41 billion from $15.31 billion. That's also higher than the $16.99 billion expected by analysts.
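Both headline growth rates check out against the reported dollar figures:

    net_income = (1.87, 1.44)    # $ billions, Q1 2014 vs. Q1 2013
    revenue = (17.41, 15.31)
    print(f"{net_income[0] / net_income[1] - 1:.0%}")  # 30%
    print(f"{revenue[0] / revenue[1] - 1:.0%}")        # 14%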

NBCUniversal revenue grew 29 percent to $6.88 billion while cable services revenue grew 5 percent to $10.76 billion.

Olympics broadcast rights boosted NBCU revenue by $1.1 billion. Even excluding the games, broadcast revenue rose 8 percent, helped by Fallon's selection for NBC's late night slot, replacing longtime host Jay Leno. The network was also boosted by more hours of "The Voice" and the popularity of new shows like "The Blacklist."

On the cable connections side, Comcast added 24,000 video customers during the quarter, the second quarterly gain in a row following a six-and-a-half year losing streak. However, those gains are likely to come to an end in the current quarter as college students disconnect service at the end of the semester.

Comcast added 383,000 high-speed data customers and 142,000 voice customers.

The company says the roll-out of its newest X1 set-top box is starting to contribute to better video results. It is now setting up 15,000 to 20,000 boxes per day, up from 10,000 at the end of the year. An improved user interface is helping reduce customer disconnects while boosting video-on-demand spending and increasing uptake of digital video recorder service. Within three years, Comcast hopes the majority of its customers will have X1.

Comcast shares rose 37 cents to $50.25 in premarket trading about an hour before the market open. Its shares have fallen 4 percent in regular trading so far this year.
http://www.nytimes.com/aponline/2014...s-comcast.html





Comcast Bills Lowered $2.4 Million by Scammers Who Accessed Billing System

Men plead guilty, face prison and have to pay money back to Comcast.
Jon Brodkin

Two men pleaded guilty to a scam that lowered the bills of 5,790 Comcast customers in Pennsylvania by a total of $2.4 million. They now face prison time and will have to pay their ill-gotten wealth back to Comcast.

30-year-old Richard Justin Spraggins of Philadelphia pleaded guilty in February and was "ordered to make $66,825 in restitution and serve an 11- to 23-month sentence," the Times-Herald of Norristown wrote at the time.

Spraggins was described as the second-in-command of the operation. The accused ringleader, 30-year-old Alston Buchanan, pleaded guilty last week. "Buchanan faces up to 57½ to 115 years in prison, although Buchanan will likely serve a lesser sentence than the maximum," the newspaper wrote.

There is no agreed-upon sentence—the judge will decide how long Buchanan will spend in prison and how much he'll have to pay back. “Comcast lost $2.4 million and it will be up to the judge to see how much the defendant will be required to pay back," prosecutor Jeremy Lupo said.

Comcast customers saved an average of $414 in exchange for paying the defendants $75 to $150. "According to the affidavit of probable cause, Buchanan bought the login identification from a Comcast employee and was able to login to the system remotely and change the accounts to lower monthly bills," the Times-Herald wrote. According to another Times-Herald article, Buchanan "worked as a dispatcher for Comcast from May 2007 to March 2008 and was familiar with the company’s billing system."
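The reported average lines up with the totals, as a quick check confirms:

    total_reduction, customers = 2_400_000, 5_790
    print(total_reduction / customers)   # ~414.5, the reported $414 average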

Tipped off by a suspicious customer, Comcast reported the scam to police in April 2012 after it had been going on for a year. "Investigators were able to make contact with someone known as 'Nick' who told them to deposit money into a bank account. The bank account was Buchanan’s," the newspaper wrote. Police searched Buchanan's apartment and found ledgers filled with customer information, along with $100,000 in cash.

UPDATE: In response to questions from Ars, Comcast said that it has "enhanced our audit/review process to help identify and prevent this type of activity in the future."

Customers were not charged retroactively for the discounted amounts, but their bills were "corrected on a moving-forward basis."

Lupo said in a court hearing last year that "all customers ended up having to pay more money to make up for the loss."
http://arstechnica.com/tech-policy/2...illing-system/





Netflix Teases a Price Hike, Doesn’t Deny Comcast Played a Role

The shrewd politics behind the Netflix price hike
Brian Fung

If you're on the fence about getting a Netflix subscription, you may want to act now. The company is hinting at a price hike that could hit sometime in the next couple months.

In a letter to shareholders, company executives wrote that a one- or two-dollar monthly increase could soon be levied on new customers, with a rise in prices eventually hitting existing customers as well:

As expected, we saw limited impact from our January price increase for new members in Ireland (from €6.99 to €7.99), which included grandfathering all existing members at €6.99 for two years. In the U.S. we have greatly improved our content selection since we introduced our streaming plan in 2010 at $7.99 per month. Our current view is to do a one or two dollar increase, depending on the country, later this quarter for new members only. Existing members would stay at current pricing (e.g. $7.99 in the U.S.) for a generous time period. These changes will enable us to acquire more content and deliver an even better streaming experience.

The increase comes just weeks after Netflix signed a controversial deal agreeing to pay Comcast -- the nation's largest cable TV company and among the biggest providers of fast Internet connections -- an undisclosed amount of money in exchange for better streaming speeds. That led critics to wonder whether Netflix would pass on any new costs it may have incurred as a result of the agreement to consumers.

When asked whether the Comcast negotiations played a role in Netflix's decision to raise prices, Netflix spokesperson Joris Evers was vague, writing in an e-mail that "the main motivation is to provide more great things to watch, but content delivery costs are part of the costs we have to pay."

When asked in a follow-up inquiry whether those content delivery costs specifically included the Comcast deal, Evers replied: "I am saying that content delivery is part of our costs. I am not saying there is an increase or decrease in content delivery costs."

More likely, Netflix has chosen to raise prices while people's memory of the Comcast deal is still fresh — opportunistically deflecting any ill-will over the hike toward the cable company, one analyst said.

"I expect the price increase is more driven by content costs than interconnection fees," said Paul Gallant, a telecom policy analyst at Guggenheim Partners. "But politically, it would certainly make sense for Netflix to explain its price increase partly by reference to its Comcast deal."

Consumer advocates are closely watching for any fallout from the Comcast deal, which they viewed as a step toward big distributors controlling what travels over the Internet, at what speed and at what price.
http://www.washingtonpost.com/blogs/...ix-price-hike/





Netflix Lays Out Opposition to Comcast-Time Warner Cable Merger in Shareholder Letter
Cecilia Kang and Brian Fung

Netflix took a swing at Comcast’s $45 billion bid for Time Warner Cable on Monday, arguing that the merger would create a single company with too much power over the delivery of high-speed Internet service.

At the same time, Netflix delivered a more subtle jab at the cable giant, hinting that its controversial deal to pay Comcast for better streaming speeds may have contributed to its decision to raise monthly prices for subscribers, analysts said.

Netflix, the first major Web company to criticize Comcast’s acquisition plans, laid out its opposition to the merger in a letter to shareholders. Combined, Comcast and Time Warner Cable could control as much as half of all broadband Internet subscriptions, with most of those homes having no alternative options for broadband service, wrote chief executive Reed Hastings and Chief Financial Officer David Wells.

The executives added that Web companies could be forced to pay Comcast more for high-quality connections. In February, Netflix reached a deal with Comcast to pay “interconnection” fees to ensure a smooth stream of its video service to Comcast’s customers.

“Comcast is already dominant enough to be able to capture unprecedented fees from . . . services such as Netflix,” the executives said. “The combined company would possess even more anti-competitive leverage to charge arbitrary interconnection tolls for access to their customers. For this reason, Netflix opposes this merger.”

In a statement, Comcast spokeswoman Jennifer Khoury, a senior vice president, responded: “Netflix’s opposition to our Time Warner Cable transaction is based on inaccurate claims and arguments. There has been no company that has had a stronger commitment to openness of the Internet than Comcast.”

She also argued that the deal to stream Netflix videos at high speeds had little to do with that company’s decision to raise prices.

She added: “Netflix should be transparent that its opinion is not about protecting the consumer or about net neutrality. Rather, it’s about improving Netflix’s business model by shifting costs that it has always borne to all users of the Internet and not just to Netflix customers.”

In a letter to shareholders Monday, Netflix executives said the price hike would include a $1 or $2 monthly increase that could be levied on new customers at some point soon. A similar rise in prices eventually would hit existing customers.

“In the U.S., we have greatly improved our content selection since we introduced our streaming plan in 2010 at $7.99 per month. Our current view is to do a one or two dollar increase, depending on the country, later this quarter for new members only. Existing members would stay at current pricing (e.g. $7.99 in the U.S.) for a generous time period,” the Netflix shareholder letter said.

In after-hours trading, Netflix stock shot up as much as 6.6 percent.

When asked whether the Comcast streaming deal played a role in Netflix’s decision to raise prices, Netflix spokesman Joris Evers was vague, writing in an e-mail that “the main motivation is to provide more great things to watch, but content delivery costs are part of the costs we have to pay.”

When asked in a follow-up inquiry whether those content delivery costs specifically included the Comcast streaming deal, Evers replied: “I am saying that content delivery is part of our costs. I am not saying there is an increase or decrease in content delivery costs.”

More likely is that Netflix has chosen to raise costs while people’s memory of the Comcast deal is still fresh — opportunistically deflecting any ill will over the hike toward the cable company, one analyst said.

“I expect the price increase is more driven by content costs than interconnection fees,” said Paul Gallant, a telecom policy analyst at Guggenheim Partners. “But politically, it would certainly make sense for Netflix to explain its price increase partly by reference to its Comcast deal.”

Consumer advocates have been watching closely for any fallout from the Comcast-Netflix streaming deal, which they viewed as a step toward big distributors controlling what travels over the Internet, at what speed and at what price.
http://www.washingtonpost.com/busine...b57_story.html





Aereo Case Will Shape TV’s Future
David Carr

Throughout America’s business history, the victories and spoils went to the visionaries who made all manner of things — actual things like cars, pharmaceuticals and entertainment.

But more and more, many of the splashy business victories are going to companies that find a way to put a new skin on things that already exist. Uber does not own a single cab, yet it has upended the taxi industry. Airbnb doesn’t possess real estate, yet it has become a huge player in the lodging market. WhatsApp remapped texting on existing telecommunications infrastructure and — thanks to its acquisition by Facebook — has as much as $19 billion to show for it. The list goes on, but you get the idea.

Since 2012, Chet Kanojia has been building a business, backed by the media mogul Barry Diller, with ambitions to join that cohort. His start-up, Aereo, uses tiny remote antennas to capture broadcast TV signals and store them in the cloud, where consumers can watch them on a device of their choosing — no cable box, no cable bundle and most important, no expensive cable bill.

Instead, consumers pay $8 to $12 a month to watch almost live — there is a delay of a few seconds — and recorded programs from the major broadcast networks and public television. It’s a threat to both the lucrative cable bundle and the networks that receive rich fees for being part of that cable package. Aereo would give so-called cord cutters the means to assemble a more affordable package of online streaming options like Amazon Prime, Apple TV or Netflix, and still spend a Sunday afternoon watching the N.F.L. and “60 Minutes” immediately afterward. As antenna-driven viewing has dropped and digital consumption has surged, Aereo is a way to put old wine in a new bottle.

It is a crafty workaround of existing regulations, one that rides on the 2008 Cablevision court ruling, which held that consumers had the right, through their cable boxes, to record programming. But cable companies pay broadcasters billions in so-called retransmission fees, while Aereo pays them exactly nothing. (And the case is not just about Aereo — it opens the gate for cable companies or others to build a similar service and skip the billions in payments to the networks.)

The broadcast networks have a technical legal term for this particular innovation — theft — and they have been trying to shut down Aereo from the start.

It all collides on Tuesday, when the Supreme Court will hear the case American Broadcasting Companies v. Aereo. It will be up to the court to decide whether the service is a consumer-friendly reskinning of the broadcast universe or just one more example of an Internet pirate trying to loot copyrighted content. In some senses, the case is as big a deal as the Betamax ruling in 1984, which allowed consumers to record programming.

“This is the Sony Betamax of this century,” Mr. Kanojia said on the phone last week, citing a case that is likely to come up a lot on Tuesday.

The entertainment industry hated the Betamax decision and said it would lead to ruin — it didn’t — and the networks are just as opposed to a federal appeals court ruling last year to let what they see as Aereo’s chronic, classic infringement continue. In the broadcasters’ brief asking the Supreme Court to reverse that decision, Aereo was described as “an entire business model premised on massive and unauthorized commercial exploitation of copyrighted works.”

As a matter of copyright law, television programs can be shown only by those who have that right or a license to do so. That’s why bars and hotels must pay a fee for the programming they show on their televisions. And broadcasters say that Aereo is similarly a middleman that should pay for what they consider a public performance.

Aereo was conceived in the belief that because the consumer is the one who is pushing the button to watch live or recorded programming, that transaction is one-to-one and not a public performance. That the DVR is in the cloud and the antenna is remote is, in Aereo’s view, beside the point. In its arguments, Aereo embraces both the past (consumers have been using VCRs and then DVRs to record programming for decades) and the future (everything from Dropbox to Google Drive lets the consumer store what he wishes without any liability on the provider’s part).

Speaking on the phone on Thursday, Mr. Kanojia said he liked his facts but had no idea how things would play out.

“It’s a bit of a coin flip,” he said.

A lot of people will be watching to see how that coin lands, less because of what it means for Aereo specifically than what it portends for the broader media ecosystem. A decision is expected this summer.

I spent time in Hollywood last week chatting with various executives, and Aereo was described variously as “a fencing operation peddling stolen goods” and “thieves masquerading as innovators.” That’s about as friendly as it got: Aereo may be small — Mr. Diller called it “a pimple” — but it represents something mighty important. If Aereo is allowed to store and transmit signals without payment, the television industry will be profoundly reconfigured.

Should Aereo win, the $3.3 billion in retransmission fees broadcasters now receive from cable companies will be in doubt, and in response, broadcasters might just stop broadcasting and become cable networks. Right now, broadcast signals reach about 117 million American homes, but cable penetration is so mature that approximately 102 million homes are now wired.

The vast majority of people already get their television, including the broadcast networks, through their cable or satellite service. If Aereo wins, networks could let the antennas go dark and tuck themselves inside the cable and satellite universe, where, like AMC or ESPN, they would then be paid programming fees.

That would be bloody. There are over 200 local broadcast affiliates, all of which depend on networks for a share of revenue and much of their programming. Local news, which is part of their mandate as public broadcasters, might wither, and as existing contracts expired, there would be a brawl for lucrative local advertising. Companies that own large groups of local stations like the Tribune Company and the Sinclair Broadcast Group would suddenly find themselves in possession of a much diminished collection of assets.

When Aereo began operating two years ago, Mr. Diller, who built a fourth broadcast network at Fox before joining the insurgency, told me he knew he was kicking the hornet’s nest.

“I understand that any existing business tries to protect its borders from any and all incursions,” he said.

Mr. Kanojia said that if Aereo loses, “There is no plan B.” Regardless of the outcome, he says he believes that the television business is built on unsustainable margins and that many consumers will eventually use Internet services to build their own menu of shows. He’s right about that.

Don’t take my word for it. A report released last week by Experian suggested a firm link between the use of streaming services and cord-cutting. Last year, 18.1 percent of households with a Netflix or Hulu account cut the cord, while in 2010, that number was just 12.7 percent. The report indicated that the percentage of cord-cutting households in the United States increased to 6.5 percent in 2013 from 4.5 percent in 2010. And it is not the rise of mobile devices that is driving the cord-cutting, but the availability of a wide range of shows on the good old television set without the need for a cable subscription — through services like Aereo.

No business is immune to disruption, but the television space is particularly ripe. Some of the players have looked down the road and realized they can’t stop what’s coming, which is partly why Comcast is so eager to merge its way beyond cable and dominate broadband.

Aereo, like the Sony Betamax, might end up as a footnote to television history. Once consumers decide — be they cord cutters or so-called cord nevers — it doesn’t really matter what programmers, broadcasters or even the Supreme Court decides.
http://www.nytimes.com/2014/04/21/bu...vs-future.html





Supreme Court Justices Criticize Aereo, But Worry About Overbroad Ruling

Judges don't want decision to overreach into cloud
Ira Teinowitz

Several of the Supreme Court justices who heard arguments in the broadcast networks’ case against the online TV site Aereo criticized the company's business model but worried about passing down a ruling that could affect other technology.

Broadcasters contend that Aereo — a New York-based company with fewer than a million subscribers — is stealing their signals. The company uses thousands of small antennas to relay broadcast signals to subscribers’ laptops, tablets and other devices, collecting a small fee.

While the justices expressed skepticism that Aereo's technology violates copyrights, they also questioned whether the company was trying to take advantage of legal loopholes.

“If your model is correct, can't you just put your antenna up and then do it? I mean, there's no technological reason for you to have 10,000 dime-sized antenna, other than to get around the copyright laws,” said Chief Justice John Roberts.

Justice Ruth Bader Ginsburg noted that Aereo pays networks nothing to pass their signals on to consumers.

But the justices worried that any decision could have a much larger reach than intended, affecting consumers’ ability to remotely store content like TV shows and songs. (Apple's iCloud is one of the best-known cloud computing services.)

It's always difficult to say if justices’ questions indicate which way they're leaning, because they often play devil's advocate while looking for holes in litigants’ arguments.

“I have to understand the effect it will have on other technologies,” said Roberts, who repeatedly questioned how the court could write an opinion that would affect only Aereo, and not companies like Apple.

“I don't know how to get out of it,” added Justice Stephen Breyer.

Justices Elena Kagan and Sonia Sotomayor also expressed concerns about the impact on the cloud. They asked more questions of the networks’ attorney, Paul D. Clement, than they did of Aereo's attorney, David C. Frederick.

“They've thrown up a series of problems this would create for the cloud,” Breyer told Clement, referring to Aereo's legal team.

Kagan questioned the difference between using an antenna or DVR at home and placing those devices elsewhere. That got to the heart of Aereo's contention that it is merely expanding on existing technology like antennas, DVRs and VCRs.

Networks contend that Aereo is more like a cable or satellite company than a lone viewer, watching free TV with an antenna and recording it with a VCR. Cable and satellite companies spend billions in retransmission fees to air broadcasters’ shows, and pass those costs on to their subscribers.

Aereo, meanwhile, charges subscribers $8 to $12 a month — and pays nothing to the broadcasters.

Several precedents came into play Tuesday: In 1984, the high court ruled 5-4 that the sale of Betamax machines did not constitute contributory copyright infringement.

In the 2008 Cablevision decision, the U.S. Court of Appeals for the Second Circuit found that DVR recordings don't infringe on copyright holders’ protections against their works being publicly performed without compensation. (Bars, for example, are supposed to pay to air a show.)

The court ruled that watching shows on DVR is not a public performance because the person who makes the recording is also the one who watches it. The right of individual customers to play their own songs and shows has become a cornerstone of cloud computing.
http://www.thewrap.com/supreme-court...oud-computing/





The Cloud Industry Needs Aereo to Win. But Consumers Need Something Better.
Farhad Manjoo

The best way to think about Aereo, the company at the center of this week’s Supreme Court battle over the future of computing, is as an example of legal performance art. Aereo is based entirely on a legalistic leap of faith: If it’s legal to set up an antenna and record a TV show at your own house, which it is, shouldn’t it also be legal to rent an antenna and server space at a big data center, and then stream the show over the Internet to your computer, tablet or set-top box?

It’s a clever argument, one that highlights the extreme lengths to which tech companies go to avoid copyright restrictions. The argument is designed to show off the similarities between Aereo and more traditional cloud services like Dropbox — services that the Supreme Court would have to strive mightily to exclude from any ruling against Aereo.

But for all its cleverness, Aereo is also a gimmick. Aereo is a for-pay, middleman service whose sole function is to let you stream TV shows that are already freely available over the air. For consumers, the best outcome of this case is for Aereo to win, and then scare broadcasters into streaming their content directly to users, either for free or for a lower price than Aereo charges.

Aereo is based on a loophole. To offer TV shows over the Internet, most streaming services like Netflix or Hulu pay licensing fees to studios. But licensing is expensive and restrictive; some of the best content, like the Olympics or the Super Bowl, isn’t even available for licensing.

In most parts of the country, though, broadcast networks — ABC, NBC, CBS, FOX, PBS and others — send their signals over the air, free to anyone who has an antenna. Aereo saw these over-the-air signals as an opportunity. It created a data center that houses thousands of antennas, each about the size of a dime. The company assigns a single antenna to a single subscriber. When you sign up for Aereo — which is available in more than a dozen cities, including New York, starting at $8 a month — the company rents you an antenna and assigns some server space to you.

If you have Aereo, then, you can watch the Super Bowl or the Olympics on your computer or your phone. Because it records each show for each customer separately, the firm maintains that it is simply performing a service in a data center that a customer is allowed to do anyway.
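
To make that one-to-one design concrete, here is a minimal sketch in Python of the arrangement as described above. Every name in it (AntennaFarm, sign_up and so on) is hypothetical; this illustrates the reported architecture, not Aereo's actual software.

class AntennaFarm:
    def __init__(self, num_antennas):
        self.free = list(range(num_antennas))   # pool of dime-sized antennas
        self.assigned = {}                      # subscriber -> dedicated antenna
        self.recordings = {}                    # (subscriber, show) -> private copy

    def sign_up(self, subscriber):
        # Each subscriber rents exactly one antenna; it is never shared.
        self.assigned[subscriber] = self.free.pop()

    def record(self, subscriber, show, signal):
        # The show is captured with the subscriber's own antenna and saved as
        # that subscriber's own copy -- the basis of the one-to-one argument.
        assert subscriber in self.assigned
        self.recordings[(subscriber, show)] = signal

    def watch(self, subscriber, show):
        # Playback returns only the copy this subscriber made.
        return self.recordings[(subscriber, show)]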

It’s a clever legal maneuver precisely because it highlights the similarities between Aereo and the rest of the cloud-computing industry. In briefs they filed with the Supreme Court, many tech companies argued that what Aereo does isn’t very different from what Dropbox and other cloud companies do: It creates space on a server for each customer to store content. During oral arguments at the court this week, Aereo pressed this point. “The cloud computing industry is freaked out about this case,” David C. Frederick, Aereo’s lawyer, said, arguing that a ruling against Aereo would expose cloud companies to “potentially ruinous liability.”


This seems logical. After all, if the court rules that you can’t store on a server a TV show that you, alone, have recorded and are allowed to access, then what if you rip your CDs and store the files on Dropbox — wouldn’t that violate the same rule? The justices appeared to be wrestling with this quandary. “What disturbs me on the other side is I don’t understand what the decision for you or against you when I write it is going to do to all kinds of other technologies,” Justice Stephen G. Breyer said.

But if Aereo’s win is important for the cloud industry, the service itself is more dubious. The technology at the heart of Aereo is fantastically inefficient, and I can’t see how it can be a net gain for consumers in the long run.

For one thing, Aereo can’t benefit from one of the primary factors that makes the Internet such an amazing system for distributing data — the idea that all digital copies are identical. Most cloud companies take advantage of this fact. For instance, Netflix does not need to store a distinct copy of “Breaking Bad” for each of its subscribers; if it did, its server costs would be unsustainable. Instead, because every digital copy is the same, Netflix can keep just a handful of copies of “Breaking Bad” in a few data centers around the country. Then it serves up one copy to millions of customers on demand.

Dropbox, too, benefits from the idea that digital data is fungible. If you try to store a huge file that has already been uploaded by another user, Dropbox detects the duplicate file and skips your upload. It then simply makes a note that your file is the same as one it already has. This saves Dropbox space on its servers, and it saves you uploading time.

But to sustain the legal gimmick that it is creating separate copies of each show for each of its users, Aereo can’t benefit from the huge efficiency gains afforded by the Internet. Instead, if thousands of Aereo users choose to record “Modern Family,” the company has to keep thousands of distinct — though technically identical — copies.
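
The storage contrast is easy to see in code. Below is a toy sketch, assuming hash-based deduplication of the kind the article attributes to Dropbox, set against the per-user copies Aereo's legal theory requires; the classes and names are invented for illustration, not either company's implementation.

import hashlib

class DedupStore:
    """One physical copy per unique file (the content-addressed approach)."""
    def __init__(self):
        self.blobs = {}   # sha256 digest -> file bytes
        self.files = {}   # (user, name) -> digest

    def put(self, user, name, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:        # already seen these bytes? skip the upload
            self.blobs[digest] = data
        self.files[(user, name)] = digest   # everyone else just points at the blob

class PerUserStore:
    """One physical copy per user, as Aereo's legal theory requires."""
    def __init__(self):
        self.files = {}   # (user, name) -> a full, private, duplicated copy

    def put(self, user, name, data):
        self.files[(user, name)] = data

# If 10,000 users save the same 1 GB show, DedupStore holds about 1 GB;
# PerUserStore holds about 10,000 GB of technically identical bits.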

The company argues that its system is still more efficient than the current regime, in which we all record a separate copy on separate TVs in all of our houses, copies that can’t easily be streamed from device to device. That may be true, but it’s such a paltry gain that I’m not sure we should be celebrating it.

I’m happy that Aereo has been willing to spend its vast resources to make an elaborate legal point. Its win would be crucial for much of the tech industry.

But there’s a better way for consumers to get cloud access to TV shows than anything like Aereo: Networks could stream their content over the Internet themselves, for fees far lower than what Aereo charges. And they ought to.

Networks would be able to provide this service more efficiently; because they own the content, they wouldn’t need to keep millions of different copies of identical shows. And they would be able to pursue novel business models with the content. For instance, unlike Aereo, they could sell ads on the streams, thus helping to lower subscriber fees.

But why would TV networks do this? Wouldn’t streaming their channels throw their current businesses into disarray? Probably. But if Aereo wins and begins to thrive, networks may not have much of a choice.
http://bits.blogs.nytimes.com/2014/0...ething-better/





AT&T Looks to Expand High-Speed Fiber Network to 21 Cities
Marina Lopes

AT&T Inc said on Monday it expects to expand its ultra-fast fiber network and TV services to up to 21 U.S. cities, including Chicago and Atlanta.

AT&T, which is fighting rivals such as Google Inc as well as cable companies with its fiber-based product, is considering providing broadband Internet speeds of up to 1 gigabit per second and its U-verse television service in cities, including Chicago, Los Angeles and Miami.

Before the company can enter these markets, it must make agreements with local leaders in each city.

The services are currently available in Austin, Texas and some surrounding communities, and will be rolled out in parts of Dallas this summer, the company said.

AT&T also said it may consider expanding its reach to 100 cities eventually.

Earlier this month, AT&T announced it was in discussions with North Carolina Next Generation Network to bring U-verse with high-speed internet to North Carolina.

U-verse launched in 2006 and currently has 10.7 million combined Internet and TV customers.

(With additional reporting by Liana B. Baker; Editing by Marguerita Choy)
http://www.reuters.com/article/2014/...A3K0ZK20140421





AT&T's 'Expansion' of 1 Gbps to 100 Cities is a Big, Fat Bluff
Karl Bode

AT&T today announced that the company is "eyeing" 100 potential target cities as locations they may deploy faster 1 Gbps "Gigapower" service. According to the company's press release, this "major initiative" will target 100 "candidate cities and municipalities" across 21 metropolitan areas nationwide. Those users could then get AT&T's $70-$100 per month 1 Gbps service, currently only available in a very small portion of Austin, Texas.

Before you get too excited, you need to understand that this is a bluff of immense proportion. It's what I affectionately refer to as "fiber to the press release."

Ever since Google Fiber came on the scene, AT&T's response has been highly theatrical in nature. What AT&T would have the press and public believe is that they're engaged in a massive new deployment of fiber to the home service. What's actually happening is that AT&T is upgrading a few high-end developments where fiber was already in the ground (these users were previously capped at DSL speeds) and pretending it's a serious expansion of fixed-line broadband.

It's not. At the same time AT&T is promising a massive expansion in fixed line broadband, they're telling investors they aren't spending much money on the initiative, because they aren't. AT&T's focus is on more profitable wireless. "Gigapower" is a show pony designed to help the company pretend they're not being outmaneuvered in their core business by a search engine company.

The press release admits as much if you look carefully. "This expanded fiber build is not expected to impact AT&T’s capital investment plans for 2014," notes AT&T. That's what they noted last year, and will surely say the same thing next year. In fact, AT&T's been reducing their fixed-line CAPEX each year. What kind of major 1 Gbps broadband expansion doesn't hit your CAPEX? One that's either very tiny, or simply doesn't exist.

"Similar to previously announced metro area selections in Austin and Dallas and advanced discussions in Raleigh-Durham and Winston-Salem, communities that have suitable network facilities, and show the strongest investment cases based on anticipated demand and the most receptive policies will influence these future selections and coverage maps within selected areas," notes the company.

In short, if your city plays nice and gives AT&T what they want legislatively (namely, gutting consumer protections requiring they keep serving the DSL users they don't want, so they can focus on more profitable wireless), you'll get 1 Gbps fiber to a few high-end developments and apartment buildings. As an added bonus, your local politician can hold a lovely ribbon-cutting ceremony where he or she pretends to be encouraging the broadband networks of tomorrow (while in reality doing the exact opposite).

You can understand AT&T's executive and marketing logic to a degree. Google received an absolute blast of positive press coverage for their recent announcement that they might deploy Google Fiber to 34 cities in 9 major metro areas. Countless news outlets didn't understand what Google was even announcing, and stated breathlessly that all these cities would be getting fiber. The free marketing Google receives for what really is a very small actual deployment is staggering.

Granted, while there is a heavy theatrical component to Google Fiber and its few thousand actual users, Google's interest really is in driving competition and helping cities build business plans -- even if Google doesn't deploy there themselves. In contrast, AT&T's interest is in pretending they're not a lumbering duopolist failing to keep pace, home to tens of millions of annoyed users on slow, capped and expensive DSL lines the company has no intention of upgrading anytime soon.

We're in the heart of the age of "fiber to the press release" and 1 Gbps mania, where all you need to do is simply mention 1 Gbps and you get a ticker-tape parade and a statue in the town square without having to deliver a single byte. AT&T's certainly counting on that reaction from the press and public. Look for specifics as to how many users will actually get 1 Gbps "Gigapower" service and at what cost to AT&T, and you'd be hard-pressed to find any whatsoever. That's because "Gigapower" is about 10% actual broadband, and 90% finely-manicured bullshit.
http://www.dslreports.com/shownews/A...t-Bluff-128628





In Policy Shift, F.C.C. Will Allow a Web Fast Lane
Edward Wyatt

The principle that all Internet content should be treated equally as it flows through cables and pipes to consumers looks all but dead.

Companies like Disney, Google or Netflix will be allowed to pay Internet service providers like Comcast and Verizon for special, faster lanes to send video and other content to their customers under new rules to be proposed by the Federal Communications Commission, the agency said on Wednesday.

The proposed rules are a change for the agency on what is known as net neutrality — the idea that no providers of legal Internet content should be discriminated against in providing their offerings to consumers and that users should have equal access to see any legal content they choose.

The proposal comes three months after a federal appeals court struck down, for the second time, agency rules intended to guarantee a free and open Internet.

The rules are also likely to eventually raise prices as the likes of Disney and Netflix pass on to customers whatever they pay for the speedier lanes, which are the digital equivalent of an uncongested car pool lane on a busy freeway.

Consumer groups immediately attacked the proposal, saying that not only would costs rise, but that big, rich companies with the money to pay large fees to Internet service providers would be favored over small start-ups with innovative business models — stifling the birth of the next Facebook or Twitter.

“If it goes forward, this capitulation will represent Washington at its worst,” said Todd O’Boyle, program director of Common Cause’s Media and Democracy Reform Initiative. “Americans were promised, and deserve, an Internet that is free of toll roads, fast lanes and censorship — corporate or governmental.”

If the new rules deliver anything less, he added, “that would be a betrayal.”

Broadband companies have pushed for the right to build special lanes. Verizon said during appeals court arguments that if it could make those kinds of deals, it would.

Tom Wheeler, the F.C.C. chairman, defended the agency’s plans, saying speculation that the F.C.C. was “gutting the open Internet rule” is “flat out wrong.” He added: “There is no ‘turnaround in policy.’ The same rules will apply to all Internet content. As with the original open Internet rules, and consistent with the court’s decision, behavior that harms consumers or competition will not be permitted.”

The providers would have to disclose how they treat all Internet traffic and on what terms they offer more rapid lanes, and would be required to act “in a commercially reasonable manner,” agency officials said. That standard would be fleshed out as the agency seeks public comment.

The proposed rules will also require Internet service providers to disclose whether in assigning faster lanes, they have favored their affiliated companies that provide content. That could have significant implications for Comcast, the nation’s largest provider of high-speed Internet service, because it owns NBCUniversal.

Also, Comcast is asking for government permission to take over Time Warner Cable, the third-largest broadband provider, and opponents of the merger say that expanding its reach as a broadband company will give Comcast more incentive to favor its own content over that of unaffiliated programmers.

Mr. Wheeler has signaled for months that the federal appeals court decision striking down the earlier rules could force the commission to loosen its definitions of what constitutes an open Internet.

Those earlier rules effectively barred Internet service providers from making deals with services like Amazon or Netflix to allow those companies to pay to stream their products to viewers through a faster, express lane on the web. The court said that because the Internet is not considered a utility under federal law, it was not subject to that sort of regulation.

Opponents of the new proposed rules said they appeared to be full of holes, particularly in seeking to impose the “commercially reasonable” standard.

“The very essence of a ‘commercial reasonableness’ standard is discrimination,” Michael Weinberg, a vice president at Public Knowledge, a consumer advocacy group, said in a statement. “And the core of net neutrality is nondiscrimination.”

He added that the commission and courts had acknowledged that it could be commercially reasonable for a broadband provider to charge a content company higher rates for access to consumers because that company’s service was competitively threatening.

“This standard allows Internet service providers to impose a new price of entry for innovation on the Internet,” Mr. Weinberg said.

Consumers can pay Internet service providers for a higher-speed Internet connection. But whatever speed they choose, under the new rules, they might get some content faster, depending on what the content provider has paid for.

The fight over net neutrality has gone on for at least a decade, and is likely to continue at least until the F.C.C. settles on new rules. Each of the last two times the agency has written rules, one of the Internet service providers has taken it to court to have the rules invalidated.

If anything, lobbying over the details of the new net neutrality standard is likely to increase now that the federal court has provided a framework for the F.C.C. to work from as it fills in the specifics of its regulatory authority.

The proposed rules, drafted by Mr. Wheeler and his staff, will be circulated to the agency’s other four commissioners beginning on Thursday and will be released for public comment on May 15. They are likely to be put to a vote by the full commission by the end of the year.

News of the F.C.C. proposal was first reported online by The Wall Street Journal.
http://www.nytimes.com/2014/04/24/te...ity-rules.html





Brazilian Congress Passes Internet Bill of Rights
Anthony Boadle

Brazil's Senate unanimously approved groundbreaking legislation on Tuesday that guarantees equal access to the Internet and protects the privacy of Brazilian users in the wake of U.S. spying revelations.

President Dilma Rousseff, who was the target of U.S. espionage according to documents leaked by former NSA analyst Edward Snowden, plans to sign the bill into law. She will present it on Wednesday at a global conference on the future of the Internet, her office said in a blog.

The legislation, dubbed Brazil's "Internet Constitution," has been hailed by experts, such as the British physicist and World Wide Web inventor Tim Berners-Lee, for balancing the rights and duties of users, governments and corporations while ensuring the Internet continues to be an open and decentralized network.

To guarantee passage of the bill, Rousseff's government had to drop a contentious provision that would have forced global Internet companies to store data on their Brazilian users on data centre servers inside the country.

The rule was added to the bill after revelations last year that the U.S. National Security Agency had spied on the personal Internet communications of Brazilians, including those of Rousseff among other world leaders.

Instead, the bill says companies such as Google Inc and Facebook Inc will be subject to Brazil's laws and courts in cases involving information on Brazilians, even if the data is stored on servers abroad.

The government refused to drop a net neutrality provision that was fiercely opposed by telecom companies because it bars them from charging higher rates for access to content that uses more bandwidth, such as video streaming and voice services like Skype.

The legislation protects freedom of expression and information, establishing that service providers will not be liable for content published by users, but they must comply with court orders to remove offensive or libellous material.

The bill limits the gathering and use of metadata on Internet users in Brazil.

Following the spying revelations by Snowden, including allegations that the NSA secretly collected data stored on servers by Internet companies such as Google and Yahoo Inc, Brazil sought to force them to store data on Brazilian servers in the country.

Internet companies complained that would push up their costs and create barriers to the free flow of information.

The revelations of NSA espionage using powerful surveillance programs upset relations between the United States and Brazil and led Rousseff to cancel a state visit to Washington in October and denounce massive electronic surveillance of the Internet in a speech to the U.N. General Assembly.

Rousseff and German Chancellor Angela Merkel, another leader that the NSA was alleged to have spied upon, have led international efforts to limit digital espionage over the Internet.

(Reporting by Anthony Boadle; Editing by Cynthia Osterman)
http://uk.reuters.com/article/2014/0...edName=topNews





Are You Ready for a Driver’s License for the Internet?

The White House is leading efforts for a new authentication system that would have users prove their identity with a single ID across the Web. And states are starting to pilot the system.
Colin Wood

Government is raising its expectations. While it hasn’t been uncommon in the past for governments to consider money wasted by fraud, mismanagement or inefficiency as an expense of doing business, times are changing. New technologies are preventing such waste and initiating cultural change in the public sector. At the Florida Department of Children and Families (DCF), that transformation is being realized through the adoption of an online authentication tool the agency is using to ensure that the benefits it issues, like food assistance, are going to the right people.

Such incarnations of online authentication technology are sprouting up in state government agencies around the country, led by a White House vision of a new, central form of identification, what some are calling “a driver’s license for the Internet.”

The DCF reported that in 2013 it saved about $14.7 million through the use of an online authentication tool, with an initial investment of about $1 million and a total contract of just under $3 million. The tool and subscription service were purchased from LexisNexis and operate similarly to the systems used by financial institutions to verify the identity of loan or mortgage applicants. Now when people apply for various programs online, they are prompted with identity verification questions about their previous employers or the names of streets where they lived.
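
A minimal sketch of how such knowledge-based verification might work, in Python. The record fields and pass threshold are invented for the example; the real LexisNexis service draws on credit and public-records data rather than a hard-coded dictionary.

RECORD = {                       # invented example data on file for an applicant
    "previous_employer": "acme corp",
    "street_lived_on": "elm street",
    "auto_lender_2009": "first bank",
}

def kba_verify(record, answers, required=2):
    # Count how many of the applicant's answers match the file;
    # the identity is treated as verified if enough of them do.
    correct = sum(1 for field, given in answers.items()
                  if record.get(field, "").lower() == given.lower())
    return correct >= required

print(kba_verify(RECORD, {"previous_employer": "Acme Corp",
                          "street_lived_on": "Oak Street",
                          "auto_lender_2009": "First Bank"}))   # True: 2 of 3 match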

The DCF says the technology is saving so much money because it saves staff the time of verifying identities manually, and even better, there’s been a reduction in cases of identity fraud.

The agency began its move to online services in 2012, said Andrew McClenahan, director of the Office of Public Benefits Integrity at DCF. “It’s changing the way that people are looking at public assistance fraud and how to maintain the integrity of these public benefits systems," he said. "[It’s] changing the mindset that fraud is no longer considered a cost of doing business. These modernizations, data analysis and predictive modeling and now this customer authentication tool that works with identity verification, these are all realities that we as a state and other states are having to face, and I think it’s here to stay.”

The move away from authenticating people in person began two years ago when the state started centralizing its physical offices to one per county. That move, McClenahan said, prompted more online usage, but also introduced a new problem: The state had no reliable way of verifying identity online and the result was a lot of waste – wasted time and wasted benefits issued to illegitimate applicants. So the agency began piloting the system in Orlando, and in August 2013, the system was spread throughout the state.

It was important to get away from the old model, McClenahan said, and it’s easy to see why. Fraud and abuse of government services in general has been common for years, and especially so in Florida. In 2007, federal officials randomly visited 1,600 businesses in Miami that had billed for “durable medical equipment” and found that 481 of those businesses didn’t even exist, accounting for $237 million of fraud in just one year.

In 2012, the attorney general announced that the Medicare Fraud Strike Force had arrested more than 100 people, including doctors, nurses and other health professionals, accounting for more than $452 million in fraud across seven cities.

These instances of fraud, enabled for decades by a lack of government oversight or technological wherewithal, have cost taxpayers untold sums. In 2010, the Government Accountability Office released a report in which it identified $48 billion in “improper payments” for the previous year.

But, of course, fraud doesn’t only happen in Florida. In 2011, the White House started looking at the issue differently when it released the National Strategy for Trusted Identities in Cyberspace (NSTIC). The program outlines a framework for an online identity verification system that would attempt to reduce fraud, while creating a convenient way, federal officials say, for Internet users to prove their identity, without the need to remember passwords. The New York Times called it “a driver’s license for the Internet.” Even better, the White House reported that such a system would improve the Web economy by bolstering public confidence in security and authentication of online businesses and services.

In fall 2013, the National Institute of Standards and Technology (NIST), the agency overseeing the program, awarded $1.3 million and $1.1 million in pilot funding to Michigan and Pennsylvania, respectively. Rather than develop entirely new systems or even some form of comprehensive Internetwide identification system, the implementations in each state look at how existing systems can be used to simplify authentication across departments. These pilots are just the beginning – NIST is awarding pilot funding to 10 additional organizations, which will be announced in August.

Pennsylvania is developing an implementation that would allow users to operate a single identity across state departments, rather than requiring users to manage usernames and passwords for each department, which is the case today. In a pilot scheduled to run from this spring through September, Deloitte will bridge various departments and agencies, each of which would require varying levels of authentication on behalf of the user, according to GCN. For example, if a user only wants a fishing license, he could simply authenticate his identity at a low level, but if he later wanted to use that same online ID for welfare benefits, he would need to raise the authentication level by providing more information in order to access those services. But he would only need one set of credentials to access any state service.
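
A minimal sketch of that tiered-assurance idea, with hypothetical service names and levels (Pennsylvania's actual scheme may differ):

REQUIRED_LEVEL = {"fishing_license": 1, "tax_account": 2, "welfare_benefits": 3}

class StateID:
    def __init__(self, username):
        self.username = username
        self.level = 0   # no identity proofing completed yet

    def step_up(self, new_level):
        # e.g. 1 = email confirmed, 2 = KBA quiz passed, 3 = documents checked
        self.level = max(self.level, new_level)

    def can_access(self, service):
        # One credential for every service, each gated by assurance level.
        return self.level >= REQUIRED_LEVEL[service]

user = StateID("jdoe")
user.step_up(1)
print(user.can_access("fishing_license"))    # True
print(user.can_access("welfare_benefits"))   # False until more proof is supplied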

In a pilot scheduled to run May to September, Michigan will use the funding to establish an online authentication system for residents who use its MI Bridges portal to access services like food and cash assistance programs, the same kinds of services for which Florida developed its authentication system.

Identity verification for MI Bridges is done manually today, using several different types of identity proofing to verify each applicant. For that reason, there's little fraud in that program, according to an agency spokesperson. However, reducing the work needed to verify the identity of an online user could save the agency money.

Michigan's project is expected to operate similarly to the system that was launched in Florida’s DCF, asking the user various questions similar to what might be seen during an online application for a mortgage or loan.

The success of the NSTIC pilots will be determined by analysis conducted by the nonprofit RTI International, funded with $300,000 from NIST. The organization will compare the efficacy of the new system with that of the old manual processes of identity verification. If the pilots are successful, they could end up being the first step toward a single set of standardized credentials that Internet users provide to prove who they are.

IDENTITY VERIFICATION FOR THE WEB

A single ID that can be used across the entire Internet is an idea that has been talked about for a long time, and since the 1980s, the technology world has known that the password model is inadequate, said technology analyst Rob Enderle. A single set of credentials that could be used to verify identity would be far superior to what's used today, he said, and the National Strategy for Trusted Identities in Cyberspace would lead the Internet toward that goal.

“Given that we don’t have that on the Web and there is a substantial amount of fraud and identity theft going to the core of it, having a validatable ID is, you would think, a very high priority,” Enderle said. “It should be a higher priority than it is.”

This isn’t just a good idea, Enderle said, it’s a necessity. “If you can’t create a method to ensure a person is who they say they are, then you really can’t secure bank accounts, identities, anything that’s done on the Web,” he said. “Moving to something else would seem to be decades overdue.”

Though the White House created the program to begin research around such a system, the government is generally not good at developing these kinds of technologies or working within a fast timeframe, Enderle said – a successful technology like this needs to come from the private sector.

“It has to be driven by the market. Remember, we were supposed to be on the metric standard decades ago and we aren’t,” he said. “There have to be some penalties involved for not doing it. I think after a couple major breaches where the liability is passed to the organization that didn’t properly assure the identities of the people that were accessing it, that motivation will probably drop into place.”

The technology for this is here, Gartner analyst Avivah Litan said, it’s just a matter of getting the market properly aligned. “People have been talking about it for years,” Litan said. “The main issue is you have to get identity providers standing behind it and backing up the identities, and you have to solve the business model. In other words, if they get the identity wrong, who’s liable? It’s a great concept, but it hasn’t taken place because no one’s willing to be the identity provider or issue the identity. It’s not a technology issue, it’s a business issue.”

Proposed legislation in the United Kingdom shows that the market is demanding better authentication online, not just to curtail fraud, but to restrict access to certain content. The proposed law would require that websites hosting adult content take better measures of authenticating age than just using the honor system. The Children’s Online Privacy Protection Act is the existing legislation that requires U.S. websites hosting adult content to require the user to enter an adult’s age before proceeding, a standard that websites in other countries also have adopted. But the problem is that it simply doesn’t keep young users out. A quick lie is all that’s needed to proceed. The thinking behind the proposed legislation is that the rules that apply offline should also apply online.

PRIVACY CONCERNS

Not everyone thinks a driver’s license for the Internet is a great idea. Lee Tien, senior staff attorney with the Electronic Frontier Foundation, is skeptical whether the government’s main motivation with such a program would even be fraud prevention – and not tracking.

“We think it’s a terrible idea,” Tien said. “The main substantive issue is that much of what we do on the Internet is plain old speech: writing comments, posting on blogs or whatever. And one of the things about speech in the United States, especially under the First Amendment of the Constitution, is that you have a right to speak anonymously. The EFF has long believed that it’s really important to preserve and protect that right to speak anonymously on the Internet. Any mandatory type of ID online runs really directly counter to that.”

Even a voluntary online ID could be problematic, Tien said. If the ID became popular, it could still become a de facto requirement that people would need to access a variety of services, and the result, again, would be loss of privacy and anonymity. The thing that’s unclear about such a solution, he said, is how this form of authentication would prevent various types of fraud in a way that others cannot. If there is a difference, Tien said he doesn’t know what it is.

“One of the great things about modern cryptography is that if it’s implemented well, you can have highly secure transactions, and you can have cryptographic proof for verification as to whether or not a person is or isn’t who they represent themselves to be in a mathematically secure manner,” he said. “A lot of times the issue is not fraud. The issue for government is that they want to track, regardless of fraud.”
http://www.govtech.com/security/Driv...-Internet.html





When the Internet Dies, Meet the Meshnet that Survives
Hal Hodson

If a crisis throws everyone offline, getting reconnected can be tougher than it looks, finds Hal Hodson during a test scenario in the heart of New York

IN THE heart of one of the most connected cities in the world, the internet has gone down. Amid the blackout, a group of New Yorkers scrambles to set up a local network and get vital information as the situation unfolds.

The scenario is part of a drill staged on 5 April in Manhattan by art and technology non-profit centre Eyebeam, and it mimics on a small scale the outage that affected New Yorkers after superstorm Sandy hit in 2012. The idea is to test whether communication networks built mostly on meagre battery power and mobile devices can be created rapidly when disaster strikes.

I'm a volunteer node in the network, and an ethernet cable runs over my shoulders into a wireless router in my left hand. It is powered by a battery in my jacket pocket.

Other routers link up with mine from a few hundred metres away. Soon I'm at the centre of a web of seven or eight nodes, connected through my smartphone. This meshnet, as it is called, is my only link to the others. The messages start coming in on my phone, flowing through an app called ChatSecure, built by the Guardian Project, a group of developers who design software for private communication. The app enables peer-to-peer communication between devices that are networked, but that don't necessarily have an internet connection.
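
ChatSecure itself is an XMPP client with end-to-end encryption; as a far simpler illustration of serverless messaging on a shared local network, here is a bare UDP broadcast chat in Python. It is a toy under those assumptions, with no encryption and no delivery guarantees, not the app's actual protocol.

import socket

PORT = 5005   # arbitrary port chosen for the toy chat

def send(message):
    # Broadcast to every node on the local network: no server, no internet.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(message.encode(), ("255.255.255.255", PORT))
    s.close()

def listen():
    # Every node runs this loop and prints whatever its neighbours send.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    while True:
        data, addr = s.recvfrom(4096)
        print(f"{addr[0]}: {data.decode()}")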

Building a mesh is fiddly and slow – even downloading ChatSecure involved using near field communication to establish a radio connection between nearby smartphones. I got my app from Hans-Christoph Steiner of the Guardian Project. In the absence of app stores like Google Play and the Apple store, other useful apps were made available through a hacked version of the app market software F-Droid, which let each person's phone act as a server so others could connect and download what they need.

One of the network engineers running the drill hurries up and down West 21st Street with a laptop, monitoring the signal strength between each router, adjusting our positions to optimise the network. As the mesh gets larger and people start sending chat messages and pictures back to base through the network, the router in my hand heats up. It's a cool feeling, though, to be exchanging data without the help of Comcast, Verizon or Google.

The Wi-Fi routers we are using and the software that binds them into a mesh are part of a networking toolkit called Commotion, developed by the Open Technology Institute (OTI) in Washington DC. This drill is not their first mesh though.

When superstorm Sandy hit the Brooklyn neighbourhood of Red Hook and the power went down, the OTI already had an experimental meshnet in place. The US Federal Emergency Management Agency managed to plug its high-bandwidth satellite uplink into it and instantly provided connectivity to the community and the Red Cross relief organisation.

"Immediately after the storm, people came to the Red Hook Initiative because they knew it was a place where they could get online and reach out to their families," says Georgia Bullen of the OTI. The institute added more routers to the network to boost its range over the following three weeks while the power was out.

Without as much time to work out the kinks, the Manhattan meshnet isn't as stable as Red Hook's – the tall buildings interfere with the Wi-Fi signal, making connectivity sketchy. But the situation mirrors the challenges a meshnet might face as people struggle to get it up and running in a crisis.

The experiment also shows that digital communication is possible without big technology companies and governments – something that could be handy if a regime decided to shut down the internet to quell dissent, as happened in Egypt in 2011. "Why do we need to go through the centralised, expensive communications system?" asks Ryan Gerety of the OTI. "Maybe we should go back to the state of the internet when a lot of the work was more local."
http://www.newscientist.com/article/...l#.U1OUR8etvy8





U.S. Promotes Network to Foil Digital Spying
Carlotta Gall and James Glanz

This Mediterranean fishing town, with its low, whitewashed buildings and sleepy port, is an unlikely spot for an experiment in rewiring the global Internet. But residents here have a surprising level of digital savvy and sharp memories of how the Internet can be misused.

A group of academics and computer enthusiasts who took part in the 2011 uprising in Tunisia that overthrew a government deeply invested in digital surveillance have helped their town become a test case for an alternative: a physically separate, local network made up of cleverly programmed antennas scattered about on rooftops.

The State Department provided $2.8 million to a team of American hackers, community activists and software geeks to develop the system, called a mesh network, as a way for dissidents abroad to communicate more freely and securely than they can on the open Internet. One target that is sure to start debate is Cuba; the United States Agency for International Development has pledged $4.3 million to create mesh networks there.

Even before the network in Sayada went live in December, pilot projects financed in part by the State Department proved that the mesh could serve residents in poor neighborhoods in Detroit and function as a digital lifeline in part of Brooklyn during Hurricane Sandy. But just like their overseas counterparts, Americans increasingly cite fears of government snooping in explaining the appeal of mesh networks.

“There’s so much invasion of privacy on the Internet,” said Michael Holbrook, of Detroit, referring to surveillance by the National Security Agency. “The N.S.A. is all over it,” he added. “Anything that can help to mitigate that policy, I’m all for it.”

Since this mesh project began three years ago, its original aim — foiling government spies — has become an awkward subject for United States government officials who backed the project and some of the technical experts carrying it out. That is because the N.S.A., as described in secret documents leaked by the former contractor Edward J. Snowden, has been shown to be a global Internet spy with few, if any, peers.

“Exactly at the time that the N.S.A. was developing the technology that Snowden has disclosed, the State Department was funding some of the most powerful digital tools to protect freedom of expression around the world,” said Ben Scott, a former State Department official who supported the financing and is now at a Berlin policy nonprofit, the New Responsibilities Foundation. “It is in my mind one of the great, unreported ironies of the first Obama administration.”

Sascha Meinrath, founder of the Open Technology Institute at the New America Foundation, a nonpartisan research group in Washington that has been developing the mesh system, said that his group has had “hundreds of queries from across the U.S.” since the Snowden leaks began. “People are asking us, how do they protect their privacy?” Mr. Meinrath said.

He is quick to point out that nothing is foolproof against determined surveillance, whether American or foreign. “The technology is built from the ground up to be resistant to outside snooping,” Mr. Meinrath said, “but it’s not a silver bullet.”

Even so, it is clear that the United States sees Sayada as a test of the concept before it is deployed in more contested zones. The United States Agency for International Development “awarded a three-year grant to the New America Foundation to make this platform available for adoption in Cuba,” said Matt Herrick, a spokesman for the agency, which recently stirred controversy by financing a Twitter-like social media site in Cuba. As for the mesh project, Mr. Herrick said, “We are reviewing the program, and it is not operational in Cuba at this time. No one has traveled to Cuba for this grant.”

Radio Free Asia, a United States government-financed nonprofit, has given $1 million to explore multiple overseas deployments. The countries involved have not been revealed, Mr. Meinrath said, adding, “I can’t talk about specific locations because lives could be at risk.”

The citizens of Sayada — population 14,000 — have been more focused on using the mesh for local governance and community building than on beating surveillance since President Zine el-Abidine Ben Ali was ousted in 2011, said Nizar Kerkeni, 39, a resident and professor of computer science at the nearby University of Monastir.

The mesh network blankets areas of town including the main street, the weekly market, the town hall and the train station, and users have access to a local server containing Wikipedia in French and Arabic, town street maps, 2,500 free books in French, and an app for secure chatting and file sharing.

The mesh is not linked to the wider Internet, Professor Kerkeni says — a point in his favor when he invites families to connect in this Muslim community. “Some parents ask me if it is safe to connect to the server,” he said. “They don’t allow their little children to connect to the Internet. I say, ‘I know it’s safe.’ ”

The mesh software, called Commotion, is a major redesign of systems that have been run for years by experts across Europe, said Mr. Meinrath, who is now director of the New America Foundation’s X-Lab. The idea, he said, was to take the technology out of what he calls “the geekosphere” and make it accessible to the public. (Commotion is available to download free from the project’s website.)

The open Internet is difficult to operate securely, in part because it acts as both a routing system for data and a sort of giant electronic phone book. The simplest action — say, calling up a website or sending an email — involves communicating with multiple servers and routers along numerous paths.
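
That “electronic phone book” is essentially the domain name system: before any page loads, the user’s machine asks remote servers it never sees to translate a name into a numeric address, and each such exchange is a point where traffic can be observed. A minimal sketch of that lookup step, using only Python’s standard library (the hostname is a placeholder):

```python
import socket

# The "phone book" step of an ordinary web request: resolving a
# human-readable hostname to the numeric IP address routers understand.
hostname = "example.org"  # placeholder hostname, for illustration only
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```

Every step like this — and the many router hops that follow — runs on infrastructure outside the user’s control, which is the exposure mesh networks try to sidestep.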

Mesh allows users in a local area, from a few square blocks to an entire city, to create a network that is physically distinct from the Internet. Wireless routers that cost $50 to $80 each are attached to rooftops, lashed to balconies and screwed to the ledges of apartment buildings. As long as each router has an unobstructed view to one or two others and the Commotion software has been set up, the routers automatically form a mesh network, said Ryan Gerety, a senior field analyst at the foundation.

“I just put my router up, and it will connect to anything it sees,” Ms. Gerety said. “You just keep putting up more routers.”
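
Commotion’s routing internals aren’t described here, but the basic behaviour the article sketches — each router links to whatever neighbors it can see, and traffic hops from router to router along the fewest-hop path available — can be illustrated with a toy model. The rooftop topology below is invented, and real mesh protocols (such as the OLSR family) are far more involved:

```python
from collections import deque

# Toy mesh: each router lists the neighbors it has line of sight to.
# This topology is invented purely for illustration.
links = {
    "community_center": ["apartment_a"],
    "apartment_a": ["community_center", "apartment_b"],
    "apartment_b": ["apartment_a", "corner_store"],
    "corner_store": ["apartment_b"],
}

def fewest_hops(start, goal):
    """Breadth-first search: the fewest-hop path between two routers."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in links[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # the mesh is partitioned; no route exists

# Traffic from the community center reaches the corner store in three hops.
print(fewest_hops("community_center", "corner_store"))
```

Each additional hop adds latency, which is the scaling concern raised below about how large a single mesh can grow.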

The same routers can provide access to anyone with a wireless device in range. The system’s simplicity seems undeniable: In Tunisia, Ms. Gerety and two colleagues worked with Professor Kerkeni to set up workshops with about 50 local residents. Over two weekends in December, 13 routers and a functioning mesh were put in place.

There are some drawbacks, as communications can slow when signals make multiple “hops” from one router to another, leading some Internet experts to question how large a single mesh could grow. Other experts counter that mesh networks in Europe, including some serving large sections of Berlin, Vienna and Barcelona, have thousands of routers, although they require highly technical skills.

Many of those networks were built to compensate for spotty or nonexistent coverage by corporate Internet providers. A similar motivation is at work in some Detroit neighborhoods, where the State Department financed trial runs of mesh networks as a low-cost gateway to wireless Internet access and as a community organizing tool.

“Access to information changes your life,” said Uri House, known as Heru, who has led the creation of a mesh he calls the Ecosphere in his struggling neighborhood.

But privacy issues also provoke intense discussion, particularly among groups that have historically been targets of racial and other profiling, said Diana J. Nucera, the community technology director at an organization called Allied Media Projects, which has already helped several Detroit neighborhoods put up mesh networks.

“I don’t want the N.S.A., the government, anyone to necessarily know how I think about something,” Mr. Holbrook, an African-American who is a Detroit social and political activist, said at a workshop led by Ms. Nucera.

Residents of Red Hook, Brooklyn, found that the mesh was useful during a natural disaster. Red Hook, which is dominated by public housing, was one of New York’s most exposed neighborhoods when Hurricane Sandy struck the coast in October 2012.

By chance, two activists, J. R. Baldwin and Tony Schloss, had been trying to create a mesh network in Red Hook. They had set up a router atop a community center run by the Red Hook Initiative, where Mr. Schloss worked as a technologist, and managed to connect it to a second one in an apartment overlooking Coffey Park, a local gathering point a few blocks away.

As the storm struck, standard Internet and cellphone networks collapsed across nearly all of Red Hook. But the mesh stayed up.

Hearing about the work, the Federal Emergency Management Agency installed a satellite Internet connection at the community center, using the mesh to spread Internet access to the park, a center for relief efforts. Residents relied on the mesh to get emergency updates and connect with people outside the city. About 25 routers are now in place, Mr. Schloss said.

Resilience could become the prime argument for mesh networks, with privacy as a bonus, said Jonathan Zittrain, a professor of law and computer science at Harvard and co-founder of the Berkman Center for Internet and Society. That is similar to the original Internet, before it was controlled by corporate hands and scoured by government spies, he said.

“It makes mesh more like the Internet than the Internet,” he said.
http://www.nytimes.com/2014/04/21/us...al-spying.html





Feds Beg Supreme Court to Let Them Search Phones Without a Warrant
Andy Greenberg

American law enforcement has long advocated for universal “kill switches” in cellphones to cut down on mobile device thefts. Now the Department of Justice argues that the same remote locking and data-wiping technology represents a threat to police investigations–one that means they should be free to search phones without a warrant.

In a brief filed to the U.S. Supreme Court yesterday in the case of alleged Boston drug dealer Brima Wurie, the Justice Department argues that police should be free to warrantlessly search cellphones taken from suspects immediately at the time of arrest, rather than risk letting the suspect or his associates lock or remotely wipe the phone before it can be searched.

The statement responds to briefs made to the court by the Center for Democracy and Technology and the Electronic Frontier Foundation arguing that warrantless searches of cellphones for evidence represent a serious violation of the suspect’s privacy beyond that of a usual warrantless search of a suspect’s pockets, backpack, or car interior.

“This Court should not deprive officers of an investigative tool that is increasingly important for preserving evidence of serious crimes based on purely imaginary fears that police officers will invoke their authority to review drug dealers’ ‘reading history,’ … ‘appointments with marital counselors,’ or armed robbers’ ‘apps to help smokers quit,’” reads the statement written by DOJ attorney Donald Verrilli Jr., responding to specific examples cited by the CDT.

At another point in the brief, Verrilli adds that “searching an arrestee’s cell phone immediately upon arrest is often critical to protecting evidence against concealment in a locked or encrypted phone or remote destruction.”

That last statement strikes civil liberties advocates as especially ironic, given law enforcement’s enthusiasm for new requirements that all cellphones implement exactly that sort of “kill switch” as a bulwark against a rising tide of cellphone thefts. The technology lets cellphone owners remotely wipe or encrypt the data on their phone if it’s stolen, or kill the phone entirely so it can’t be used by the thief.
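
The article doesn’t say how any given vendor implements this, but one common design is for the phone to periodically poll a server-side flag and erase itself once the owner reports the device stolen. A minimal sketch of that poll-and-wipe loop, with every function a labeled stand-in rather than a real vendor API:

```python
import time

def device_reported_stolen():
    # Stand-in for the real check: an HTTPS request asking the vendor's
    # server whether the owner has flagged this device as stolen.
    return False

def wipe_device():
    # Stand-in for the destructive step: erasing user data, or discarding
    # the storage encryption key so the data becomes unreadable.
    print("Wiping user data...")

def kill_switch_loop(poll_seconds=60, max_polls=3):
    """Poll until the stolen flag is set, then wipe. Once this fires,
    any evidence on the device is gone -- the scenario the DOJ cites."""
    for _ in range(max_polls):
        if device_reported_stolen():
            wipe_device()
            return
        time.sleep(poll_seconds)

kill_switch_loop(poll_seconds=1)  # short interval just for demonstration
```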

Law enforcement officials ranging from New York State Attorney General Eric Schneiderman to San Francisco District Attorney George Gascón to several major city police commissioners have all pushed for a bill introduced in February by Minnesota Senator Amy Klobuchar requiring kill switches in all smartphones.

“You have this weird scenario where law enforcement has demanded remote wiping be deployed,” says ACLU principal technologist Chris Soghoian, “and now they’re using that to also justify warrantless searches.”

In its brief, the Department of Justice describes those same wiping functions as dangerous tools for covering the tracks of criminals:

For example, in one California case, the members of a narcotics-trafficking organization “admitted that they had a security procedure, complete with an IT department, to immediately and remotely wipe all digital evidence from their cellphones.” And because remote-wiping capability is widely and freely available to all users of every major mobile communications platform, individuals have used the same tactic. That problem will only increase as mobile technology improves and criminals become more sophisticated.

But there are better ways to respond to the threat of evidence destruction on mobile phones than warrantlessly rifling through the devices’ data on the spot, argues Hanni Fakhoury, an attorney with the Electronic Frontier Foundation. He points out that it’s easier–and constitutionally safer–to simply remove the phone’s battery, turn it off, or put it in a Faraday cage that blocks all radio communications while the police wait for a judge to sign a warrant.

He adds that the Justice Department has yet to prove that the remote wiping problem is a real issue. “The government can point to no actual statistics that show this is a widespread problem,” says Fakhoury. “And the reality is that most people don’t even have remote wiping technology on their phone.”

In its brief, the Justice Department also argues that regardless of the Court’s decision on warrantless access to the entire phone, police should at least be granted access to phones’ call logs. The brief references an argument known as the “third party doctrine,” that individuals don’t have a “reasonable expectation of privacy” for information shared with a third party like the phone company, so it’s not covered by the Fourth Amendment prohibition on warrantless searches.

“If this Court were to draw a special exception from that settled doctrine for information on cell phones…it should at least preserve officers’ authority to review information in which the individual lacks a significant privacy interest, such as information that is also conveyed to telecommunications companies,” the brief reads.

But that argument ignores the fact that the specific data being searched in this case isn’t actually held by the phone companies, but stored on the device itself, argues the ACLU’s Soghoian. If it were held by the companies, cops wouldn’t need to search the phone in the first place. “What matters isn’t just the information, but where they get it from,” says Soghoian. “They’re saying that there are certain things on your phone that have less protections than others under the law, which is crazy.”

Read the Department of Justice’s full brief to the Supreme Court below:

Warrantless Cell Phone Search Government SCOTUS Brief by Justin Kelly
http://www.wired.com/2014/04/smartphone-kill-switch/





U.S. Judge Rules Search Warrants Extend to Overseas Email Accounts
Joseph Ax

Internet service providers must turn over customer emails and other digital content sought by U.S. government search warrants even when the information is stored overseas, a federal judge ruled on Friday.

In what appears to be the first court decision addressing the issue, U.S. Magistrate Judge James Francis in New York said Internet service providers such as Microsoft Corp or Google Inc cannot refuse to turn over customer information and emails stored in other countries when issued a valid search warrant from U.S. law enforcement agencies.

If U.S. agencies were required to coordinate efforts with foreign governments to secure such information, Francis said, "the burden on the government would be substantial, and law enforcement efforts would be seriously impeded."

The ruling underscores the debate over privacy and technology that has intensified since the disclosures by former National Security Agency contractor Edward Snowden about secret U.S. government efforts to collect huge amounts of consumer data around the world.

"It showcases an increasing trend that data can be anywhere," said Orin Kerr, a law professor at George Washington University who studies computer crime law.

The decision addressed a search warrant served on Microsoft for one of its customers whose emails are stored on a server in Dublin, Ireland.

In a statement, Microsoft said it challenged the warrant because the U.S. government should not be able to search the content of email held overseas.

"A U.S. prosecutor cannot obtain a U.S. warrant to search someone's home located in another country, just as another country's prosecutor cannot obtain a court order in her home country to conduct a search in the United States," the company said. "We think the same rules should apply in the online world, but the government disagrees."

The company plans to seek review of Francis' decision from a federal district judge.

Microsoft has recently emphasized to its customers abroad that their data should not be searchable by U.S. authorities and said it would fight such requests.

In a company blog post in December, Microsoft's general counsel, Brad Smith, said it would "assert available jurisdictional objections to legal demands when governments seek this type of customer content that is stored in another country."

The search warrant in question was approved by Francis in December and sought information associated with an email account for a Microsoft customer, including the customer's name, contents of all emails received and sent by the account, online session times and durations and any credit card number or bank account used for payment.

It is unclear which agency issued the warrant; the warrant and all related documents remain under seal.

Microsoft determined that the target account is hosted on a server in Dublin and asked Francis to throw out the request, citing U.S. law that search warrants do not extend overseas.

Francis agreed that this is true for "traditional" search warrants but not warrants seeking digital content, which are governed by a federal law called the Stored Communications Act.

A search warrant for email information, he said, is a "hybrid" order: obtained like a search warrant but executed like a subpoena for documents. Longstanding U.S. law holds that the recipient of a subpoena must provide the information sought, no matter where it is held, he said.

(Reporting by Joseph Ax; Editing by Noeleen Walder and Dan Grebler)
http://www.reuters.com/article/2014/...A3O24P20140425





Unnamed Phone Company Challenges NSA's Bulk Records Collection; FISC Says It's Perfectly Legal
Mike Masnick

Late on Friday, the FISA Court declassified a few documents, including a ruling on an until-now-secret attempt by a telco to challenge the latest FISC order demanding that the telco hand over metadata on all phone records under Section 215 of the Patriot Act. The telco's name is redacted, but it relied entirely on Judge Richard Leon's ruling from December, which found the bulk collection of phone records unconstitutional. Basically, the telco appears to have received the renewed Section 215 bulk collection order from FISC in January, and then challenged it on the basis of Judge Leon's ruling. The FISC shoots down that challenge, rejecting Judge Leon's reasoning and insisting that bulk collection of phone records is perfectly legal and constitutional.

Turning now to the merits of the Fourth Amendment issue, this Court finds Judge Leon's analysis in Klayman to be unpersuasive and concludes that it provides no basis for vacating or modifying the Secondary Order issued [REDACTED] January 3, 2014....

FISC, of course, immediately highlights the infamous Smith v. Maryland case that all defenders of bulk collection point to (and which Judge Leon said did not apply here, given the very different circumstances). But FISC still argues it applies, claiming that the two situations are "indistinguishable."

The information [REDACTED] produces to NSA as part of the telephony metadata program is indistinguishable in nature from the information at issue in Smith and its progeny. It includes dialed and incoming telephone numbers and other numbers pertaining to the placing or routing of calls, as well as the date, time and duration of the calls.

That seems disingenuous at best. You need to be willfully distorting the facts to argue that Smith and the bulk data collection programs are "indistinguishable" from one another. Smith involved information on a single person. The bulk collection covers everyone. In fact, Judge Leon himself went through a rather detailed explanation of what "distinguishes" the Smith case from the bulk collection, including the fact that while people may expect phone companies to occasionally provide information to law enforcement on suspects, they do not reasonably expect the telcos to do that on everything from every person.

FISC Judge Rosemary Collyer admits that Judge Leon explained why the two situations are wholly different, but simply disagrees on every distinguishing factor.

This Court respectfully disagrees with Judge Leon's reasons for deviating from Smith. To begin with, Judge Leon focused largely on what happens (and what could happen) to the telephony metadata after it has been acquired by NSA -- e.g., how long the metadata could be retained and how the Government could analyze it using sophisticated technology. Smith and the Supreme Court's other decisions applying the third-party disclosure principle make clear that this focus is misplaced in assessing whether the production of telephony metadata constitutes a search under the Fourth Amendment.

Smith reaffirmed that the third-party disclosure principle -- i.e., the rule that "a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties" ... applies regardless of the disclosing person's assumptions or expectations with respect to what will be done with the information following its disclosure.


From there, Judge Collyer goes on to restate the rather expansive view that, under the third party doctrine, you basically have absolutely no 4th Amendment rights whatsoever to anything held by a third party. There's also this fun tidbit, in which the ruling dismisses the "everyone's content" vs. "just one target's content" distinction:

The aggregated scope of the collection and the overall size of NSA's database are immaterial in assessing whether any person's reasonable expectation of privacy has been violated such that a search under the Fourth Amendment has occurred. To the extent that the quantity of the metadata collected by NSA is relevant, it is relevant only on a user-by-user basis. The pertinent question is whether a particular user has a reasonable expectation of privacy in the telephony metadata associated with his or her own calls. For purposes of determining whether a search under the Fourth Amendment has occurred, it is irrelevant that other users' information is also being collected and that the aggregate amount acquired is very large.

Basically, even though there is a very big distinguishing factor between collecting one targeted person's info and everyone's info, the FISA Court insists that this factor can be ignored, because you have to look at it in terms of each person's individual situation. That seems like a highly questionable analysis, and a very dangerous "cheat" to hide from the biggest factor that makes the 215 bulk collection orders so different from the situation in Smith.

Even more troubling is that the FISC seems to argue that phone metadata probably isn't that revealing anyway -- which is clearly bogus. It points to the case that Smith mainly relied upon, the Miller case involving bank records, and argues that phone metadata and bank records are basically the same:

It is far from clear to this Court that even years' worth of non-content call detail records would reveal more of the details about a telephone user's personal life than several months' worth of the same person's bank records. Indeed, bank records are likely to provide the Government directly with detailed information about a customer's personal life -- e.g., the names of the persons with whom the customer has had financial dealings, the sources of his income, the amounts of money he has spent and on what forms of goods and services, the charities and political organizations that he supports -- that the call detail records simply do not, by themselves, provide.

I find it equally questionable that bank record information isn't considered private, but even if we grant that premise, the rest of the argument makes little sense. In fact, much of the above information may not actually be supplied by bank records, and a person can often use cash to leave no such record. While both records may be quite revealing (beyond what I think the 4th Amendment should allow), given the choice, I'd argue that my phone records are a hell of a lot more revealing and private than my bank records.

FISC also rejects the idea that the Supreme Court's decision in the Jones case (arguing that GPS tracking may go too far) changes the analysis here. Judge Collyer points out that the opinions Judge Leon relied on were concurrences, not technically part of the majority ruling (he basically lumped together Justice Alito's and Justice Sotomayor's opinions, despite each taking slightly different approaches).

In the end, the FISC rejects the attempt by the unnamed telco, and basically says that Judge Leon's ruling is wrong. Kudos to the nameless telco for actually challenging the Section 215 order. Hopefully we'll find out soon which telco actually made a move to protect its users' privacy.
http://www.techdirt.com/articles/201...ly-legal.shtml





Low-Level Federal Judges Balking at Law Enforcement Requests for Electronic Evidence
Ann E. Marimow and Craig Timberg

Judges at the lowest levels of the federal judiciary are balking at sweeping requests by law enforcement officials for cellphone and other sensitive personal data, declaring the demands overly broad and at odds with basic constitutional rights.

This rising assertiveness by magistrate judges — the worker bees of the federal court system — has produced rulings that elate civil libertarians and frustrate investigators, forcing them to meet or challenge tighter rules for collecting electronic evidence.

Some of the most aggressive opinions have come from D.C. Magistrate Judge John M. Facciola, a bow-tied court veteran who in recent months has blocked wide-ranging access to the Facebook page of Navy Yard shooter Aaron Alexis and the iPhone of the Georgetown University student accused of making ricin in his dorm room. In another case, he deemed a law enforcement request for the entire contents of an e-mail account “repugnant” to the U.S. Constitution.

For these and other cases, Facciola has demanded more focused searches and insisted that authorities delete collected data that prove unrelated to a current investigation rather than keep them on file for unspecified future use. He also has taken the unusual step, for a magistrate judge, of issuing a series of formal, written opinions that detail his concerns, even about previously secret government investigations.

“For the sixth time,” Facciola wrote testily, using italics in a ruling this month, “this Court must be clear: if the government seizes data it knows is outside the scope of the warrant, it must either destroy the data or return it. It cannot simply keep it.”

The Justice Department declined to comment for this article, although it said in an appeal of a Facciola ruling this week that his position was “unreasonable,” out of step with other judges and would slow searches of the e-mails of criminal suspects “to a snail’s pace.”

Facciola, 68, a former state and federal prosecutor known as “Fatch” around the limestone E. Barrett Prettyman Federal Courthouse a block from the Mall, remains an outlier among the 500-plus federal magistrates nationwide, say legal experts.

Yet he is part of a small but growing faction, including judges in Texas, Kansas, New York and Pennsylvania, who have penned decisions seeking to check the reach of federal law enforcement power in the digital world. Although some rulings were overturned, they have shaped when and how investigators can seize information detailing the locations, communications and online histories of Americans.

“There’s a newfound liberation to scrutinize more carefully,” said Albert Gidari Jr., a partner at Perkins Coie who represents technology and telecommunications companies. “They also don’t want to be the ones who approve an order that later becomes public and embarrassing. . . . Nobody likes to be characterized as a rubber stamp.”

‘Magistrates’ Revolt’

The seeds of what legal observers have dubbed “the Magistrates’ Revolt” date back several years, but it has gained power amid mounting public anger about government surveillance capabilities revealed by former National Security Agency contractor Edward Snowden. Judges have been especially sensitive to backlash over the Foreign Intelligence Surveillance Court, which made secret rulings key to the growth of the surveillance programs.

Central to the cases before magistrate judges has been the Fourth Amendment’s prohibition of unreasonable search and seizure. Inspired by the Founding Fathers’ unhappy memories of the aggressive tactics by British soldiers, it has been continually reinterpreted through more than two centuries of technological change.

Such issues are increasingly urgent in an era when a typical smartphone carries video clips, e-mails, documents, location information and enough detail on a user’s communications to allow authorities to map out a nearly complete universe of personal relationships. The Supreme Court plans to hear two cases next week on issues related to how police search cellphones after arrests.

Magistrate judges, who do much of the routine work of the criminal justice system, influence each other through conversations at judicial conferences and through the federal e-mail system, which allows any magistrate judge to query all others on a vexing legal question with a single click of the mouse.

Published opinions by magistrates are relatively rare, making it hard to track shifting attitudes toward government data requests. But legal experts say the overall level of skepticism from magistrates is on the rise.

“In talking to magistrate judges, they are saying, ‘I’m not writing anything. I’m just saying no,’ ” said Brian L. Owsley, a former magistrate judge now teaching at Texas Tech’s law school.

Magistrate Judge Stephen W. Smith, based in Houston’s federal court, is often credited with touching off the insurrection among his colleagues with a 2005 ruling in which he denied a government request for real-time access to the detailed location information that cellphones emit. He ruled that requiring a telecommunications company to provide subjects’ ongoing data amounted to placing a tracking device on them — something permitted only with the issuance of a search warrant, which the government had not requested.

The distinction is crucial: Search warrants require that the government show probable cause that a crime was committed and that the search will turn up evidence that helps prove the crime. Other magistrates had routinely allowed cellphone location data to be seized using court orders, which require the government to meet a less stringent standard of showing only that the information is “relevant and material” to an ongoing investigation.

“We understand law enforcement has a difficult job, and we don’t want to blow an investigation or tip off a suspect,” said Smith, who has known Facciola for years through their shared work for an online legal journal. “On the other hand, he feels, like we all do, the special responsibility to safeguard the Fourth Amendment. . . . We are the ultimate backstop.”

Tackling such issues, even in the face of possible reversal by higher courts, has become something of a badge of honor among some magistrates. Judge James Orenstein of Brooklyn, a former federal prosecutor who also wrote an early, influential ruling on cellphone location data, once joked with Smith that they would soon have enough like-minded magistrates to form a bowling team, Smith recalled.

That prompted Orenstein to design shirts featuring the image of a bowling ball rolling toward a cellphone and nine cell towers arranged in a triangle like a set of bowling pins. Above the image it read, “CSI: Cell Site Information.” Below it read, “Bowling for Dialers.”

When other magistrates write opinions on the issue — regardless of which side they take in the debate — they are offered one of the shirts.

20 warrants modified

Facciola, whose chambers are on the floor below where the secretive Foreign Intelligence Surveillance Court meets, has been a magistrate since 1997. Despite a record of challenging law enforcement, legal observers say he has taken a new, more public role in digital privacy cases since he was reappointed for a third term in June. He declined interview requests for this article.

In one recent opinion, Facciola estimated that he has modified about 20 search warrants in the last six months. In rejecting the search of the Georgetown student’s iPhone last month, Facciola warned that with electronic devices, “the potential for abuse has never been greater: it is easy to copy them and store thousands or millions of documents with relative ease.”

He also demanded to know — in precise technical language — how the government intended to search those documents. “The government should not be afraid to use terms like ‘MD5 hash values,’ ‘metadata,’ ‘registry,’ ‘write blocking’ and ‘status marker,’ nor should it shy away from what kinds of third party software are used and how they are used to search for particular types of data,” he wrote.
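
To make one of those terms concrete: an “MD5 hash value” is a short fingerprint computed from a file’s contents, which lets examiners flag files named in a warrant without reading everything else on a device. A minimal sketch using Python’s standard library; the filename and the hash list are invented for illustration:

```python
import hashlib

def md5_of_file(path):
    """Compute a file's MD5 fingerprint, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical forensic use: compare a seized file's hash against hashes
# listed in a warrant, rather than rummaging through every document.
warrant_hashes = {"5d41402abc4b2a76b9719d911017c592"}  # MD5 of b"hello"
if md5_of_file("evidence.bin") in warrant_hashes:      # invented filename
    print("File matches a hash value listed in the warrant")
```

Naming techniques at this level of detail is the sort of specificity the opinion was asking for.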

Facciola ultimately approved the warrant to search the Navy Yard shooter’s Facebook account last fall, but only after limiting the type of information the company could give the government. Law enforcement officials, he wrote, should narrowly craft their requests because investigators will come across “innocuous and irrelevant” messages sent by other people “who could not possibly have anticipated that the government would see what they have posted.”

In two other cases, he has raised concerns about government requests to prevent Twitter and Yahoo from notifying customers of grand jury subpoenas involving the users’ information. Facciola also took the unusual step this month of inviting the Electronic Frontier Foundation, a civil liberties group, to serve as a counterpoint to the government’s request for cellphone location information in a robbery investigation. He expressed particular interest in how the technology works, how long the data are stored and how precisely the data can locate a person within an area.

“We’re hearing from an increasing number of magistrates that they’re uncomfortable with the requests they’re getting from the FBI and the Justice Department for surveillance,” said Cindy Cohn, legal director of the Electronic Frontier Foundation.

Facciola has twice in recent rulings denied requests to secretly search e-mails related to the investigation of possible kickbacks involving a defense contractor. First, he objected to the Justice Department’s request for the “entire universe” of information related to a particular account and compared the broad request to rummaging randomly through a person’s belongings.

When an investigator renewed the request a few weeks later, Facciola dismissed it as “the same defective and unconstitutional request” and said he had no choice but to order a narrower search to be performed by the service provider, Apple, rather than the government. He referred to an earlier opinion by Magistrate Judge David J. Waxse, based in Kansas City, who had suggested a similar approach in a 2012 case, saying that turning over the entire contents of an e-mail account to authorities was equivalent to giving police every piece of mail sent to a home, instead of only those letters demonstrably related to an investigation.

Such rulings have set off alarms across the tech industry, prompting companies to worry that they could be dragged into the machinations of the criminal justice system. Outside legal scholars raised serious concerns as well.

Orin Kerr, a George Washington University law professor and former Justice Department attorney specializing in computer crimes, said several of Facciola’s opinions are “effectively daring the Justice Department to appeal him as a way of starting a conversation.”

Challenge by Justice

The Justice Department obliged Monday. It filed an appeal that called Facciola’s approach to searching e-mail contrary to the rulings of other courts and said that it would be impractical to have employees of a private company, instead of trained government investigators, searching for evidence.

The U.S. attorney for the District of Columbia has appealed at least two of Facciola’s recent decisions in filings that remain under seal. The judge has had mixed results when the Justice Department challenges his decisions.

In the leak investigation involving Fox News reporter James Rosen, for instance, Facciola ruled that the government was obligated to notify the reporter that his Gmail account was the target of a warrant. U.S. District Judge Royce C. Lamberth reversed Facciola. The warrant was served on Google, the service provider, and Lamberth said the government had no obligation to notify Rosen.

Other magistrates who set a high threshold for collecting digital evidence have been overruled on appeal or have seen their rulings modified. Magistrate Judge Lisa Pupo Lenihan, based in Pittsburgh, sided with Smith and Orenstein in requiring a search warrant before cellphone location data was turned over to police. The court’s other magistrate judges signed on as well.

But a federal appeals court overruled Lenihan in a 2010 decision that allowed magistrates to require probable cause only “sparingly,” in cases that clearly merit it. That left the standard — along with many issues involving what digital evidence the government can collect and how — a patchwork across the country that attorneys sometimes struggle to navigate.

“The decisions are getting to be increasingly inconsistent,” said Marc Zwillinger, founder of ZwillGen, a Washington-based law firm that has major tech companies as clients. “It places a provider in a difficult position, accepting an order from a judge in one district that a judge in another district would find unconstitutional.”

Alice Crites and Carol D. Leonnig contributed to this report.
http://www.washingtonpost.com/local/...52c_story.html





Russia's Putin Calls the Internet a 'CIA Project'
Nataliya Vasilyeva

President Vladimir Putin on Thursday called the Internet a CIA project and made comments about Russia's biggest search engine Yandex, sending the company's shares plummeting.

The Kremlin has been anxious to exert greater control over the Internet, which opposition activists - barred from national television - have used to promote their ideas and organize protests.

Russia's parliament this week passed a law requiring social media websites to keep their servers in Russia and save all information about their users for at least half a year. Also, businessmen close to Putin now control Russia's leading social media network, VKontakte.

Speaking Thursday at a media forum in St. Petersburg, Putin said that the Internet originally was a "CIA project" and "is still developing as such."

To resist that influence, Putin said, Russia needs to "fight for its interests" online.

A Russian blogger complained to Putin that foreign websites and Yandex, the web search engine which is bigger in Russia than Google, are storing information on servers abroad, which could be undermining Russia's security.

In his reply, Putin mentioned unspecified pressure that was exerted on Yandex in its early years and chided the company for its registration in the Netherlands "not only for tax reasons but for other considerations, too."

Although Putin's comments didn't include any specific threats to Yandex, one of Russia's most successful tech companies, its shares plunged by 5 percent at the Nasdaq's opening on Thursday.

In a statement Thursday, Yandex said the company got registered in the Netherlands "solely due to the specifics of corporate law," not because of the low taxes there and added that its core business is based in Russia and "practically all the taxes are paid in Russia."

Reacting to Putin's claims that Yandex was under "pressure," the company said it got its first investments from international funds and investors, "which is the usual practice for any online start-up in any country."
http://hosted.ap.org/dynamic/stories...LATE=DEFAULT





F.B.I. Informant Is Tied to Cyberattacks Abroad
Mark Mazzetti

An informant working for the F.B.I. coordinated a 2012 campaign of hundreds of cyberattacks on foreign websites, including some operated by the governments of Iran, Syria, Brazil and Pakistan, according to documents and interviews with people involved in the attacks.

Exploiting a vulnerability in popular web-hosting software, the informant directed at least one hacker to extract vast amounts of data — from bank records to login information — from the government servers of a number of countries and upload it to a server monitored by the F.B.I., according to court statements.

The details of the 2012 episode have, until now, been kept largely a secret in closed sessions of a federal court in New York and heavily redacted documents. While the documents do not indicate whether the F.B.I. directly ordered the attacks, they suggest that the government may have used hackers to gather intelligence overseas even as investigators were trying to dismantle hacking groups like Anonymous and send computer activists away for lengthy prison terms.

The attacks were coordinated by Hector Xavier Monsegur, who used the Internet alias Sabu and became a prominent hacker within Anonymous for a string of attacks on high-profile targets, including PayPal and MasterCard. By early 2012, Mr. Monsegur of New York had been arrested by the F.B.I. and had already spent months working to help the bureau identify other members of Anonymous, according to previously disclosed court papers.

One of them was Jeremy Hammond, then 27, who, like Mr. Monsegur, had joined a splinter hacking group from Anonymous called Antisec. The two men had worked together in December 2011 to sabotage the computer servers of Stratfor Global Intelligence, a private intelligence firm based in Austin, Tex.

Shortly after the Stratfor incident, Mr. Monsegur, 30, began supplying Mr. Hammond with lists of foreign websites that might be vulnerable to sabotage, according to Mr. Hammond, in an interview, and chat logs between the two men. The New York Times petitioned the court last year to have those documents unredacted, and they were submitted to the court last week with some of the redactions removed.

“After Stratfor, it was pretty much out of control in terms of targets we had access to,” Mr. Hammond said during an interview this month at a federal prison in Kentucky, where he is serving a 10-year sentence after pleading guilty to the Stratfor operation and other computer attacks inside the United States. He has not been charged with any crimes in connection with the hacks against foreign countries.

Mr. Hammond would not disclose the specific foreign government websites that he said Mr. Monsegur had asked him to attack, in keeping with the terms of a protective order imposed by the judge. The names of the targeted countries are also redacted from court documents.

But according to an uncensored version of a court statement by Mr. Hammond, leaked online the day of his sentencing in November, the target list was extensive and included more than 2,000 Internet domains. The document said Mr. Monsegur had directed Mr. Hammond to hack government websites in Iran, Nigeria, Pakistan, Turkey and Brazil and other government sites, like those of the Polish Embassy in Britain and the Ministry of Electricity in Iraq.

An F.B.I. spokeswoman declined to comment, as did lawyers for Mr. Monsegur and Mr. Hammond.

The hacking campaign appears to offer further evidence that the American government has exploited major flaws in Internet security — so-called zero-day vulnerabilities like the recent Heartbleed bug — for intelligence purposes. Recently, the Obama administration decided it would be more forthcoming in revealing the flaws to industry, rather than stockpiling them until the day they are useful for surveillance or cyberattacks. But it carved a broad exception for national security and law enforcement operations.

Mr. Hammond, in the interview, said he and Mr. Monsegur had become aware of a vulnerability in a web-hosting software called Plesk that allowed backdoor access to thousands of websites. Another hacker alerted Mr. Hammond to the flaw, which allowed Mr. Hammond to gain access to computer servers without needing a user name or password.

Over several weeks in early 2012, according to the chat logs, Mr. Monsegur gave Mr. Hammond new foreign sites to penetrate. During a Jan. 23 conversation, Mr. Monsegur told Mr. Hammond he was in search of “new juicy targets,” the chat logs show. Once the websites were penetrated, according to Mr. Hammond, emails and databases were extracted and uploaded to a computer server controlled by Mr. Monsegur.

The sentencing statement also said that Mr. Monsegur directed other hackers to give him extensive amounts of data from Syrian government websites, including banks and ministries of the government of President Bashar al-Assad. “The F.B.I. took advantage of hackers who wanted to help support the Syrian people against the Assad regime, who instead unwittingly provided the U.S. government access to Syrian systems,” the statement said.

The court documents also refer to Mr. Monsegur’s giving targets to a Brazilian hacker. The hacker, who uses the alias Havittaja, has posted online some of his chats with Mr. Monsegur in which he was asked to attack Brazilian government websites.

One expert said that the court documents in the Hammond case were striking because they offered the most evidence to date that the F.B.I. might have been using hackers to feed information to other American intelligence agencies. “It’s not only hypocritical but troubling if indeed the F.B.I. is loaning its sting operations out to other three-letter agencies,” said Gabriella Coleman, a professor at McGill University and author of a forthcoming book about Anonymous.

During the prison interview, Mr. Hammond said that he did not have success hacking a large number of the Plesk websites that Mr. Monsegur had identified, and that his ability to create a so-called back door to a site depended on which operating system it ran on.

He added that Mr. Monsegur never carried out the hacks himself, but repeatedly asked Mr. Hammond for specific details about the Plesk vulnerability.

“Sabu wasn’t getting his hands dirty,” he said. Federal investigators arrested Mr. Monsegur in mid-2011, and his cooperation with the F.B.I. against members of Anonymous appears to have begun soon after.

In a closed hearing in August 2011, a federal prosecutor told a judge that Mr. Monsegur had been “cooperating with the government proactively” and had “literally worked around the clock with federal agents” to provide information about other hackers, whom he described as “targets of national and international interests.”

“During this time the defendant has been closely monitored by the government,” said the prosecutor, James Pastore, according to a transcript of the hearing. “We have installed software on a computer that tracks his online activity. There is also video surveillance in the defendant’s residence.”

Mr. Monsegur’s sentencing hearing has been repeatedly delayed, leading to speculation that he is still working as a government informant. His current location is unknown.

Exactly what role the F.B.I. played behind the scenes during the 2012 attacks is unclear. Mr. Hammond said he had been in constant contact with Mr. Monsegur through encrypted Internet chats. The two men often communicated using Jabber, a messaging platform popular among hackers. Mr. Monsegur used the alias Leondavidson and Mr. Hammond used Yohoho, according to the court records.

During a conversation on Feb. 15, 2012, Mr. Hammond said he hoped all the stolen information would be put “to good use.”

“Trust me,” Mr. Monsegur said, according to the chat logs. “Everything I do serves a purpose.”

Now, sitting in prison, Mr. Hammond wonders if F.B.I. agents might also have been on the other end of the communications.
http://www.nytimes.com/2014/04/24/wo...ks-abroad.html





87% of Electronic Spying is Conducted by Governments, with Cyber Espionage Accounting for 22% of Data Breaches
Ishbel MaCleod

Governments were responsible for 87 per cent of the 511 spying incidents recorded, the latest Data Breach Investigations Report by Verizon Communications has found.

Half (49 per cent) of the attacks originated in China and other East Asian nations, while it is believed that 21 per cent came from Eastern European countries. It was also discovered that 54 per cent of these cyber attacks affected the US.

The report stated: “The public, professional, and manufacturing sectors are more targeted by espionage than the rest of the field. Many of these organisations are targeted because of the contracts and relationships they have with other organisations. For some, they can serve as both a valuable aggregation point for victim data and a trusted exfiltration point across several target organisations.”

In total, there were 63,437 security incidents and 1,367 confirmed data breaches.

The report also discovered that 94 per cent of these incidents fell into nine categories.

“But (using our best infomercial voice) that’s not all! When we apply the same method to the last three years of breaches, 95 per cent can be described by those same nine patterns,” the report added.

The nine categories are: POS intrusions, web app attacks, cyber espionage, insider misuse, physical theft, miscellaneous errors, crimeware, DOS attacks and card skimmers.
http://www.thedrum.com/news/2014/04/...unting-22-data





Inside the Secret Digital Arms Race: Facing the Threat of a Global Cyberwar
Steve Ranger

The team was badly spooked, that much was clear. The bank was already reeling from two attacks on its systems, strikes that had brought it to a standstill and forced the cancellation of a high profile IPO.

The board had called in the team of security experts to brief them on the developing crisis. After listening to some of the mass of technical detail, the bank's CEO cut to the chase.

"What should I tell the Prime Minister when I get to Cobra?" he demanded, a reference to the emergency committee the government had set up as it scrambled to respond to what was looking increasingly like a coordinated cyberattack.

The security analysts hesitated, shifting in their seats, fearing this was the beginning, not the end, of the offensive.

"We think this could just be a smokescreen," one said, finally. And it was. Before the end of next day, the attack had spread from banks to transport and utilities, culminating in an attack on a nuclear power station.

The mounting horror of the analysts, the outrage and lack of understanding from the execs was all disturbingly authentic, but fortunately, none of it was real. The scene formed part of a wargame, albeit one designed by the UK's GCHQ surveillance agency among others to attract new recruits into the field of cybersecurity.

As I watched the scenario progress (hosted in a World War II bunker under London for added drama) it was hard not to get just as caught up in the unfolding events as the competition finalists played the security analysts tasked with fighting the attack, and real industry executives took the role of the bank's management, if only because these sorts of scenarios are now increasingly plausible.

And it's not just mad criminal geniuses planning these sorts of digital doomsday attacks either. After years on the defensive, governments are building their own offensive capabilities to deliver attacks just like this against their enemies. It's all part of a secret, hidden arms race, where countries spend billions of dollars to create new armies and stockpiles of digital weapons.

This new type of warfare is incredibly complex and its consequences are little understood. Could this secret digital arms race make real-world confrontations more likely, not less? Have we replaced the cold war with the coders' war?

Even the experts are surprised by how fast the online threats have developed. As Mikko Hypponen, chief research officer at security company F-Secure, said at a conference recently, "If someone would have told me ten years ago that by 2014 it would be commonplace for democratic western governments to develop and deploy malware against other democratic western governments, that would have sounded like science fiction. It would have sounded like a movie plot, but that's where we are today."

The first casualty of cyberwar is the web

It's taken less than a decade for digital warfare to go from theoretical to the worryingly possible. The web has been an unofficial battleground for many modern conflicts. At the most basic level, groups of hackers trying to publicise their cause have been hijacking or defacing websites for years. Some of these groups have acted alone, some have at least the tacit approval of their governments.

Most of these attacks -- taking over a few Twitter accounts, for example -- are little more than a nuisance, high profile but relatively trivial.

However, one attack has already risen to the level of international incident. In 2007, attacks on Estonia swamped bank, newspaper and government websites. They began after Estonia decided to move a Soviet war memorial, and lasted for three weeks (Russia denied any involvement).

Estonia is a small state with a population of just 1.3 million. However, it has a highly-developed online infrastructure, having invested heavily in e-government services, digital ID cards, and online banking. That made the attack particularly painful, as the head of IT security at the Estonian defence ministry told the BBC at the time, "If these services are made slower, we of course lose economically."

The attacks on Estonia were a turning point, proving that a digital bombardment could be used not just to derail a company or a website, but to attack a country. Since then, many nations have been scrambling to improve their digital defenses -- and their digital weapons.

While the attacks on Estonia used relatively simple tools against a small target, bigger weapons are being built to take on some of the mightiest of targets.

Last year the then-head of the US Cyber Command, General Keith Alexander, warned on the CBS 60 Minutes programme of the threat of foreign attacks, stating: "I believe that a foreign nation could impact and destroy major portions of our financial system."

In the same programme, the NSA warned of something it called the "BIOS plot": an effort by an unnamed nation to exploit a software flaw that could have allowed it to destroy the BIOS in any PC and render the machine unusable.

Of course, the US isn't just on the defensive. It has been building up its own capabilities to strike, if needed.

The only documented successful use of such a weapon -- the famous Stuxnet worm, masterminded by the US -- caused damage and delay to the Iranian nuclear programme.

Building digital armies

The military has been involved with the internet since its start. It emerged from a US Department of Defense-funded project, so it's no surprise that the armed forces have kept a close eye on its potential.

And politicians and military leaders of all nations are naturally attracted to digital warfare as it offers the opportunity to neutralise an enemy without putting troops at risk.

As such, the last decade has seen rapid investment in what governments and the military have dubbed "cyberwar" -- sometimes shortened to just "cyber." Yes, it sounds like a cheaply sensational term borrowed from an airport thriller, (and to some the use of such an outmoded term reflects the limited level of understanding of the issues involved by those in charge) but the intent behind the investment is deadly serious.

The UK's defence secretary Philip Hammond has made no secret of the country's interest in the field, telling a newspaper late last year, "We will build in Britain a cyber strike capability so we can strike back in cyberspace against enemies who attack us, putting cyber alongside land, sea, air and space as a mainstream military activity."

The UK is thought to be spending as much as £500m on the project over the next few years. On an even larger scale, last year General Alexander revealed the NSA was building 13 teams to strike back in the event of an attack on the US. "I would like to be clear that this team, this defend-the-nation team, is not a defensive team," he told the Senate Armed Services Committee.

And of course, it's not just the UK and US that are building up a digital army. In a time of declining budgets, it's a way for defence ministries and defence companies to see growth, leading some to warn of the emergence of a twenty-first century cyber-industrial complex. And the shift from investment in cyber-defence initiatives to cyber-offensives is a recent and, for some, worrying trend.

Peter W. Singer, director of the Center for 21st Century Security and Intelligence at the Brookings Institution, said that around 100 nations are building cyber military commands, of which about 20 are serious players, and a smaller number could carry out a whole cyberwar campaign. And the fear is that by emphasising their offensive capabilities, governments will up the ante for everyone else.

"We are seeing some of the same manifestations of a classic arms race that we saw in the Cold War or prior to World War One. The essence of an arms race is where the sides spend more and more on building up and advancing military capabilities but feel less and less secure -- and that definitely characterises this space today," he said.

Politicians may argue that building up these skills is a deterrent to others, and emphasise such weapons would only be used to counter an attack, never to launch one. But for some, far from scaring off any would-be threats, these investments in offensive cyber capabilities risk creating more instability.

"In international stability terms, arms races are never a positive thing: the problem is it's incredibly hard to get out of them because they are both illogical [and] make perfect sense," Singer said.

Similarly, Richard Clarke, a former presidential advisor on cybersecurity, told a conference in 2012: "We turn an awful lot of people off in this country and around the world when we have generals and admirals running around talking about 'dominating the cyber domain'. We need cooperation from a lot of people around the world and in this country to achieve cybersecurity, and militarising the issue and talking about how the US military have to dominate the cyber domain is not helpful."

Thomas Rid, a reader in War Studies at King's College London, said that many countries now feel that to be taken seriously they need a cyber command too.

"What you see is an escalation of preparation. All sorts of countries are preparing and because these targets are intelligence intensive you need that intel to develop attack tools you see a lot of probing, scanning systems for vulnerabilities, having a look inside if you can without doing anything, just seeing what's possible," Rid said.

As a result, in the shadows, various nations building up their digital military presence are mapping out what could be future digital battlegrounds and seeking out potential targets, even leaving behind code to be activated later in any conflict that might arise.

How cyber weapons work

As nations race to build their digital armies they also need to arm them. And that means developing new types of weapons.

While state-sponsored cyberwarfare may use some of the same tools as criminal hackers, and even hit some of the same targets, it aims to go further.

So while a state-sponsored cyber attack could use the old hacker standby of the denial of service attack (indeed the UK's GCHQ has already used such attacks itself, according to leaks from Edward Snowden), something like Stuxnet -- built with the aim of destroying the centrifuges used in the Iranian nuclear project -- is another thing entirely.

"Stuxnet was almost a Manhattan Project style in terms of the wide variety of expertise that was brought in: everything from intelligence analysts to some of the top cyber talent in the world to nuclear physicists to engineers, to build working models to test it out on, and another entire espionage effort to put it in to the systems in Iran that Iran thought were air-gapped. This was not a couple of kids," said Singer.

The big difference between military-grade cyber weapons and hacker tools is that the most sophisticated digital weapons are designed to break things -- to create real, physical damage. And these weapons are bespoke, expensive to build, and have a very short shelf life.

To have a real impact, these attacks are likely to be levelled at the industrial software that runs production lines, power stations or energy grids, otherwise known as SCADA (supervisory control and data acquisition) systems.

Increasingly, SCADA systems are being internet-enabled to make them easier to manage, which, of course, also makes them easier to attack. Easier doesn't mean easy, though. These complex systems, often built to last for decades, tend to serve a very narrow, specific purpose -- sometimes a single building.

According to Rid, this makes them much harder to undermine. A bespoke, highly specific system requires a bespoke, highly specific attack, and a significant amount of intelligence, too.

"The only piece of target intelligence you need to attack somebody's email or a website is an email address or a URL. In the case of a control system, you need much more information about the target, about the entire logic that controls the process, and legacy systems that are part of the process you are attacking," Rid said.

That also means that delivering any more than a few of these attacks at a time would be almost impossible, making a long cyberwar campaign hard to sustain.

Similarly, these weapons need to exploit a unique weakness to be effective: so-called zero day flaws. These are vulnerabilities in software that have not been patched and therefore cannot be defended against.

This is what makes them potentially so devastating, but also limits their longevity. Zero-day flaws are relatively rare and hard to come by, which makes them expensive: they're sold for hundreds of thousands of dollars by their finders. A couple of years ago a Windows flaw might have earned its finder $100,000 on the black market, an iOS vulnerability twice that.

Zero-day flaws have an in-built weakness, though: they're a one-use only weapon. Once an attack has been launched, the zero-day used is known to everyone. Take Stuxnet. Even though it seems to have had one specific target -- an Iranian power plant -- once it was launched, Stuxnet spread very widely, meaning security companies around the world could examine the code, and making it much harder for anyone to use that exact same attack again.

"It's like dropping the bomb, but also [saying] here's the blueprint of how to build the bomb," explains Singer, author of the recent book Cybersecurity and Cyberwar.

But this leads to another, unseen problem. As governments stockpile zero-day flaws for use in their cyber-weapons, it means they aren't being reported to the software vendors to be fixed -- leaving unpatched systems around the world at risk when they could easily be fixed.

When is a cyberwar not a cyberwar?

The greatest trick cyberwar ever played was convincing the world it doesn't exist.

While the laws of armed conflict are well understood -- if not always adhered to -- what's striking about cyberwar is that no one really knows what the rules are.

As NATO's own National Cybersecurity Framework Manual notes: "In general, there is agreement that cyber activities can be a legitimate military activity, but there is no global agreement on the rules that should apply to it."

Dr. Heather A. Harrison Dinniss of the International Law Centre at the Swedish National Defence College said that most cyber warfare shouldn't need to be treated differently to regular warfare, and that the general legal concepts apply "equally regardless of whether your weapon is a missile or a string of ones and zeros."

But cyberwarfare does raise some more difficult issues, she says. What about attacks that do not cause physical harm, for example: do they constitute attacks as defined under the laws of armed conflict?

Dinniss says that some sort of consensus is emerging that attacks which cause loss of functionality to a system do constitute an attack, but the question is certainly not settled in law.

Western nations have been reluctant to sign any treaty that tries to define cyberwar. In the topsy-turvy world of international relations, it is China and Russia that are keenest on international treaties that define cyberwarfare as part of their general desire to regulate internet usage.

The reluctance from the US and the UK stems partly from the fact that no state wants to talk candidly about its cyberwarfare capabilities, but also because, by not clearly defining the status of cyberwarfare, they retain a little more leeway in how they use those weapons.

And, because in many countries cyberwarfare planning has grown out of intelligence agencies as much as out of the military, the line between surveillance-related hacking and more explicitly-offensive attacks is at best very blurred.

That blurring suits the intelligence agencies and the military just fine. While espionage is not illegal under international law, it is outlawed under most states' domestic laws.

"It could well be that states were waiting to see what use would be made of cyber operations -- how much they could get away with under the rubric of espionage," Dinniss adds. For example, although the US might consider Stuxnet to be an espionage project, that might not be the way it is interpreted by others.

This is not some arcane debate, though. If a cyber attack can be defined as an attack under the laws of armed conflict, a nation has a much better case for launching any kind of response, up to and including using conventional weapons in response. And that could mean that using digital weapons could have unexpected -- and potentially disastrous -- consequences.

Right now all of this is a deliberately grey area, but it's not hard to envisage an internet espionage attempt that goes wrong, damages something, and rapidly escalates into a military conflict. Can a hacking attempt really lead to casualties on the battlefield? Possibly, but right now those rules around escalation aren't set. Nobody really knows how or if escalation works in a digital space.

If I hack your power grid, is it a fair response to shut down my central bank? At what point is a missile strike the correct response to a denial of service attack? Nobody really knows what a hierarchy of targets here would look like. And that's without the problem of working out exactly who has attacked you. It's much easier to see a missile launch than work out from where a distributed digital attack is being orchestrated. Any form of cyber arms control is a long way off.

The targets in cyberwar

"If I look out of the window I can see all sorts of [industrial control software] systems behind these building and bridges. That's the problem, not military systems," says King's College's Rid.

You can drop a bomb on pretty much anything, as long as you can find it. It's a little different with digital weapons.

Some targets just don't have computers, and while politicians may dream of being able to 'switch off' an enemy's airfield, it's likely to be civilian infrastructure that makes the most obvious target. That's the same as standard warfare. What is different now is that virtually any company could be a target, and many probably don't realise it.

Companies are only gradually understanding the threats they face, especially as they start to connect their industrial control systems to the internet. Like the executives at the London cyber wargame, most real-life executives fail to realise that they might be a target, or to grasp the potential risks.

Mark Brown, director of risk advisory at consultancy KPMG, says: "Companies have recognised they can connect them to their core networks, to the internet, to operate them remotely, but they haven't necessarily applied the same risk and controls methodology to the management of operational technology as they have to traditional IT."

Indeed, part of the problem is that these systems have never been thought about as security risks and so no-one has taken responsibility for them. "Not many CIOs have responsibility for those operational technology environments, at least not traditionally. Often you are caught in the crossfire of finger-pointing; the CIO says it's not my job, the head of engineering says it's not my job," Brown said.

A recent report warned that the cybersecurity efforts around the US electricity supply network are fragmented and not moving fast enough, while in the UK insurers are refusing cover to power companies because their defences are too weak.

Cyberwar: Coming to a living room near you?

Cyberwar is -- for all the billions being spent on it -- still largely theoretical, especially when it comes to the use of zero-day attacks against public utilities. Right now a fallen tree is a bigger threat to your power supply than a hacker.

While states have the power to launch such attacks, for now they have little incentive. And the ones with the most sophisticated weapons also have the most sophisticated infrastructure and plenty to lose, which is why most activity is at the level of espionage rather than war.

However, there's no reason why this should remain the case forever. When countries spend billions on building up a stockpile of weapons, there is always the increased risk of confrontation, especially when the rules of engagement are still in flux.

But even now a new and even more dangerous battlefield is being built. As we connect more devices -- especially the ones in our homes -- to the web, cyberwar is poised to become much more personal.

As thermostats, fridges and cars become part of the internet of things, their usefulness to us may increase, but so does the risk of them being attacked. Just as more industrial systems are being connected up, we're doing the same to our homes.

The internet of things, and even wearable tech, bring with them great potential, but unless these systems are incredibly well-secured, they could be easy targets to compromise.

Cyberwarfare might seem like a remote, fanciful threat, but digital weapons could create the most personal attacks possible. Sure, it's hard to see much horror lurking in a denial of service attack against your internet-enabled toothbrush (or the fabled internet fridge) but the idea of an attack that turned your gadgets, your car or even your home against you is one we need to be aware of.

We tend to think of our use of technology as insulating us from risk, but in future that may no longer be the case. If cyberwar ever becomes a reality, the home front could become an unexpected battlefield.
http://www.techrepublic.com/article/...tal-arms-race/





Easter Egg: DSL Router Patch Merely Hides Backdoor Instead of Closing It

Researcher finds secret “knock” opens admin for some Linksys, Netgear routers.
Sean Gallagher

First, DSL router owners got an unwelcome Christmas present. Now, the same gift is back as an Easter egg. The same security researcher who originally discovered a backdoor in 24 models of wireless DSL routers has found that a patch intended to fix that problem doesn’t actually get rid of the backdoor—it just conceals it. And the nature of the “fix” suggests that the backdoor, which is part of the firmware for wireless DSL routers based on technology from the Taiwanese manufacturer Sercomm, was an intentional feature to begin with.

Back in December, Eloi Vanderbecken of Synacktiv Digital Security was visiting his family for the Christmas holiday, and for various reasons he had the need to gain administrative access to their Linksys WAG200G DSL gateway over Wi-Fi. He discovered that the device was listening on an undocumented Internet Protocol port number, and after analyzing the code in the firmware, he found that the port could be used to send administrative commands to the router without a password.

After Vanderbecken published his results, others confirmed that the same backdoor existed on other systems based on the same Sercomm modem, including home routers from Netgear, Cisco (both under the Cisco and Linksys brands), and Diamond. In January, Netgear and other vendors published a new version of the firmware that was supposed to close the back door.

However, that new firmware apparently only hid the backdoor rather than closing it. In a PowerPoint narrative posted on April 18, Vanderbecken disclosed that the “fixed” code concealed the same communications port he had originally found (port 32764) until a remote user employed a secret “knock”—sending a specially crafted network packet that reactivates the backdoor interface.

The packet structure used to open the backdoor, Vanderbecken said, is the same used by “an old Sercomm update tool”—a packet also used in code by Wilmer van der Gaast to "rootkit" another Netgear router. The packet’s payload, in the version of the backdoor discovered by Vanderbecken in the firmware posted by Netgear, is an MD5 hash of the router’s model number (DGN1000).

The nature of the change, which leverages the same code as was used in the old firmware to provide administrative access over the concealed port, suggests that the backdoor is an intentional feature of the firmware and not just a mistake made in coding. “It’s DELIBERATE,” Vanderbecken asserted in his presentation.

There are some limitations to the use of the backdoor. Because of the format of the packets—raw Ethernet packets, not Internet Protocol packets—they would need to be sent from within the local wireless LAN, or from the Internet service provider’s equipment. But they could be sent out from an ISP as a broadcast, essentially re-opening the backdoor on any customer’s router that had been patched.
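
For a sense of the mechanics, here is a minimal Python sketch of such a "knock", under stated assumptions: the MD5-of-model-number payload follows Vanderbecken's write-up, while the EtherType, MAC addresses and framing below are illustrative placeholders, not values from his proof of concept.

import hashlib
import socket

MODEL = "DGN1000"                                  # model targeted by the PoC
payload = hashlib.md5(MODEL.encode()).hexdigest()  # MD5 hash of the model number

# The knock travels as a raw Ethernet frame, not an IP packet, so it has to
# originate on the local segment (or from ISP equipment). These framing
# values are placeholders, not the real ones.
dst = b"\xff" * 6                  # broadcast MAC: reaches every router on the segment
src = b"\x00\x11\x22\x33\x44\x55"  # hypothetical source MAC
ethertype = b"\x88\xb5"            # EtherType reserved for local experiments

frame = dst + src + ethertype + payload.encode()

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)  # Linux only, needs root
sock.bind(("eth0", 0))
sock.send(frame)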

Once the backdoor is switched back on, it listens for TCP/IP traffic just as the original firmware did, giving “root shell” access—allowing anyone to send commands to the router, including getting a “dump” of its entire configuration. It also allows a remote user to access features of the hardware—such as blinking the router’s lights.

Just how widely the old, new backdoor has been spread is unknown. Vanderbecken said that because each version of the firmware is customized to the manufacturer and model number, the checksum fingerprints for each will be different. While he’s provided a proof-of-concept attack for the DGN1000, the only way to find the vulnerability would be to extract the filesystem of the firmware and search for the code that listens for the packet, called “ft_tool”, or the command to reactivate the backdoor (scfgmgr -f).
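
Searching a firmware image for those markers is straightforward once the filesystem has been unpacked (with a tool such as binwalk, say). A minimal sketch, with the extraction path assumed:

import os

MARKERS = [b"ft_tool", b"scfgmgr"]
ROOT = "extracted-firmware/"   # hypothetical path to the unpacked filesystem

# Walk the extracted tree and flag any file containing a backdoor marker.
for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "rb") as f:
                data = f.read()
        except OSError:
            continue
        for marker in MARKERS:
            if marker in data:
                print(path, "contains", marker.decode())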

We attempted to reach Sercomm and Netgear for comment on the backdoor. Sercomm did not respond, and a Netgear spokesperson could not yet comment on the vulnerability. Ars will update this story as more details are made available by the device manufacturers.
http://arstechnica.com/security/2014...of-closing-it/





Hackers Are Getting Better at Offense. Companies Aren’t Getting Better at Defense.
Hayley Tsukayama

High-profile data breaches at retailers such as Target, Neiman Marcus and Michaels brought the sorry state of corporate cybersecurity into sharp focus last year as millions of customers found the data they had entrusted to companies had fallen into the hands of cybercriminals.

But are you ready for the bad news? It is likely to get worse in 2014.

That’s the takeaway from a report from Verizon to be released Wednesday, which found that hackers are becoming more efficient and organized while many companies are struggling to get even fundamental cybersecurity measures into place.

The number of data breaches is growing quickly, but corporations aren’t managing to keep up with the pace or scope of breaches, according to Verizon’s latest annual Data Breach Investigations Report.

“We’ve got a lagging situation here, where businesses are not acting quick enough to keep up with the capabilities of threat actors,” said David Burg, the global and U.S. advisory cybersecurity leader at PricewaterhouseCoopers (PwC).

The report compiled data from 50 security organizations that track breaches — up from 18 groups surveyed last year. Now in its 10th year, the report was expanded to include any compromise of a company’s security system, even if the hacker did not steal data.

The results aren’t pretty. The report found 63,437 incidents in which hackers were able to breach a company’s security system, resulting in 1,367 instances of cybercriminals lifting user data. While a direct comparison to last year’s figures is not possible, Bryan Sartin, director of Verizon’s risk management team, which issued the report, said there appeared to be an increase in the number of attacks carried out by organized groups of hackers — those with signs of being state-sponsored, or by “hacktivists” organized around ideological causes.

"There are all these organized criminal groups — and ‘groups’ is the operative word,” Sartin said. “These are not 17-year-old kids. It’s organized criminal groups that are pooling skills, resources and infrastructure for buying, selling and trading stolen data.”

“The bad guys are getting faster as we aren’t getting any better at detecting what they’re doing,” he added.

In fact, while hackers are managing to break into systems more quickly — often within a matter of days or even hours — companies aren’t getting any better at detecting when their systems have been compromised. It often takes months for firms to realize that they’ve been attacked, and many are notified by law enforcement or another outside group, as was the case in the most recent retail breaches.

There’s no one-size-fits-all approach for protecting corporate security systems, Sartin said. But 92 percent of all breaches are related to nine types of attacks — and specific industries often face just two or three specific types of attacks, he said. Roughly one-third of all attacks targeting retailers, for example, are aimed at the point-of-sale system — the culprit in the Target and Neiman Marcus attacks. For companies, identifying which attacks affect their industries the most allows them to make an efficient game plan, Sartin said.

So, too, is knowing what kind of information hackers may want.

In a separate survey of over 10,000 U.S. companies, PwC found that while 69 percent of chief executive officers say they are either “concerned” or “very concerned” about cybersecurity issues, only 26 percent have identified which types of data they hold are the most attractive to hackers.

“It’s hard to have the strategy if you’re not sure what you’re trying to protect,” Burg said.

Burg said industries in which companies have worked together to set high levels of compliance, such as financial services and health care, are best equipped to deal with breaches. In those industries, companies alert each other to potential dangers.

But spreading that to other industries may be difficult. Congress has considered legislation making it easier for companies to share information. But those efforts have been held up by consumer privacy concerns.

In the absence of such legislation, however, the Obama administration in February released a framework to instruct companies on how best to secure their data. Additionally, recent policies introduced by the Federal Trade Commission and the Justice Department have made it easier for companies to share information without having to worry about running afoul of antitrust laws.

Sharing data is invaluable to detecting and preventing large-scale attacks, Sartin said.

“Understanding your threats and your threat profile is the only way to understand what security measures make sense in the real world for you,” Sartin said. “There’s strength in numbers.”
http://www.washingtonpost.com/blogs/...er-at-defense/





Eavesdropping on a Wireless Keyboard
Oona Räisänen

Some time ago, I needed to find a new wireless keyboard. With the level of digital paranoia that I have, my main priority was security. But is eavesdropping a justifiable concern? How insecure would it actually be to type your passwords using an older type of wireless keyboard?

To investigate this, I bought an old Logitech iTouch PS/2 cordless keyboard at an online auction. It's dated July 2000. Back in those days, wireless desktops used the 27 MHz shortwave band; they have since largely moved to 2.4 GHz. This one happens to be of the shortwave type. They've been tapped before (pdf), but no proof-of-concept was published.

I actually disposed of the keyboard before I could photograph it, so here's a newer Logitech S510 from 2005, still using the same technology:

Compared to modern USB wireless dongles, the receiver of the iTouch is huge. It isn't a one-chip wonder either: it contains a PCB with multiple crystal oscillators and decoder ICs. Based on Google results, one of the Motorola chips is an FM receiver, which gives us a hint about the mode of transmission.

But because eavesdropping is our goal here, I'm tossing the receiver. After all, the signal is well within the 11-meter band of any home receiver with an SW band. For bandwidth reasons, however, I'll use my RTL2838-based television receiver dongle, which can be tuned to an arbitrary frequency and commanded to just dump the I/Q sample stream (using rtl-sdr).

The transmission is clearly visible at 27.14 MHz. Zooming closer and taking a spectrogram, the binary FM/FSK nature of the transmission becomes obvious:

The sample length of one bit indicates a bitrate of 850 bps. A reference oscillator with a digital PLL can be easily implemented in software. I assumed there's a delta encoding on top of the FSK.
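
As a rough illustration of that software chain, the sketch below FM-discriminates an unsigned 8-bit I/Q dump and slices bits at 850 bps. The file name and sample rate are assumptions, and a zero-threshold slicer stands in for the proper PLL used in the real decoder:

import numpy as np

FS = 22050       # sample rate after the sox resample in the pipeline below
BITRATE = 850    # bits per second, measured from the capture

# Load an unsigned 8-bit interleaved I/Q dump (file name is hypothetical).
raw = np.fromfile("capture.iq", dtype=np.uint8).astype(np.float32) - 127.5
iq = raw[0::2] + 1j * raw[1::2]

# FM discriminator: instantaneous frequency is the phase step between
# consecutive complex samples.
freq = np.angle(iq[1:] * np.conj(iq[:-1]))

# Naive slicer: average the discriminator over each bit period and
# threshold at zero. A real decoder tracks bit edges with a digital PLL.
spb = FS / BITRATE
nbits = int(len(freq) / spb)
bits = "".join(str(int(freq[int(i * spb):int((i + 1) * spb)].mean() > 0))
               for i in range(nbits))
print(bits)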

One keypress produces about 85 bits of data. The bit pattern seems to always correlate with the key being pressed, so there's no encryption at all. Pressing the reassociation button doesn't change things either. Without going too much into the details of the obscure protocol, I just mapped all keys to their bit patterns, like so:

w 111111101111011111101111101101011011100111111111001111111101 11101111110111110110101
e 111111101111011111101111110101011011100111111111001111111101 11101111110111111010101
1 111111101111011111101111110110110111001111111110011111111011 11011111111111011011011
2 111111101111011111101111110110111111100111111111001111111101 11101111110111111011011
3 111111101111011111101111111010111101001111111110011111111011 11011111101111111010111
7 111111101111011111101111111110110110110011111111100111111110 11110111111101111111101
8 111111101111011111101111101110111110011111111100111111110111 10111111011111011101111
9 111111101111011111101111101110110101100111111111001111111101 11101111110111111011011
0 111111101111011111101111110110111011001111111110011111111011 11011111101111110111011
u 111111101111011111101111111101011110011111111100111111110111 10111111011111111010111
i 111111101111011111101111111101010111001111111110011111111011 11011111101111111101010

The bitstrings are so strongly correlated between keystrokes that we can calculate the Levenshtein distance from the received bitstring to each of the mapped keys, find the smallest distance, and behold, we can receive text from the keyboard!
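
In code, the matching step can be as small as this (a sketch, with the key table abbreviated; the real map holds the full bit pattern for every key):

def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one rolling row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Abbreviated key table; see the full patterns above.
KEYMAP = {
    "w": "111111101111011111101111101101011011100...",
    "e": "111111101111011111101111110101011011100...",
}

def decode_key(received):
    # Pick the mapped key whose pattern is closest to what we heard.
    return min(KEYMAP, key=lambda k: levenshtein(received, KEYMAP[k]))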

2204 windy@pentti~/koodi/fm ) rtl_sdr - -f 27132000 -g 32.8 -s 96000 | sox -t .raw
-c 2 -r 96000 -e unsigned -b 8 - -t .raw -r 22050 - | ./fm | perl deco.pl
Found 1 device(s):
0: Realtek, RTL2838UHIDIR, SN: 00000013

Using device 0: ezcap USB 2.0 DVB-T/DAB/FM dongle
Found Rafael Micro R820T tuner
Tuned to 27132000 Hz.
Tuner gain set to 32.800000 dB.
Reading samples in async mode...
[CAPS]0wned


So, when buying a keyboard for personal use, I chose one with 128-bit AES air interface encryption.

Update: This post was mostly about accomplishing this with low-level stuff readily available at home. For anyone needing a proof of concept or even decoder hardware, there's KeyKeriki.

Update #2: Due to requests, my code is here: fm.c, deco.pl. Of course it's for reference only, as it's not a working piece of software, and never will be. Oh and it's not pretty either.
http://www.windytan.com/2013/03/eave...-keyboard.html





Tresorit Opens its End-to-End Encrypted File-Sharing Service to the Public
Josh Ong

Tresorit today officially launched its end-to-end encrypted cloud storage service after emerging from its stealth beta. The startup employs multiple layers of security that make its data extremely difficult to compromise.

One key advantage to Tresorit is that it doesn’t have a master key to your encryption, so, unlike what happened to secure email service Lavabit, it can’t be forced to provide access to your data. The startup is also based in Switzerland, which should help keep it out of reach of NSA court orders.

Tresorit uses AES-256 encryption on the client side before your data is uploaded. It also relies on a patented encryption method for sharing files with others. The service is available on PC, Mac, iOS and Android. A Windows Phone version is currently in closed beta, and a Linux client is in development.
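
Tresorit's actual protocol is proprietary, but the general client-side pattern looks like the following sketch using Python's cryptography package: the key is generated and kept on the client, and only ciphertext goes over the wire. The file name is illustrative.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # generated and kept on the client
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # unique per file/version

plaintext = open("document.txt", "rb").read()        # hypothetical file
ciphertext = aesgcm.encrypt(nonce, plaintext, None)  # authenticated encryption

# Only the nonce and ciphertext leave the machine; without the key, the
# server (or anyone who compels it) holds random-looking bytes.
upload_blob = nonce + ciphertext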

It’s worth noting that NSA whistleblower Edward Snowden recently recommended the use of end-to-end encryption as an important method for avoiding widespread government surveillance.

In fact, the need for a service like Tresorit has become more apparent in light of the recently exposed Heartbleed bug in OpenSSL. While Tresorit used OpenSSL as part of its encryption methods, it wasn't susceptible to attack because it had other security protocols in place.

Despite Tresorit’s confident claims that your data is secure, it’s also important to remember the adage that no system is unhackable. Tresorit’s head of marketing Szabolcs Nagy asserts that the company has simply made it difficult enough that would-be attackers will move on. To prove this, the company has challenged hackers to break its security.

Tresorit set up a dummy infrastructure with identical specs to its production servers and provided hackers with admin credentials to prove that even Tresorit employees can't access user data.

In conjunction with the launch, Tresorit is upping the bounty for its hacking challenge from $25,000 to $50,000. Since the challenge began over a year ago, more than 1,000 hackers, including participants from MIT, Stanford, Harvard and Caltech, have tried unsuccessfully to break in.

Of course, one downside to having client-side encryption is that if you lose or forget your password, Tresorit can’t get you back into your account. For the privacy-minded, that could certainly be considered a feature, but Tresorit is working on methods to reset access without compromising the security of the system.

Tresorit raised $1.7 million in funding during its limited beta, which attracted over 100,000 users.

The free version of Tresorit offers up to 16GB of data, while premium Pro and Business plans provide 20GB of storage and additional features for $12.99 per month and $19.49 per month, respectively.

If you’re interested in giving Tresorit a whirl, you can head here for a free, limited 30-day trial of the service’s premium features.
http://thenextweb.com/apps/2014/04/1...ervice-public/





Anonymous' Airchat Aims to Allow Communication Without Needing Phone or Internet Access

Anonymous is testing Airchat, a free communications tool for the world that uses only radio waves
Mary-Ann Russon

Online hacktivist collective Anonymous has announced that it is working on a new tool called Airchat which could allow people to communicate without the need for a phone or an internet connection - using radio waves instead.

Anonymous, the amorphous group best known for attacking high profile targets like Sony and the CIA in recent years, said on the Lulz Labs project's Github page: "Airchat is a free communication tool [that] doesn't need internet infrastructure [or] a cell phone network. Instead it relies on any available radio link or device capable of transmitting audio."

The idea is that people all over the world, including those in rural areas and developing countries, will one day be able to communicate for free without the need for a mobile phone network, phone line or internet access.

While the system works at the moment, it is simply a proof of concept at this stage, and Anonymous has revealed Airchat in the hope of getting more people involved in developing the technology, as well as raising funds.

Interactive chess

Despite the Airchat system being highly involved and too complex for most people in its current form, Anonymous says it has so far used it to play interactive chess games with people 180 miles away, share pictures, and even establish encrypted low-bandwidth digital voice chats.

In order to get Airchat to work, you will need to have a handheld radio transceiver, a laptop running either Windows, Mac OS X or Linux, and be able to install and run several pieces of complex software.

Anonymous says that a cheap radio transceiver costs as little as $40 (£23.80), meaning the system should be affordable to most people or communities.

However, because the system isn't tied to a specific make or model of transceiver, connecting one to your laptop is a little tricky, as there is no standard connector on these devices.

Decode

"Almost every single home in this world has a common AM and/or FM radio. In cases where not everyone is able to get [a] cheap radio transceiver, [they can] at least be able to decode packets being transmitted via a pirate FM station" Anonymous said.

A demonstration video shows the Airchat tool in use, even managing to pull up Twitter search results for the keyword "Ukraine". While it is clearly not as fast or as graphically rich as a standard internet browser, for someone looking to get crucial information fast it could prove a vital tool.

Anonymous says that Airchat has numerous use cases beyond preventing government agencies like the NSA from spying on citizens: it could serve people living in countries where the internet has been shut down or censored, as when Twitter was banned in Turkey or when the telecommunications network in Crimea was shut down by Russian forces.

NGOs and medical teams working in Africa or disaster zones who need to coordinate aid efforts would also find the solution useful, as would explorers at expedition base camps who want to communicate from remote areas or with rescue teams.

Connecting the world

This is not the first time that Anonymous has tried to create free communications to connect the world.

Since the Arab Spring began in 2010, Anonymous has opened up communication channels in countries where they have been closed, creating internet access points and producing "care packages" that include information about everything from first aid to how to access dial-up internet, as it did in Syria in 2012.

The hacktivist collective has also worked together with the dissident group Telecomix to help activists access banned websites in Bahrain, Egypt, Libya, Jordan, and Zimbabwe.
http://www.ibtimes.co.uk/anonymous-a...access-1445888





Google’s Revamped Gmail Could Take Encryption Mainstream
Klint Finley

Encryption is the best way to protect your online communications from the prying eyes of the National Security Agency. So says NSA whistleblower Edward Snowden.

The rub is that email encryption systems like PGP — short for Pretty Good Privacy — are a real pain to use, especially for people who aren't steeped in the minutiae of computing. That means few people use PGP, and those who do are in danger of using it incorrectly. But it looks like Google is trying to change that. According to Venture Beat, the search giant is working to create a new version of Gmail that makes PGP encryption far easier to use.

Google didn’t respond to our request for comment on the story, and even if the rumors are true, the company is facing an extremely difficult task. But it’s in a better position to take encryption mainstream than anyone else, and such a project is just what the web needs.

The State of Crypto

PGP, first released in 1991, uses a form of encryption known as public-key cryptography. This means that if you use PGP, you create two encryption “keys,” which are basically big chunks of random numbers and letters that email software programs can use to scramble and descramble your messages. Your “public key” is what other people use to encrypt messages they send to you. That’s freely available to the world at large. Then there’s your “private key,” which lets you decipher these encrypted messages. Using your PGP keys, you can also “sign” a message to prove to someone that it was sent by you.
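
The pattern is easy to demonstrate outside of PGP. Here is a minimal sketch using RSA via Python's cryptography package; real PGP adds key formats, hybrid encryption of a per-message session key, and a web of trust on top of this exchange.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # share this one freely

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone with the public key can encrypt to you...
ciphertext = public_key.encrypt(b"meet at noon", oaep)
# ...but only the private key holder can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"

# Signing is the reverse: sign with the private key, verify with the public.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
sig = private_key.sign(b"meet at noon", pss, hashes.SHA256())
public_key.verify(sig, b"meet at noon", pss, hashes.SHA256())  # raises if forged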

PGP is remarkably hard to crack, but it's also hard to use correctly. Researchers at Carnegie Mellon University published a paper in 1999 showing that most people couldn't figure out how to sign and encrypt messages using the then-current version of PGP. Eight years later, another group of Carnegie Mellon researchers published a follow-up paper saying that, although a newer version of PGP made it easy to decrypt messages, most people still struggled with encrypting and signing messages, finding and verifying other people's public encryption keys, and sharing their own keys.

The easiest way to use PGP today is probably a plugin available for both Firefox and Chrome called Mailvelope. It makes it pretty easy to create a PGP key pair and decrypt messages, but there are some limitations. First, you need to download the plugin and either create new PGP keys or import existing ones. And the plugin and your keys will need to be installed on every computer that you plan to use.

And even when you have it installed on all your machines, it doesn't always play nicely with a tool like Gmail. Instead of just letting you type your message in Gmail's own "New Message" interface, Mailvelope opens a separate window for you to type in, then sends the encrypted text back into Gmail. Mailvelope developer Thomas Oberndörfer tells us the plug-in does this because it's impossible to know whether Google will save an unencrypted copy of your text while you're typing. "That means all private data like message content and keys have to be completely isolated from Gmail," he says.

Google, Mailpile, and the Rest

Since Snowden revealed so many of the ways that the NSA is eavesdropping on our online communications, several projects have sprung up to try to solve such problems. Mailpile, for instance, is an open source e-mail client built from the ground up to handle encryption. The idea is that with encryption as a core part of the application, rather than a plugin, the user experience will be much better. But although the Mailpile team is working hard to reproduce as many of Gmail's features as possible — such as a fast search system and a conversation view — there's always a question of whether normal users can be convinced to download the software in the first place.

Meanwhile, a new company called Keybase.io is trying to make it easier to find and verify other people's public keys by tying them to Twitter profiles, personal websites and GitHub accounts.

But Google may be in a better position to solve both the integration and key management issues. The company could build PGP tools directly into the Chrome browser as well as its mobile apps, so that users would be able to retain control over their private keys without having to download special software. And if public keys were associated with user profiles, the discovery and verification of keys could be baked right into Gmail’s address book, all but solving the discovery and verification issue for most users.

Baking features into Google’s already popular applications could go a long way towards getting more users to adopt the tools. But Brennan Novak, a usability designer at Mailpile, tells us that it will still be tricky for Google to manage the transfer of keys back and forth between different devices. And, of course, Google would need to open source the relevant bits of software before it can be trusted.

There’s also no guarantee that Google would do a better job than any of its predecessors at making PGP usable enough to be safe. Google has gained a reputation in recent years of prioritizing engineering over design and usability. But those who remember what search was like before Google, what web based email was like before Gmail, and what mapping software was like before Google Maps may disagree with this assessment.

That said, there are downsides good design can’t solve. If you lose your private key or forget your passphrase, you’d still be out of luck. There’s no way Google could recover it for you. Also, Google wouldn’t be able to scan and index the text of your e-mails. That’s a problem if you need to search for old emails not stored on your own machine. It could be a real issue for Google’s business model as well, which involves scanning the text of emails in order to place contextual advertising.

But if Google was willing to take that advertising hit — and it might, if it meant retaining access to other data, and providing users with more peace of mind — it could bring PGP to a much larger audience.

Beyond Email

There's the added issue that PGP has its own limitations. Even if you encrypt your e-mail, someone who intercepts the message will be able to tell who sent it and who it was sent to. On one hand, the fact that senders are exposed even in encrypted messages could help Google search mail that's stored on a server. But it could be a real security issue for some people. "If you're actually concerned that someone will know who you're communicating with, that's not something that PGP can help," Rainey Reitman, the director of the activism team at the Electronic Frontier Foundation, told us last year.

She says that under some circumstances, better options include real-time communications tools like the Off-the-Record plugin for the Pidgin and Adium instant message clients, or an anonymous file uploading system like the Freedom of the Press Foundation's open source project DeadDrop. Meanwhile, other projects are trying to create entirely new forms of secure communication. PGP creator Phil Zimmermann has teamed up with Ladar Levison of Lavabit — the email service Edward Snowden used — to create a new messaging protocol called Darkmail. Other projects along these lines include BitMessage, SecuShare and Briar.

But as Mailpile developer Bjarni Rúnar Einarsson told us last year: “Email is going to be with us for a long time. We need to do what we can to make it more secure.” And while we applaud Mailpile’s efforts to do that, Google is in an even better position to bring secure mail to the masses, should it choose to do so.
http://www.wired.com/2014/04/google-crypto-gmail/





Should Australians Prepare for Rubber-Hose Cryptanalysis?

Law enforcement peak body wants to make it easier to decrypt communications
Rohan Pearce

So Australians probably don't need to worry about getting their kneecaps broken if they don't hand over their private encryption keys just yet, but the Australian Crime Commission wants changes to the law in order to make it easier for law enforcement to decrypt secret communications.

Appearing yesterday before a Senate committee hearing into potential changes to the Telecommunications (Interception and Access) Act 1979, the ACC's acting CEO, Paul Jevtovic, suggested that some participants in the telco industry are "designing products that support organised crime activity and frustrate law enforcement".

"[i]t is our view if you are manufacturing things like that that you should have an obligation to assist the country in defending itself against organised crime and encryption communications is a classic example of that," Jevtovic.

Pushed by Greens Senator Scott Ludlam, who is chairing the inquiry, Jevtovic acknowledged lawful uses for encryption, but added that "unfortunately organised crime takes what is good technology which helps society, they take it for their own purposes."

"And when we can identify organised crime as having access to it that's when I think industry should be able to help us," the acting ACC CEO added.

A written submission to the inquiry by the ACC advocated for changes to the TIA Act to include an "Obligation imposed on telecommunications service providers to assist law enforcement, including with the decryption of communications."

"The ACC is supportive of measures which require telecommunication service providers, including ancillary service providers, to assist law enforcement with accessing communications where authorised, including offences for not assisting with decrypting communications," as was recommended by a previous parliamentary inquiry, the submission states.

In a number of European nations, not assisting law enforcement organisations with the decryption of data is a criminal offence. For example, the UK's Regulation of Investigatory Powers Act 2000 can require the disclosure of a decryption key necessary to access information "in the interests of national security", "for the purpose of preventing or detecting crime" or "in the interests of the economic well-being of the United Kingdom".

In the US, now-defunct encrypted email provider Lavabit was last year forced to hand over private SSL keys to the FBI, potentially jeopardising the private communications of the service's 400,000 customers.

The Lavabit case drew ire from civil libertarians: "When the court ordered Lavabit to turn over its private encryption keys, it undermined the businesses and technologies we rely on to keep our information safe," an ACLU blog entry argued.

In addition to seeking rules that would force telcos to retain and offer law enforcement access to so-called 'metadata', Judith Lind, executive director, strategy and specialist capabilities at the ACC, told the inquiry that the organisation also wants "assistance from industry and ancillary providers at very much a technical level".

"So sharing knowledge about how their apps work, how their networks work to enable our technicians then to work out how and whether interception can occur," Lind said. "So we're seeking assistance at that level as well as the actual access to the data and services."
http://www.computerworld.com.au/arti...ryptanalysis_/





Activists Want Net Neutrality, NSA Spying Debated at Internet Governance Conference
John Ribeiro

A campaign on the Internet is objecting to the exclusion of issues like net neutrality, the cyberweapons arms race and surveillance by the U.S. National Security Agency from the discussion paper of an Internet governance conference this week in Sao Paulo, Brazil.

A significant section of the participants are also looking for concrete measures and decisions at the conference rather than yet another statement of principles.

The proposed text “lacks any strength,” does not mention NSA’s mass surveillance or the active participation of Internet companies, and fails to propose any concrete action, according to the campaign called Our Net Mundial.

Former NSA contractor Edward Snowden leaked information about the surveillance programs of the U.S., which allegedly included real time access to content on servers of Internet companies like Facebook and Google.

The Global Multistakeholder Meeting on the Future of Internet Governance, also called NETmundial, released Thursday a document to guide the discussions starting Wednesday among representatives from more than 80 countries.

An earlier document leaked by whistle-blower site WikiLeaks proposed international agreements for restraining cyber weapons development and deployment and called for the Internet to remain neutral and free from discrimination. WikiLeaks said the document was prepared for approval by a high-level committee.

Dilma Rousseff, the president of host country Brazil, has been a sharp critic of surveillance by the U.S. after reports that her communications were being spied on by the NSA.

Though the Brazil discussion document does not directly mention NSA surveillance, it refers to the freedom of expression, information and privacy, including avoiding arbitrary or unlawful collection of personal data and surveillance.

The meeting’s call for universal principles partly reflects a desire for interstate agreements that can prevent rights violations such as the NSA surveillance, wrote Internet governance experts Milton Mueller and Ben Wagner in a paper. The Tunis Agenda of the World Summit on the Information Society also called for globally applicable public policy principles for Internet governance.

“But there have been so many Internet principles released in recent years that it is hard to see what the Brazil conference could add,” Mueller and Wagner wrote.

Neelie Kroes, vice president of the European Commission, wrote last week in a letter to NETmundial that she continued to strongly believe “that the outcomes of NETmundial must be concrete and actionable, with clear milestones and with a realistic but ambitious timeline.” She identified a number of areas where “concreteness” could be achieved, including the globalization of the Internet Corporation for Assigned Names and Numbers (ICANN).

The U.S. National Telecommunications and Information Administration said in March it plans to end its 16-year oversight of ICANN. The move appeared to be in response to criticism of U.S. control of the Internet. ICANN’s president Fadi Chehadé has also called for greater accountability for his organization.
http://www.pcworld.com/article/21461...onference.html

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

April 19th, April 12th, April 5th, March 29th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black