P2P-Zone  

05-08-15, 07:53 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - August 8th, '15

Since 2002

"Until Apple fixes the bug, Mac users don't have any good options." – Dan Goodin

August 8th, 2015




New Attack on Tor can Deanonymize Hidden Services with Surprising Accuracy

Deanonymization requires luck but nonetheless shows limits of Tor privacy.
Dan Goodin

Computer scientists have devised an attack on the Tor privacy network that in certain cases allows them to deanonymize hidden service websites with 88 percent accuracy.

Such hidden services allow people to host websites without end users or anyone else knowing the true IP address of the service. The deanonymization requires the adversary to control the Tor entry point for the computer hosting the hidden service. It also requires the attacker to have previously collected unique network characteristics that can serve as a fingerprint for that particular service. Tor officials say the requirements reduce the effectiveness of the attack. Still, the new research underscores the limits to anonymity on Tor, which journalists, activists, and criminals alike rely on to evade online surveillance and monitoring.

"Our goal is to show that it is possible for a local passive adversary to deanonymize users with hidden service activities without the need to perform end-to-end traffic analysis," the researchers from the Massachusetts Institute of Technology and Qatar Computing Research Institute wrote in a research paper. "We assume that the attacker is able to monitor the traffic between the user and the Tor network. The attacker’s goal is to identify that a user is either operating or connected to a hidden service. In addition, the attacker then aims to identify the hidden service associated with the user."

The attack works by gathering the network data of a pre-determined list of hidden services in advance. By analyzing patterns in the number of packets passing between the hidden service and the entry guard it uses to access Tor, the researchers were able to obtain a unique fingerprint of each service. They were later able to use the fingerprint to identify the service even though they were unable to decrypt the traffic it was sending. In a press release, the researchers elaborated:

The researchers’ attack requires that the adversary’s computer serve as the guard on a Tor circuit. Since guards are selected at random, if an adversary connects enough computers to the Tor network, the odds are high that, at least on some occasions, one or another of them would be well-positioned to snoop.

During the establishment of a circuit, computers on the Tor network have to pass a lot of data back and forth. The researchers showed that simply by looking for patterns in the number of packets passing in each direction through a guard, machine-learning algorithms could, with 99 percent accuracy, determine whether the circuit was an ordinary Web-browsing circuit, an introduction-point circuit, or a rendezvous-point circuit. Breaking Tor’s encryption wasn’t necessary.

Furthermore, by using a Tor-enabled computer to connect to a range of different hidden services, they showed that a similar analysis of traffic patterns could identify those services with 88 percent accuracy. That means that an adversary who lucked into the position of guard for a computer hosting a hidden service, could, with 88 percent certainty, identify it as the service’s host.

Similarly, a spy who lucked into the position of guard for a user could, with 88 percent accuracy, tell which sites the user was accessing.
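To make the mechanics concrete, here is a minimal, purely illustrative sketch of a count-based traffic classifier in the spirit of the attack described above; the packet-count features and the off-the-shelf model are fabricated stand-ins for demonstration, not the researchers' actual pipeline.

```python
# Illustrative sketch only: a toy traffic classifier in the spirit of the
# attack described above. The features (per-window packet counts in each
# direction) and the model are assumptions for demonstration purposes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
CIRCUIT_TYPES = ["general", "introduction-point", "rendezvous-point"]

def synthetic_trace(kind):
    """Fabricated packet-count profile for one circuit (hypothetical numbers)."""
    base = {"general": (40, 60), "introduction-point": (25, 30),
            "rendezvous-point": (55, 45)}[kind]
    outgoing = rng.poisson(base[0], size=10)   # packets toward the guard, per window
    incoming = rng.poisson(base[1], size=10)   # packets from the guard, per window
    return np.concatenate([outgoing, incoming])

X = np.array([synthetic_trace(k) for k in CIRCUIT_TYPES for _ in range(300)])
y = np.array([k for k in CIRCUIT_TYPES for _ in range(300)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
```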


The research is sure to interest governments around the world, including the US. On at least two occasions over the past few years, FBI agents have exploited software vulnerabilities, once in Adobe Flash and once in Mozilla Firefox, to identify criminal suspects. Recently unsealed court documents also show the FBI seizing a Tor-hidden child porn site and allowing it to run for weeks so agents could gather evidence on visitors.

In an e-mail, Tor project leader Roger Dingledine said the requirements of the attack greatly limited its effectiveness in real-world settings. First, he said, the adversary must control one of the entry guards a hidden service is using. Such entry guards in theory are assigned randomly, so attackers would have to operate a large number of Tor nodes to have a reasonable expectation of seeing traffic of a given hidden service. Additionally, he cited research from last year arguing that researchers routinely exaggerate the risk of website fingerprinting on anonymity.

He went on to question the "classifier" algorithm that allowed the researchers to identify certain traffic as belonging to a Tor hidden service. It wouldn't be hard to thwart it, he said, by adding random padding to the data being sent.

"It's not surprising that their classifier basically stops working in the face of more padding," he wrote. "Classifiers are notoriously brittle when you change the situation on them. So the next research step is to find out if it's easy or hard to design a classifier that isn't fooled by padding.

The full text of Dingledine's e-mail is below:

This is a well-written paper. I enjoyed reading it, and I'm glad the researchers are continuing to work in this space.

First, for background, run (don't walk) to Mike Perry's blog post explaining why website fingerprinting papers have historically overestimated the risks for users:
https://blog.torproject.org/blog/cri...inting-attacks
and then check out Marc Juarez et al's followup paper from last year's ACM CCS that backs up many of Mike's concerns:
http://freehaven.net/anonbib/#ccs2014-critical

To recap, this new paper describes three phases. In the first phase, they hope to get lucky and end up operating the entry guard for the Tor user they're trying to target. In the second phase, the target user loads some web page using Tor, and they use a classifier to guess whether the web page was in onion-space or not. Lastly, if the first classifier said "yes it was", they use a separate classifier to guess which onion site it was.

The first big question comes in phase three: is their website fingerprinting classifier actually accurate in practice? They consider a world of 1000 front pages, but ahmia.fi and other onion-space crawlers have found millions of pages by looking beyond front pages. Their 2.9% false positive rate becomes enormous in the face of this many pages—and the result is that the vast majority of the classification guesses will be mistakes.

For example, if the user loads ten pages, and the classifier outputs a guess for each web page she loads, will it output a stream of "She went to Facebook!" "She went to Riseup!" "She went to Wildleaks!" while actually she was just reading posts in a Bitcoin forum the whole time? Maybe they can design a classifier that works well when faced with many more web pages, but the paper doesn't show one, and Marc Juarez's paper argues convincingly that it's hard to do.

The second big question is whether adding a few padding cells would fool their "is this a connection to an onion service" classifier. We haven't tried to hide that in the current Tor protocol, and the paper presents what looks like a great classifier. It's not surprising that their classifier basically stops working in the face of more padding though: classifiers are notoriously brittle when you change the situation on them. So the next research step is to find out if it's easy or hard to design a classifier that isn't fooled by padding.

I look forward to continued attention by the research community to work toward answers to these two questions. I think it would be especially fruitful to look also at true positive rates and false positives of both classifiers together, which might show more clearly (or not) that a small change in the first classifier has a big impact on foiling the second classifier. That is, if we can make it even a little bit more likely that the "is it an onion site" classifier guesses wrong, we could make the job of the website fingerprinting classifier much harder because it has to consider the billions of pages on the rest of the web too.

http://arstechnica.com/security/2015...sing-accuracy/





Shoring Up Tor

Researchers mount successful attacks against popular anonymity network — and show how to prevent them.
Larry Hardesty

With 2.5 million daily users, the Tor network is the world’s most popular system for protecting Internet users’ anonymity. For more than a decade, people living under repressive regimes have used Tor to conceal their Web-browsing habits from electronic surveillance, and websites hosting content that’s been deemed subversive have used it to hide the locations of their servers.

Researchers at MIT and the Qatar Computing Research Institute (QCRI) have now demonstrated a vulnerability in Tor’s design. At the Usenix Security Symposium this summer, they will show that an adversary could infer a hidden server’s location, or the source of the information reaching a given Tor user, by analyzing the traffic patterns of encrypted data passing through a single computer in the all-volunteer Tor network.

Fortunately, the same paper also proposes defenses, which representatives of the Tor project say they are evaluating for possible inclusion in future versions of the Tor software.

“Anonymity is considered a big part of freedom of speech now,” says Albert Kwon, an MIT graduate student in electrical engineering and computer science and one of the paper’s first authors. “The Internet Engineering Task Force is trying to develop a human-rights standard for the Internet, and as part of their definition of freedom of expression, they include anonymity. If you’re fully anonymous, you can say what you want about an authoritarian government without facing persecution.”

Layer upon layer

Sitting atop the ordinary Internet, the Tor network consists of Internet-connected computers on which users have installed the Tor software. If a Tor user wants to, say, anonymously view the front page of The New York Times, his or her computer will wrap a Web request in several layers of encryption and send it to another Tor-enabled computer, which is selected at random. That computer — known as the guard — will peel off the first layer of encryption and forward the request to another randomly selected computer in the network. That computer peels off the next layer of encryption, and so on.

The last computer in the chain, called the exit, peels off the final layer of encryption, exposing the request’s true destination: the Times. The guard knows the Internet address of the sender, and the exit knows the Internet address of the destination site, but no computer in the chain knows both. This routing scheme, with its successive layers of encryption, is known as onion routing, and it gives the network its name: “Tor” is an acronym for “the onion router.”
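For readers unfamiliar with the scheme, here is a toy sketch of that layered wrapping and peeling, assuming one symmetric key per relay; real Tor negotiates circuit keys hop by hop and moves fixed-size cells, so this is conceptual only.

```python
# Conceptual sketch of onion routing: wrap a request in one encryption layer
# per relay, then peel layers hop by hop. Real Tor negotiates circuit keys
# and uses fixed-size cells; Fernet keys stand in here purely for illustration.
from cryptography.fernet import Fernet

relays = ["guard", "middle", "exit"]
keys = {name: Fernet(Fernet.generate_key()) for name in relays}

# The client wraps the request for the exit first and the guard last,
# so the guard's layer is outermost.
payload = b"GET https://www.nytimes.com/ HTTP/1.1"
for name in reversed(relays):
    payload = keys[name].encrypt(payload)

# Each relay peels exactly one layer; only the exit sees the destination.
for name in relays:
    payload = keys[name].decrypt(payload)
    print(f"{name} peeled a layer")
print(payload)   # b'GET https://www.nytimes.com/ HTTP/1.1'
```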

In addition to anonymous Internet browsing, however, Tor also offers what it calls hidden services. A hidden service protects the anonymity of not just the browser, but the destination site, too. Say, for instance, that someone in Iran wishes to host a site archiving news reports from Western media but doesn’t want it on the public Internet. Using the Tor software, the host’s computer identifies Tor routers that it will use as “introduction points” for anyone wishing to access its content. It broadcasts the addresses of those introduction points to the network, without revealing its own location.

If another Tor user wants to browse the hidden site, both his or her computer and the host’s computer build Tor-secured links to the introduction point, creating what the Tor project calls a “circuit.” Using the circuit, the browser and host identify yet another router in the Tor network, known as a rendezvous point, and build a second circuit through it. The location of the rendezvous point, unlike that of the introduction point, is kept private.

Traffic fingerprinting

Kwon devised an attack on this system with joint first author Mashael AlSabah, an assistant professor of computer science at Qatar University, a researcher at QCRI, and, this year, a visiting scientist at MIT; Srini Devadas, the Edwin Sibley Webster Professor in MIT’s Department of Electrical Engineering and Computer Science; David Lazar, another graduate student in electrical engineering and computer science; and QCRI’s Marc Dacier.

The researchers’ attack requires that the adversary’s computer serve as the guard on a Tor circuit. Since guards are selected at random, if an adversary connects enough computers to the Tor network, the odds are high that, at least on some occasions, one or another of them would be well-positioned to snoop.

During the establishment of a circuit, computers on the Tor network have to pass a lot of data back and forth. The researchers showed that simply by looking for patterns in the number of packets passing in each direction through a guard, machine-learning algorithms could, with 99 percent accuracy, determine whether the circuit was an ordinary Web-browsing circuit, an introduction-point circuit, or a rendezvous-point circuit. Breaking Tor’s encryption wasn’t necessary.

Furthermore, by using a Tor-enabled computer to connect to a range of different hidden services, they showed that a similar analysis of traffic patterns could identify those services with 88 percent accuracy. That means that an adversary who lucked into the position of guard for a computer hosting a hidden service, could, with 88 percent certainty, identify it as the service’s host.

Similarly, a spy who lucked into the position of guard for a user could, with 88 percent accuracy, tell which sites the user was accessing.

To defend against this type of attack, “We recommend that they mask the sequences so that all the sequences look the same,” AlSabah says. “You send dummy packets to make all five types of circuits look similar.”
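A rough sketch of that dummy-packet idea, assuming the defender simply tops every observation window up to a common packet count; the fixed target profile here is an arbitrary illustration, not Tor's actual padding design.

```python
# Illustrative sketch of the padding defence: pad each per-window packet
# count up to a common target so different circuit types become
# indistinguishable to a count-based classifier. The target is an
# arbitrary assumption for demonstration.
def pad_counts(counts, target=80):
    """Return (padded_counts, dummy_cells_added_per_window)."""
    padded, dummies = [], []
    for c in counts:
        extra = max(0, target - c)
        padded.append(c + extra)
        dummies.append(extra)
    return padded, dummies

observed = [42, 57, 61, 38]          # real cells per window (example values)
padded, dummies = pad_counts(observed)
print(padded)    # [80, 80, 80, 80] -> all windows now look alike
print(dummies)   # cover traffic that must actually be sent
```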

“For a while, we’ve been aware that circuit fingerprinting is a big issue for hidden services,” says David Goulet, a developer with the Tor project. “This paper showed that it’s possible to do it passively — but it still requires an attacker to have a foot in the network and to gather data for a certain period of time.”

“We are considering their countermeasures as a potential improvement to the hidden service,” he adds. “But I think we need more concrete proof that it definitely fixes the issue.”
https://newsoffice.mit.edu/2015/tor-vulnerability-0729





What Happened When We Got Subpoenaed Over Our Tor Exit Node

We've run a Tor exit-node for years. In June, we got the nightmare Tor operator scenario: a federal subpoena (don't worry, it ended surprisingly well!)
Cory Doctorow

Tor, The Onion Router, is a privacy and anonymity network that bounces traffic around the Internet in nested cryptographic wrappers that make it much harder to tell who its users are and what they're doing. It's especially hated by the NSA and GCHQ.

Many people run Tor nodes, but only a few run "exit nodes" through which traffic exits the Tor network and goes out to the public, normal Internet. Having a lot of exit nodes, with high-speed connections, is critical to keeping Tor users safe and secure. We wanted to do our bit for allowing, for example, Bahraini and Chinese dissidents to communicate out of view of their domestic spy agencies, so we turned some of our resources over to Tor in 2012, including access to our blazing-fast Internet connection.

The nightmare scenario for Tor exit-node operators is that you'll get blamed for the stuff that people do using your node. In Germany and Austria, prosecutors have actually brought criminal action against Tor exit-node operators.

So we were a little freaked out in June when an FBI agent sent us a subpoena ordering us to testify before a federal grand jury in New Jersey, with all our logs for our Tor exit node.

We contacted our lawyer, the hard-fightin' cyber-lawyer Lauren Gelman, and she cooled us out. She sent the agent this note:

Special Agent XXXXXX.

I represent Boing Boing. I just received a Grand Jury Subpoena to Boing Boing dated June 12, 2015 (see attached).

The Subpoena requests subscriber records and user information related to an IP address. The IP address you cite is a TOR exit node hosted by Boing Boing (please see: http://tor-exit.boingboing.net/). As such, Boing Boing does not have any subscriber records, user information, or any records at all related to the use of that IP address at that time, and thus cannot produce any responsive records.

I would be happy to discuss this further with you if you have any questions.


And that was it.

The FBI agent did his homework, realized we had no logs to give him, and no one had to go to New Jersey. Case closed. For us, anyway. Not sure what went down with the grand jury.

I'm not saying that everyone who gets a federal subpoena for running a Tor exit node will have this outcome, but the only Tor legal stories that rise to the public's attention are the horrific ones. Here's a counterexample: Fed asks us for our records, we say we don't have any, fed goes away.

Only you can decide whether running a Tor exit-node fits within your risk-tolerance. But as you decide whether to contribute to the global network of civic-minded volunteers who provide bandwidth and computation to help keep Internet users free and safe, keep our story in mind along with all the scare stories you've heard.
http://boingboing.net/2015/08/04/wha...e-fbi-sub.html





Kim Dotcom Planning New Open-Source File-Sharing Service
David Murphy

Is your data safe on Mega? Not according to Mega's founder, the headline-grabbing Kim Dotcom. According to Dotcom, speaking in a Q&A session over at Slashdot earlier this week, he's basically been ousted from ownership of the service he created back in January of 2013. He no longer works for Mega, nor does he even own any shares of Mega.

And that explains why he's being a little more frank about Mega than he has been previously.

"I'm not involved in Mega anymore. Neither in a managing nor in a shareholder capacity. The company has suffered from a hostile takeover by a Chinese investor who is wanted in China for fraud. He used a number of straw-men and businesses to accumulate more and more Mega shares. Recently his shares have been seized by the NZ government. Which means the NZ government is in control. In addition Hollywood has seized all the Megashares in the family trust that was setup for my children. As a result of this and a number of other confidential issues I don't trust Mega anymore," Dotcom wrote.

Mega hasn't taken Dotcom's allegations lightly, however. According to a company representative, only 13 percent of its total shareholdings are subject to freezing orders right now—six percent owned by Dotcom's wife, which was frozen by a New Zealand court in November of last year, and seven percent frozen last August.

"Mega is not a party to either of the above court proceedings," a representative said.

"More than 75 percent of shareholders have supported recent equity issues, so there has not been any 'hostile takeover', contrary to Mr Dotcom's assertion. Those shareholders who have decided not to subscribe to recent issues have been diluted accordingly. That has been their choice."

According to Dotcom, there's a bit more to the story that still has to hit the public eye.

"I will issue a detailed statement about the status of #Mega next week. Then you can make an educated decision if you still want to use it," Dotcom tweeted this past week.

Regardless of who's right on this one, Dotcom is looking to possibly compete against the very company he founded in the near future. Once his non-compete clause is up toward the end of this year, Dotcom said that he's going to create a non-profit, open-source competitor to Mega.

"I want to give everyone free, unlimited and encrypted cloud storage with the help of donations from the community to keep things going," Dotcom wrote.

If he goes forward with his plan, this will be the third file-sharing site Dotcom has built: MegaUpload, Mega, and whatever he calls his new open-source idea. Here's hoping it can survive the legal scrutiny his previous two attempts attracted.
http://www.pcmag.com/article2/0,2817,2488905,00.asp





Foxtel to Try to Have File Sharing Websites Blocked

Foxtel will take legal action after receiving advice on how it can use newly passed anti-piracy legislation to have file sharing websites blocked.

If the Pay TV provider does launch a case it will aim to block piracy websites like The Pirate Bay, where thousands of Australians download popular series such as Game of Thrones, which Foxtel has exclusive rights to in Australia, news.com.au reports.

A Foxtel spokesman said the Copyright Amendment (Online Infringement) Bill gives copyright holders "similar rights in relation to foreign websites which steal their content to those they would have if the sites were based in Australia."

The spokesman said Foxtel had previously been powerless to take action against file sharing websites because they were not based in Australia.

"Foxtel and other rights holders are currently assessing what action can and should be taken to give effect to the legislation," the spokesman said.

In order to have a site blocked the Pay TV provider must prove that the site's primary function is to host copyrighted content.

Whether people will be significantly impacted by the blocking of the site must also be considered as well as whether blocking a site is in the public interest.

If the bid is successful, internet service providers would be forced to block their users' access to such sites.

However, this will not guarantee that Australians are unable to download copyrighted material, because similar measures introduced abroad have routinely been circumvented.
http://www.9news.com.au/national/201...bsites-blocked





Memo to MPAA: Congress Didn’t Pass SOPA

Studios' suit demands "entire Internet" block, filtering infringing sites, EFF says.
David Kravets

Remember 2012, when there was that giant Internet backlash against the Stop Online Piracy Act that, amazingly, Congress listened to and hence the content-industry-backed legislation died a loud death?

Well apparently the Motion Picture Association of America didn't get the memo that SOPA failed. SOPA provided for DNS-redirecting of blacklisted sites. It gave the Justice Department the power to seek court orders from search engines like Google not to render search results of infringing websites.

What's more, the failed proposal also would have codified that content owners like the MPAA and Recording Industry Association of America have the legal backing to seek court orders demanding that judges require financial institutions and ad networks to stop doing business with sites that content owners prove are dedicated to infringing activity.

According to Mitch Stoltz, an attorney for the Electronic Frontier Foundation, the MPAA is invoking SOPA-like powers to take down sites dedicated to infringing motion pictures:

...the studios are asking for one court order to bind every domain name registrar, registry, hosting provider, payment processor, caching service, advertising network, social network, and bulletin board—in short, the entire Internet—to block and filter a site called MovieTube. If they succeed, the studios could set a dangerous precedent for quick website blocking with little or no court supervision, and with Internet service and infrastructure companies conscripted as enforcers. That precedent would create a powerful tool of censorship—which we think should be called SOPA power, given its similarity to the ill-fated SOPA bill. It will be abused, which is why it’s important to stop it from being created in the first place.

The MPAA suit against MovieTube and its dozen affiliated sites demands:

4. Ordering that third parties providing services used in connection with any of the MovieTube Websites and/or domain names for MovieTube Websites, including without limitation, web hosting providers, back-end service providers, digital advertising service providers, search-based online advertising services (such as through paid inclusion, paid search results, sponsored search results, sponsored links, and Internet keyword advertising), domain name registration privacy protection services, providers of social media services (e.g., Facebook and Twitter) and user generated and online content services (e.g., YouTube, Flickr and Tumblr), who receive actual notice of this Order, shall within three (3) days of receipt of this Order, cease providing or disable provision of such services to (i) Defendants in relation to the Infringing Copies; and/or (ii) any and all of the MovieTube Websites.

5. Ordering that in accordance with 15 U.S.C. § 1116(a), 17 U.S.C. § 504(b) and this Court's inherent equitable power to issue provisional remedies ancillary to its authority to provide final equitable relief, Defendants and their officers, servants, employees, agents and any persons in active concert or participation with them, and banks, savings and loan associations, payment processors or other financial institutions, payment providers, third-party processors and advertising service providers (including but not limited to AdCash, Propeller Ads Media, MGID and Matomy Media Group), who receive actual notice of this Preliminary Injunction Order, shall immediately locate and restrain any accounts connected to Defendants' operations or the MovieTube Websites; and shall not allow such funds to be transferred or withdrawn or allow any diminutions to be made by Defendants from such accounts, pending further order of this Court or notification otherwise by Plaintiffs.


Kate Bedingfield, an MPAA spokeswoman, strongly defended the suit in an e-mail to Ars. "The MPAA has filed a civil suit against MovieTube.cc and a ring of affiliated sites whose explicit purpose is to illegally distribute and stream stolen copies of the latest movies and television programs for a profit," she said. "Shutting down sites like MovieTube.cc helps protect the livelihoods of millions of film and TV workers worldwide and ensure the continued growth of a legal and vibrant creative marketplace."

Stoltz said there's a larger issue at hand:

If the court signs this proposed order, the MPAA companies will have the power to force practically every Internet company within the reach of US law to help them disappear the MovieTube websites. Regardless of whether those sites are engaged in copyright infringement or not, this is a scary amount of power to confer on the movie studios. And it looks even worse at scale: if orders like this become the norm, Internet companies large and small will have to build infrastructure resembling the Great Firewall of China in order to comply.

Stoltz added that US judges apparently haven't gotten the SOPA-didn't-pass memo, either:

So far this year, entertainment companies have used these SOPA-like orders to take down a site that promised to stream the recent boxing match between Floyd Mayweather and Manny Pacquiao, and another to make Blu-ray ripping software disappear. Another case would have forced the content delivery network CloudFlare to filter its service for any sites that had the word "grooveshark" in their names. CloudFlare and EFF were successful in getting that order modified to take away the filtering requirement.

A hearing has been set for August 18 in New York federal court. The operators of the MovieTube websites have not been located.
http://arstechnica.com/tech-policy/2...dnt-pass-sopa/





The TPP Copyright Chapter Leaks: Canada May Face Website Blocking, New Criminal Provisions & Term Extension
Michael Geist

KEI this morning released the May 2015 draft of the copyright provisions in the Trans Pacific Partnership (copyright, ISP annex, enforcement). The leak appears to be the same version that was covered by the EFF and other media outlets earlier this summer. As such, the concerns remain the same: anti-circumvention rules that extend beyond the WIPO Internet treaties, additional criminal rules, the extension of copyright term, increased border measures, mandatory statutory damages, and expanding ISP liability rules, including the prospect of website blocking for Canada.

Beyond the substantive concerns highlighted below, there are two key takeaways. First, the amount of disagreement within the chapter is striking. As of just a few months ago, there were still many critical unresolved issues with widespread opposition to (predominantly) U.S. proposals. Government ministers may continue to claim that the TPP is nearly done, but the parties still have not resolved longstanding copyright issues.

Second, from a Canadian perspective, the TPP could require a significant overhaul of current Canadian law. If Canada caves on copyright, changes would include extending the term of copyright, implementing new criminal provisions, creating new restrictions on Internet retransmission, and adding the prospect of website blocking for Internet providers. There is also the possibility of further border measures requirements just months after Bill C-8 (the anti-counterfeiting bill) received royal assent.

Given the extensive debate on copyright during the 2012 reforms, the TPP upsets the balance the Canadian government struck, mandating reforms without public consultation or debate. The government has granted itself the power to continue to negotiate the TPP during the election period, but all the major parties should publicly declare where they stand on these issues.

Further discussion of key provisions is posted below.

Copyright Term

Unsurprisingly, the U.S. wants all TPP countries to ensure that their copyright term of protection is at least life of the author plus 70 years. That would require countries such as Canada, Japan, New Zealand, and Malaysia to extend their terms by 20 additional years beyond the international standard found in the Berne Convention. The length of term within the TPP is currently in square brackets, suggesting that countries have still not reached a final decision (though expectations are that Canada will cave on the issue).

The Importance of the Public Domain

The general provisions section of the IP chapter contains a notable dispute between Canada and the U.S. over the public domain. There is an article that emphasizes the importance of taking into account the interests of rights holders, service providers, users and the public. Canada and Chile have proposed additional language to acknowledge “the importance of preserving the public domain.” The U.S. and Japan oppose the reference.

Limitations and Exceptions

The copyright provisions include an article on limitations and exceptions that references “criticism; comment; news reporting; teaching, scholarship, research, and other similar purposes; and facilitating access to published works for persons who are blind, visually impaired, or otherwise print disabled.” There is also a footnote recognizing the Marrakesh Treaty and one that acknowledges that commercial uses may be legitimate purposes under exceptions and limitations. This article is consistent with current Canadian law.

Internet Retransmission

The U.S., Singapore, and Peru support a provision granting rights holders stronger rights over Internet retransmission of television signals. The provision states:

No Party may permit the retransmission of television signals (whether terrestrial, cable, or satellite) on the Internet without the authorization of the right holder or right holders of the content of the signal

Canada – along with Vietnam, Malaysia, New Zealand, Mexico, Chile, Brunei, and Japan – all oppose the provision.

Anti-Circumvention Rules

The DMCA’s anti-circumvention rules (often referred to as digital lock rules) make it into the chapter with restrictions that extend beyond those required by the WIPO Internet treaties. Earlier opposition to mandatory criminal penalties for some circumvention has disappeared as the countries now agree that it is a requirement. The TPP permits some exceptions (there are some found in Canadian law), subject to strict limitations.

In addition to the anti-circumvention rules, there are also provisions on rights management information. Canada currently stands alone in opposing mandatory criminal penalties for rights management information violations (for example, making available copies of works knowing that rights management information has been removed). If Canada caves on the issue, the digital lock and rights management information provisions in the Copyright Act would require amendment by adding new criminal penalties.

ISP Liability

The liability of Internet service providers is currently the subject of a lengthy addendum that is complicated by different approaches in the varying TPP countries. The primary approach is to create a legal requirement for ISPs to cooperate with anti-infringement activities in return for limits on liability. The key requirements include a notice-and-takedown system similar to that found in the United States. However, there are also flexibilities included for other countries and a complete carve-out for Canada.

Given that the Canadian government invested significant political capital in the new notice-and-notice system, Canada and the U.S. have proposed an annex to the IP chapter that exempts countries from the ISP requirements provided they have rules that look a lot like the current Canadian copyright rules. These include a notice-and-notice system, a provision creating liability for those that enable infringement, and a search engine content removal provision. It is worth noting that several countries, including Chile, Vietnam, Brunei, and Peru oppose the concept of an annex to address the legal system of one country.

There is potentially one critical additional requirement that would be added to Canadian law – website blocking. The provision currently states that the country would “induce”

Internet service providers carrying out the function referred to in paragraph 2(c) to remove or disable access to material upon becoming aware of a decision of a court to the effect that the person storing the material infringes copyright in the material.

The word “induce” is bracketed, suggesting that there is still some disagreement on the legal requirement associated with the issue. It is not clear what “induce” means in this context, but it seems likely that the U.S. is pushing Canada to create a new website blocking requirement in return for acceptance of the notice-and-notice system.

Copyright Misuse and Abuse

Virtually all countries support a provision that allows for compensation where a rights holder has abused the enforcement powers. The proposal states:

Each Party shall ensure that its judicial authorities shall have the authority to order a party at whose request measures were taken and who has abused enforcement procedures to provide the party wrongfully enjoined or restrained adequate compensation for the injury suffered because of such abuse. The judicial authorities shall also have the authority to order the applicant to pay the defendant expenses, which may include appropriate attorney’s fees.

The lone holdout? The U.S., which opposes the provision.

Statutory Damages

There remains a significant dispute over the inclusion of statutory damages. The U.S. wants all countries to be required to have them. Many TPP countries, including New Zealand, Japan, Mexico, Australia, Brunei, and Malaysia, oppose a mandatory requirement. Canada already has statutory damages within the law.

Border Measures

There remains considerable disagreement over the border measures provisions. Having just established new rules in Bill C-8 (the anti-counterfeiting bill), Canada is clearly opposed to reopening the issue. It has therefore proposed adding language clarifying that the rules do not apply to “grey market” goods (i.e., goods legally sold first in another country and exported elsewhere) or to in-transit shipments that are destined for another country. There are still many proposals in this section with some attempts to find compromise among the various TPP parties.
http://www.michaelgeist.ca/2015/08/t...erm-extension/





New Nightmare Exploit Cracks Cloud-Based File Sharing Services Wide Open
Maria Deutscher

File sharing providers such as Dropbox Inc. and Box Inc. have managed to maintain an impressive security record in spite of safeguarding vast amounts of corporate data that represents a massive target for hackers. But while their backend infrastructure may be protected, the local clients through which users synchronize their data to that backend are an entirely different story.

That’s the revelation from a keynote held at the annual Black Hat security conference this week by researchers with threat intelligence outfit Imperva Inc., who have developed a tool that exploits that sharing mechanism to provide unhindered access to documents stored in file lockers. The vulnerability lies in the way that the services verify changes to data.

Dropbox, Box and most of the other major providers assign a cryptographic token to the device from which a user accesses their account that serves as a placeholder for their login credentials to guard against interception. Whenever new files or updates are synchronized to the backend, the key is rechecked to confirm the source of the changes.

That provides a much more practical alternative to having workers re-enter their usernames and passwords every time the client on their local machine connects to their cloud-based folder. The problem is that top providers allow tokens to be shared among devices in order to accommodate the new platforms on which users spend more and more of their time, which means that all a hacker has to do is get their hands on a copy.
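The danger is easiest to see in a sketch. Assuming a hypothetical REST endpoint that accepts a stored device token in place of credentials (the URL and header layout below are placeholders, not any provider's real API), whoever holds the token can issue the same request from anywhere:

```python
# Hypothetical sketch: a sync client authenticating with a stored device token
# instead of a username/password. Anyone who copies the token value can issue
# the same request from another machine. The endpoint and header layout are
# placeholders, not any provider's real API.
import requests

DEVICE_TOKEN = "example-device-token"        # normally read from local app data

def list_synced_files(token):
    resp = requests.get(
        "https://sync.example.com/v1/files",      # placeholder endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# The same call works wherever the token is presented from, which is exactly
# why a token that survives password changes defeats that defence.
print(list_synced_files(DEVICE_TOKEN))
```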

And as Imperva has discovered, that can be accomplished with only a few temporary changes to the configuration of the targeted machine that are minor enough to escape detection by common virus scanners. The main trick is convincing the user to let the changes be executed, which its researchers achieved through old-fashioned social engineering in the form of a deceptive browser plugin.

Once the attacker has their hands on the token, the synchronization mechanism can be diverted to replicate files to a folder under their control or inject malicious code into documents to infect the user’s device. That’s an especially worrying prospect since the malware can simply be deleted after a successful installation, which makes it much harder to identify the source of the breach.

But the worst part is that the token is not refreshed with password changes, which means that the exploit sidesteps one of the main defense mechanisms with which large organizations protect their users from attack. That leaves organizations to discover breaches only after the fact, something that CIOs simply can’t afford.

As a result, users of Box, Dropbox, Microsoft Corp.’s OneDrive and Google Drive can expect major security updates to their clients in the coming weeks and months. Until then, however, hackers will no doubt do their best to seize this newly found opportunity to try and compromise the world’s many cloud-driven organizations.
http://siliconangle.com/blog/2015/08...ces-wide-open/





Hackers Exploit ‘Flash’ Vulnerability in Yahoo Ads
Dino Grandoni

For seven days, hackers used Yahoo’s ad network to send malicious bits of code to computers that visit Yahoo’s collection of heavily trafficked websites, the company said on Monday.

The attack, which started on July 28, was the latest in a string of attacks exploiting Internet advertising networks, which are designed to reach millions of people online. It also highlighted growing anxiety over a much-used graphics program called Adobe Flash, which has a history of security issues that have irked developers at Silicon Valley companies.

“Right now, the bad guys are really enjoying this,” said Jérôme Segura, a security researcher at Malwarebytes, the security company that uncovered the attack. “Flash for them was a godsend.”

The scheme, which Yahoo shut down on Monday, worked like this: A group of hackers bought ads across the Internet giant’s sports, news and finance sites. When a computer — in this case, one running Windows — visited a Yahoo site, it downloaded malware code.

From there, the malware hunted for an out-of-date version of Adobe Flash, which it could use to commandeer the computer — either holding it for ransom until the hackers were paid off or discreetly directing its browser to websites that paid the hackers for traffic.

“Attacking Yahoo’s visitors would be enormously profitable for criminals,” said Vadim Kotov, a malware researcher at Bromium Labs, a software company, who was not involved with uncovering this attack. “So it makes sense that you’d see this particular type of attack there.”

Attacks on advertising networks have been on the rise, Mr. Kotov and other researchers say. Hackers are able to use the advertising networks themselves, built for targeting specific demographics of Internet users, to find vulnerable machines.

While Yahoo acknowledged the attack, the company said that it was not nearly as big as Malwarebytes had portrayed it to be.

“We take all potential security threats seriously,” a Yahoo spokeswoman said in a statement. “With that said, the scale of the attack was grossly misrepresented in initial media reports, and we continue to investigate the issue.”

“In terms of how many people were served a malicious ad, only Yahoo would really know,” Mr. Segura said. But he added: “This is one of the largest attacks we’ve seen in recent months.”

Neither company could say exactly how many people were affected.

After news of the attack was revealed, Adobe asked users to update Flash so their computers would no longer be vulnerable.

“The majority of attacks we are seeing are exploiting software installations that are not up-to-date on the latest security updates,” said Wiebke Lips, a spokeswoman for Adobe.
http://bits.blogs.nytimes.com/2015/0...-in-yahoo-ads/





Firefox Ad Exploit Found in the Wild
Daniel Veditz

Yesterday morning, August 5, a Firefox user informed us that an advertisement on a news site in Russia was serving a Firefox exploit that searched for sensitive files and uploaded them to a server that appears to be in Ukraine. This morning Mozilla released security updates that fix the vulnerability. All Firefox users are urged to update to Firefox 39.0.3. The fix has also been shipped in Firefox ESR 38.1.1.

The vulnerability comes from the interaction of the mechanism that enforces JavaScript context separation (the “same origin policy”) and Firefox’s PDF Viewer. Mozilla products that don’t contain the PDF Viewer, such as Firefox for Android, are not vulnerable. The vulnerability does not enable the execution of arbitrary code but the exploit was able to inject a JavaScript payload into the local file context. This allowed it to search for and upload potentially sensitive local files.

The files it was looking for were surprisingly developer focused for an exploit launched on a general audience news site, though of course we don’t know where else the malicious ad might have been deployed. On Windows the exploit looked for subversion, s3browser, and Filezilla configuration files, .purple and Psi+ account information, and site configuration files from eight different popular FTP clients. On Linux the exploit goes after the usual global configuration files like /etc/passwd, and then in all the user directories it can access it looks for .bash_history, .mysql_history, .pgsql_history, .ssh configuration files and keys, configuration files for Remmina, Filezilla, and Psi+, text files with “pass” and “access” in the names, and any shell scripts. Mac users are not targeted by this particular exploit but would not be immune should someone create a different payload.

The exploit leaves no trace it has been run on the local machine. If you use Firefox on Windows or Linux it would be prudent to change any passwords and keys found in the above-mentioned files if you use the associated programs. People who use ad-blocking software may have been protected from this exploit depending on the software and specific filters being used.
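For Linux users who want to know which credentials to rotate, a small audit script along these lines can list which of the files named above are present; the path list is a partial reconstruction from the description, not an exhaustive match for what the exploit searched.

```python
# Defensive helper: list which of the files mentioned above are present in
# your home directory on Linux, so you know which credentials to rotate.
# The path list is a partial reconstruction from the description, not a
# complete mirror of what the exploit searched for.
import glob
import os

HOME = os.path.expanduser("~")
candidates = [
    ".bash_history", ".mysql_history", ".pgsql_history",
    ".ssh/config", ".ssh/id_rsa", ".ssh/id_dsa",
    ".config/filezilla/*.xml", ".purple/accounts.xml",
]

for pattern in candidates:
    for path in glob.glob(os.path.join(HOME, pattern)):
        print("found:", path)
```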
https://blog.mozilla.org/security/20...d-in-the-wild/





Browser Add-On Prevents Data Collection from Ads and Other Hidden Trackers

San Francisco - The Electronic Frontier Foundation (EFF) today released Privacy Badger 1.0, a browser extension that blocks some of the sneakiest trackers that try to spy on your Web browsing habits.

More than a quarter of a million users have already installed the alpha and beta releases of Privacy Badger. The new Privacy Badger 1.0 includes blocking of certain kinds of super-cookies and browser fingerprinting—the latest ways that some parts of the online tracking industry try to follow Internet users from site to site.

“It’s likely you are being tracked by advertisers and other third parties online. You can see some of it when it’s happening, such as ads that follow you around the Web that seem to reflect your past browsing history,” said EFF Staff Technologist Cooper Quintin, lead developer of Privacy Badger. “Those echoes from your past mean you are being tracked, and the records of your online activity are distributed to other third parties—all without your knowledge, control, or consent. But Privacy Badger 1.0 will spot many of the trackers following you without your permission, and will block them or screen out the cookies that do their dirty work.”

Privacy Badger 1.0 works in tandem with the new Do Not Track (DNT) policy, announced earlier this week by EFF and a coalition of Internet companies. Users can set the DNT flag—in their browser settings or by installing Privacy Badger—to signal that they want to opt-out of online tracking. Privacy Badger won’t block third-party services that promise to honor all DNT requests.
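The DNT signal itself is just an HTTP request header. A minimal illustration (the target URL is an arbitrary example):

```python
# Sketch of the Do Not Track signal: browsers (or Privacy Badger) add a
# "DNT: 1" request header; sites honouring EFF's DNT policy agree not to
# track requests that carry it. The URL below is just an example target.
import requests

resp = requests.get("https://example.com/", headers={"DNT": "1"}, timeout=10)
print(resp.status_code, resp.request.headers.get("DNT"))
```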

“With DNT and Privacy Badger 1.0, Internet users have important new tools to make their desires about online tracking known to the websites they visit and to enforce those desires by blocking stealthy online tracking and the exploitation of their reading history,” said EFF Chief Computer Scientist Peter Eckersley, leader of the DNT project. “It’s time to put users back in control and stop surreptitious, intrusive Internet data collection. Installing Privacy Badger 1.0 helps build a leaner, cleaner, privacy-friendly Web.”

To download Privacy Badger 1.0:
https://www.eff.org/privacybadger

For more on the new Do Not Track policy:
https://www.eff.org/dnt-policy
https://www.eff.org/press/releases/p...nline-tracking





What The Ad Blocker Debate Reveals
Jean-Louis Gassée

iOS 9 and OS X 10.11 (“El Capitan”) carry ad-blocking technology that delivers an experience that stands in stark contrast to current advertising and tracking practices. Users are beginning to notice…and advertisers aren’t happy about it.

On the last day of the June 2015 Developers’ Conference, Apple held a session (video here) to announce its “Content Blocking” feature:

Observers weren’t fooled by the last-day session placement and careful euphemism (“Content” means “Ads”). True to the If It Bleeds It Leads dictum, we were treated to the usual clamor, from accusations of short-sighted tactics — Apple is at war with Google and wants to monopolize mobile advertising with iAds; publishers will blacklist Safari on iOS 9 — to predictions of calamity: content blocking will upend the Web, your favorite website is about to die, content creators are under attack:

“You realize that ‘bloat’ pays the salaries of editorial, product, design, video, etc etc etc, right?”

There were more moderate viewpoints, of course, but the congregants who assured us that Ad blocking in iOS 9 won’t kill the Web were fewer and quieter. Pageviews, you know.

Initially, a few things jumped out at me.

First, although Content Blocking is available for iOS and OS X 10.11 (a.k.a. El Capitan), the furor was concentrated almost entirely on mobile, a reflection of the dominant role of iDevices in Apple’s ecosystem. (A glance at the technical documentation shows us that Content Blocking Extensions are actually easier to develop for the Macintosh than they are for the iPhone and iPad).

Second, the conjectured Content Blockers won’t be Apple products; they’ll be created and offered (whether free or for a price) by independent developers. You may quibble with the use of “independent” — the App Store judges will intervene, as usual — but we should expect a flurry of creative ad-blocking code followed by a round of noisy arguments accusing developers of attempting to destroy barely solvent Web publishers.

Third, in their entre nous concentration on advertisers, developers, publishers, Apple vs Google, the kommentariat disregarded the benefits of Content Blocking for mere users, the unwashed masses who supply the industry with their life-giving fluids of money and personal data.

The absence didn’t last long. In two previous Monday Notes (News Sites Are Fatter and Slower Than Ever and 20 Home Pages, 500 Trackers Loaded: Media Succumbs to Monitoring Frenzy), my compadre Frédéric Filloux cast a harsh light on bloated, prying pages. Web publishers insert gratuitous chunks of code that let advertisers vend their wares and track our every move, code that causes pages to stutter, juggle, and reload for no discernible reason. Even after the page has settled into seeming quiescence, it may keep loading unseen content in the background for minutes on end.

In a blog post titled An hour with Safari Content Blocker in iOS 9, Mobile Software Developer Dean Murphy showed how a simple iOS 9 ad-blocker that he wrote made a dramatic Before and After difference:

With Content Blocking turned on, the page loaded in two seconds instead of eleven. Once loaded, network activity ceased, which means less strain on the battery.

Another developer, Paul Hudson, provides a calm explanation of what Apple actually announced, and proceeds to an example that blocks a daily newspaper he doesn’t seem to like (“How to write a content blocker extension in 10 minutes (and never see the Daily Mail again)”). No need to dive in if geeky JSON talk doesn’t float your boat, but Hudson’s conclusions are worth contemplating (emphasis mine):

“Safari content blocking is a huge innovation: the fact that the system can optimize the rules ahead of time rather than trying to interact with an extension is a huge win for performance. Then of course there’s privacy: no one needs to know what web pages you visit, which is just how it should be.”
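For the curious, Safari content blockers are driven by a JSON rule list in which each rule pairs a trigger with an action; the sketch below emits one generic blocking rule, with a placeholder ad-host pattern rather than any real filter list.

```python
# Minimal example of the rule list a Safari content-blocker extension ships
# (a blockerList.json-style file): each rule pairs a "trigger" with an
# "action". The ad-host pattern below is a placeholder, not a recommendation.
import json

rules = [
    {
        "trigger": {
            "url-filter": "ads\\.example\\.com",   # placeholder ad host (regex)
            "load-type": ["third-party"],
        },
        "action": {"type": "block"},
    }
]

print(json.dumps(rules, indent=2))
```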

Publishers, of course, blame the performance problems on mobile browsers (from TheVerge.com):

“… web browsers on phones are terrible. They are an abomination of bad user experience, poor performance, and overall disdain for the open web that kicked off the modern tech revolution. Mobile Safari on my iPhone 6 Plus is a slow, buggy, crashy affair, starved for the phone’s paltry 1GB of memory and unable to rotate from portrait to landscape without suffering an emotional crisis. Chrome on my various Android devices feels entirely outclassed at times, a country mouse lost in the big city, waiting to be mugged by the first remnant ad with a redirect loop and something to prove.”

Such blame-shifting didn’t sit well with informed users. One drew a diagram contrasting the almost 1000:1 ratio between the 8 KB of actual content in the article quoted above and the 6 MB of bloat that’s actually loaded.

Another blogger went into even greater detail by documenting the 263 HTTP requests and 9.5 MB needed to load the page.

As the author calculates, if you have a 1GB/month mobile data plan, going to the site three times a day will exhaust your data budget — and place you in the caring hands of 22 flavors of spyware.
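Taking the 9.5 MB figure at face value, the arithmetic behind that claim is easy to check:

```python
# Back-of-the-envelope check of the data-plan claim above.
page_mb = 9.5            # reported weight of one page load
visits_per_day = 3
days = 30

monthly_mb = page_mb * visits_per_day * days
print(monthly_mb, "MB per month")   # 855 MB, most of a 1 GB allowance
                                    # before any other browsing at all
```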

I empathize with smaller publishers who feel they can’t survive without using modern, sophisticated, and intrusive advertising and tracking tools. In a piece titled Content blockers, bad ads, and what we’re doing about it, Rene Ritchie, who heads the iMore site, explains how this technology has led publishers like him to lose control (emphasis mine):

“When we do get good ads, as soon as they finish their allotted impressions, they go away, and the ad spot gets back-filled with “remnants” which get progressively worse and worse the more we refresh the site.

Yes, we’re well aware of how insane that sounds.

We also have no ability to screen ad exchange ads ahead of time; we get what they give us. We can and have set policies, for example, to disallow autoplay video or audio ads. But we get them anyway, even from Google. Whether advertisers make mistakes or try to sneak around the restrictions and don’t get caught, we can’t tell. It happens, though, all the time.”

You can’t blame the browser, it’s the way the system has evolved in the Web advertising race to the bottom. Back when physical newspapers were still vital, advertising space was limited and thus prices were well-behaved and constant. No such thing on the Web, where the “ad inventory” tends to infinity. As a result, prices fall, sites need more ads to stay afloat, and they must consent to exploitative practices.

A few days ago, Charles Arthur addressed the subject on his site The Overspill: The adblocking revolution is months away (with iOS 9) – with trouble for advertisers, publishers and Google. The post makes a tart statement regarding the entitlement of today’s Web publishers, some of whom cast themselves as part of the august but beleaguered Fourth Estate:

“Print-based organisations were told they needed to evolve, and stop being such dinosaurs, because the web was where it was at…Why should web advertisers be immune from evolutionary or revolutionary change in user habits? …[A]ny argument that tries to put a moral dam in front of a technological river is doomed. Napster; Bittorrent; now adblocking.”

When Arthur was questioned about his responsibility, as a journalist himself, to accede to the trackers in order to ensure the future of “quality media and journalism”, his response was unequivocal (emphasis mine):

“Have I any responsibility to them? Well, not really. Certainly as a standard reader, here’s what happened: I accepted an invitation to read an article, but I don’t think that we quite got things straight at the top of the page over the extent to which I’d be tracked, and how multiple ad networks would profile me, and suck up my data allowance, and interfere with the reading experience. Don’t I get any say in the last two, at least?”

We come here to the crux of the matter: Trust.

We feel cheated and rightly so. As users, we understand that we’re not really entitled to free browsing; we pay our bills with our selves: When The Product Is Free, We Are the Product. The problem is that we feel betrayed when we find out we’ve been overpaying. We’re being exploited — and it’s not even done nicely. (Apply your favorite metaphor, here.)

Losing trust is bad for the bottom line – no economy can function well without it. When you lose the consumer’s trust, you’re condemned to a chase for the next wave of suckers. Even sites that get us to pay for access to their content play questionable advertising and tracking games.

Publishers who rise to condemn new (and still unproven) ad-blocking features on iOS and OS X ought to ask themselves one question: Who needs whom the most?

Apple’s move answers the question. No need to think it’s building ad-blocking technology to monopolize the field to the benefit of its iAd platform whose revenue can’t “move the needle” for a company where revenue and profits mostly come from hardware (see the last 10-Q report page 25). Apple’s “ulterior” motive is making everyday use of its products more pleasant, resulting in more sales: the usual ecosystem play.

It’ll be interesting to watch what happens on Android. Will Google help developers with ad-blocking tools to improve the mobile experience and protect privacy?
http://www.mondaynote.com/2015/08/03...ebate-reveals/





0-Day Bug in Fully Patched OS X Comes Under Active Exploit to Bypass Password Protection

Privilege-escalation bug lets attackers infect Macs sans password.
Dan Goodin

Hackers are exploiting a serious zero-day vulnerability in the latest version of Apple's OS X so they can install adware applications without requiring victims to enter system passwords, researchers said.

As Ars reported last week, the privilege-escalation bug stems from new error-logging features that Apple added to OS X 10.10. Developers didn't use standard safeguards involving additions to the OS X dynamic linker dyld, a failure that lets attackers open or create files with root privileges that can reside anywhere in the OS X file system. It was disclosed last week by security researcher Stefan Esser.

On Monday, researchers from anti-malware firm Malwarebytes said a new malicious installer is exploiting the vulnerability to surreptitiously infect Macs with several types of adware including VSearch, a variant of the Genieo package, and the MacKeeper junkware. Malwarebytes researcher Adam Thomas stumbled on the exploit after finding the installer modified the sudoers configuration file. In a blog post, Malwarebytes researchers wrote:

For those who don’t know, the sudoers file is a hidden Unix file that determines, among other things, who is allowed to get root permissions in a Unix shell, and how. The modification made to the sudoers file, in this case, allowed the app to gain root permissions via a Unix shell without needing a password.

As can be seen from the code snippet shown here, the script that exploits the DYLD_PRINT_TO_FILE vulnerability is written to a file and then executed. Part of the script involves deleting itself when it’s finished.

The real meat of the script, though, involves modifying the sudoers file. The change made by the script allows shell commands to be executed as root using sudo, without the usual requirement for entering a password.

Then the script uses sudo's new password-free behavior to launch the VSInstaller app, which is found in a hidden directory on the installer’s disk image, giving it full root permissions, and thus the ability to install anything anywhere. (This app is responsible for installing the VSearch adware.)
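
For Mac users who suspect they may already have been hit, one illustrative check is to look for the kind of passwordless-sudo entry the installer reportedly adds. The short Python sketch below is a rough example, not a tool from Malwarebytes or Apple; the "NOPASSWD" pattern is an assumption about what such a modification would look like, and reading the sudoers file requires an administrator password:

#!/usr/bin/env python3
# Minimal, hypothetical check for passwordless-sudo entries of the kind the
# adware installer reportedly adds. /etc/sudoers is readable only by root,
# so this shells out to sudo and will prompt for an administrator password.
import subprocess

result = subprocess.run(
    ["sudo", "grep", "-n", "NOPASSWD", "/etc/sudoers"],
    capture_output=True, text=True,
)

if result.stdout.strip():
    print("Possible passwordless-sudo entries found:")
    print(result.stdout, end="")
else:
    print("No NOPASSWD entries found in /etc/sudoers.")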


No good options

Privilege escalation vulnerabilities have become increasingly important to hackers in an age of security sandboxes and other exploit mitigations. Often attackers will combine an attack that exploits a vulnerability in the operating system kernel with a separate information disclosure or privilege-elevation bug that allows the first exploit to bypass the security measures.

Esser said the dyld flaw is present in the current 10.10.4 version of OS X, as well as a beta version of 10.10.5 he recently tested. He said his exploits didn't work against a beta version of 10.11, an indication Apple developers already knew of the vulnerability and have been testing a fix. As Ars said last week, it wouldn't be surprising if that fix found its way into the general release of 10.10.5. Given Monday's discovery that attackers are actively exploiting the weakness to hijack Macs, a more expedited patch seems even more likely now. Update: Esser has since said the vulnerability has been fixed in a later beta version of 10.10.5.

Until Apple fixes the bug, Mac users don't have any good options. One is to install a mitigation Esser created. While Esser is a respected security researcher and software developer, many people disapprove of updates that aren't explicitly sanctioned by the official developer. Ars advises readers to investigate Esser's patch thoroughly before installing it. Then again, navigating the Internet with a system known to be vulnerable to in-the-wild exploits is also risky. This post will be updated if researchers from Apple or elsewhere provide guidance or meaningful mitigation advice.
http://arstechnica.com/security/2015...o-hijack-macs/





Design Flaw in Intel Processors Opens Door to Rootkits, Researcher Says
Lucian Constantin

A design flaw in the x86 processor architecture dating back almost two decades could allow attackers to install a rootkit in the low-level firmware of computers, a security researcher said Thursday. Such malware could be undetectable by security products.

The vulnerability stems from a feature first added to the x86 architecture in 1997. It was disclosed Thursday at the Black Hat security conference by Christopher Domas, a security researcher with the Battelle Memorial Institute.

By leveraging the flaw, attackers could install a rootkit in the processor’s System Management Mode (SMM), a protected region of code that underpins all the firmware security features in modern computers.

Once installed, the rootkit could be used for destructive attacks like wiping the UEFI (Unified Extensible Firmware Interface), the modern replacement for the BIOS, or re-infecting the OS after a clean install. Protection features like Secure Boot wouldn’t help, because they too rely on the SMM to be secure.

The attack essentially breaks the hardware roots of trust, Domas said.

Intel did not immediately respond to a request for comment. According to Domas, the chip maker is aware of the issue and has mitigated it in its latest CPUs. The company is also rolling out firmware updates for older processors, but not all of them can be patched, he said.

To exploit the vulnerability and install the rootkit, attackers would need to already have kernel or system privileges on a computer. That means the flaw can’t be used by itself to compromise a system, but it could make an existing malware infection highly persistent and completely invisible.

Domas only tested the exploit successfully on Intel processors, but noted that x86 processors made by AMD should in theory be vulnerable as well.

Even if BIOS/UEFI updates are made available by computer manufacturers, their rate of adoption is likely to be very low, especially among consumers.

Unfortunately, there’s not much users can do except try to avoid infection in the first place by malware that could gain the kernel privileges needed to deploy such a rootkit.
http://www.itworld.com/article/29658...cher-says.html





Regulators Investigating Harman Kardon After Remote Hack Of Jeep
Ashlee Kieler

Following a report last month that suggested certain Fiat Chrysler vehicles were susceptible to remote hacks, the auto maker issued a software patch and a subsequent recall. Now, federal regulators are taking over, opening an investigation not into the car manufacturer, but the company behind the radios that provide an entryway for would-be hackers.

The National Highway Traffic Safety Administration announced that it will probe Harman Kardon, the maker of the infotainment system used by two researchers to take control of a 2014 Jeep Cherokee from miles away, to determine if vehicles by other manufacturers could be at risk for remote hacks.

According to a notice from NHTSA, the investigation was opened to obtain information about the Harman-supplied Chrysler Uconnect units to determine the nature and extent of similarities in other infotainment products provided to other vehicle manufacturers.

“If sufficient similarities exist, the investigation will examine if there is cause for concern that security issues exist in other Harman Kardon products,” NHTSA states in the notice.

Regulators estimate that Harman has supplied infotainment systems of some kind for about 2.8 million vehicles.

Fiat Chrysler (FCA) issued a software patch for its Uconnect onboard system in late July, though at that time it didn’t directly acknowledge the Wired.com report of what it was like to be inside a hijacked Jeep.

Just days later, the company announced it would recall 1.4 million vehicles that include the Uconnect units.

In a notice to NHTSA regarding that recall, FCA detailed how software security vulnerabilities in the recalled vehicles could allow unauthorized third-party access to, and manipulation of, networked vehicle control systems.

“Unauthorized access or manipulation of the vehicle control systems could reduce the driver’s control of the vehicle increasing the risk of a crash with an attendant increased risk of injury to the driver, other vehicle occupants, and other vehicles and their occupants within proximity to the affected vehicle,” the notice states.

Customers affected by the recall will receive a USB device that they may use to upgrade vehicle software, which provides additional security features independent of the network-level measures.
http://consumerist.com/2015/08/03/re...-hack-of-jeep/





Hackers Turn Off Tesla Model S at Low Speed: FT

Cybersecurity researchers said they took control of a Tesla Motors Inc (TSLA.O) Model S car and turned it off at low speed, one of six significant flaws they found that could allow hackers to take control of the vehicles, the Financial Times reported.

Kevin Mahaffey, chief technology officer of cybersecurity firm Lookout, and Marc Rogers, principal security researcher at Cloudflare, said they decided to hack a Tesla car because the company has a reputation for understanding software better than most automakers, the FT said. (on.ft.com/1DsTIQJ)

"We shut the car down when it was driving initially at a low speed of five miles per hour," the newspaper quoted Rogers as saying. "All the screens go black, the music turns off and the handbrake comes on, lurching it to a stop."

The hack will be detailed at cybersecurity conference Def Con in Las Vegas on Friday, the FT said.

Tesla is issuing a patch, which all drivers will have by Thursday, to fix the flaws, the FT said.

Tesla could not be immediately reached for comment outside regular U.S. business hours.

The hack on Tesla follows a similar attack on Fiat Chrysler's (FCAU.N) Jeep Cherokee last month that prompted the company to recall 1.4 million vehicles in the United States.

(Reporting by Sagarika Jaisinghani in Bengaluru; Editing by Anil D'Silva)
http://uk.reuters.com/article/2015/0...0QB1AN20150806





How the Stagefright Bug Changed Android Security

Google's latest security problem might actually make Android safer
Russell Brandom

It's been 10 days since Zimperium's Joshua Drake revealed a new Android vulnerability called Stagefright — and Android is just starting to recover. The bug allows an attacker to remotely execute code through a phony multimedia text message, in many cases without the user even seeing the message itself. Google has had months to write a patch and already had one ready when the bug was announced, but as expected, getting the patch through manufacturers and carriers was complicated and difficult.

But then, something unexpected happened: the much-maligned Android update system started to work. Samsung, HTC, LG, Sony and Android One have already announced pending patches for the bug, along with a device-specific patch for the Alcatel Idol 3. In Samsung's case, the shift has kicked off an aggressive new security policy that will deploy patches month by month, an example that's expected to inspire other manufacturers to follow suit. Google has announced a similar program for its own Nexus phones. Stagefright seems to have scared manufacturers and carriers into action, and as it turns out, this fragmented ecosystem still has lots of ways to protect itself.

""The early reports triggered a very, very strong response.""

It's still early, and most devices won't receive the patch until later this month, but Android security head Adrian Ludwig is optimistic that most Android users will be protected by existing mitigation systems, and expects patches to be deployed before attackers can break through. "The early reports triggered a very, very strong response," Ludwig told The Verge. "The OEMs are now really understanding and the ecosystem is really understanding how to react more quickly, because we all see that it's necessary."

At the same time, the wave of negative publicity around Stagefright seems to have spurred manufacturers into action. Samsung's VP of partner solutions Rick Segal says the move to rolling updates has been in the works at Samsung for six months. Enterprise customers have long lobbied for better security on the devices, and when a vulnerability in Samsung's Swiftkey keyboard was discovered earlier this summer, the company was impressed by the positive customer response to the quick patch. The widespread public alarm over Stagefright was enough to tip the scales on the new feature. "Really, it's the right thing to do," Segal told The Verge, "and you're not going to see any pushback from carriers or partners or anything because everybody knows it's the right thing to do."

That still doesn't mean patches will be immediate, but it means they'll arrive in weeks instead of months, giving attackers less and less time to exploit newly discovered bugs. At the same time, Android mitigation efforts are making vulnerabilities harder and harder to exploit. Even in its current form, Stagefright has had trouble getting around Android's Address Space Layout Randomization protections (commonly known as ASLR). The bug can still be used to trigger unauthorized code — a troubling result under any circumstances — but the ASLR system has made it difficult to reliably run any specific piece of code across a range of devices, a difficulty acknowledged by Drake himself.

At its core, Stagefright works by corrupting a system's memory to change the program counter, which determines the next instruction to be executed. In broad strokes, corrupting that counter allows an attacker to smuggle code into the queue to be executed, but exploiting that power consistently requires a good map of where the different system operations live in memory. ASLR scrambles that map, leaving attackers with no reliable sense of where to find the code they want to smuggle into the queue.
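
The effect is easy to see outside Android as well. The small sketch below (a generic illustration of ASLR, not anything from the Stagefright code) prints where the C library's printf function has been loaded; on a system with ASLR enabled, that address changes from one run or reboot to the next, which is exactly the uncertainty an exploit writer has to overcome:

#!/usr/bin/env python3
# Minimal ASLR illustration: print the load address of libc's printf.
# With ASLR on, the address typically differs between runs (per process on
# Linux, per boot for the shared cache on OS X), so an attacker can't rely
# on a fixed memory map.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
printf_addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
print(f"printf is loaded at {hex(printf_addr)}")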

That's particularly important since the initial demonstrations of Stagefright exploits stopped at the point of code execution. Once Drake was able to corrupt the control counter, the seriousness of the bug was established. There may still be a reliable way around the system — we'll have to wait for Drake's presentation later today to find out if he has one — but it's a serious problem for attackers coming at the vulnerability cold. "Nobody thinks these measures are perfect," Ludwig said, "but they definitely buy time while manufacturers get patches out."

"Any program that preloads video is potentially at risk"

In the meantime, the best mitigation for users is still to turn off the "automatically retrieve MMS" setting in Settings, but the problem is also being tackled from a number of different angles. Google has promised a fix in an update to the Hangouts app next week, and some carriers have already taken matters into their own hands. The German carrier Telekom responded earlier today by shutting down all automatic delivery of Android MMS messages, requiring a manual download triggered by the user.

Unfortunately, MMS isn't the only way to exploit Stagefright, so users won't be entirely protected until the problem is fixed at the OS level. Researchers have already shown ways to exploit the vulnerability from a URL or even within an application, although in each case the user has to manually retrieve the media for the attack to work, so those vectors aren't considered as dangerous as the texting vulnerability. Still, it underscores the importance of the patch itself, even as mitigation efforts buy time against attackers.

The biggest question now is how many manufacturers will come along for the ride, and if any devices will be left behind by the new patching system. The first wave of Samsung patches didn't include the Galaxy S3 or S4, although they're among the most popular Android devices currently in use, and Samsung has announced its intention to patch the devices later in the month. At the same time, manufacturers like Huawei and Asus have yet to make a public statement on when a patch will be available.
https://www.theverge.com/2015/8/5/90...-protect-patch





U.K. Web Users Now Prefer To Do It With Smartphones
Natasha Lomas

U.K. web users now see their smartphone as the most important device for getting online, overtaking the previous most popular device, the laptop, according to a study of Brits’ digital habits.

The study found that a third (33 per cent) of U.K. Internet users now view their smartphone as the most important device for going online, compared to 30 per cent who are still sticking with their laptop.

The finding comes from U.K. telco regulator Ofcom’s 2015 Communications Market Report — a serious stat-fest for those wanting to understand Brits’ digital habits.

Ofcom says the smartphone preference represents a “clear shift” on last year’s report when the proportion turning to their phones first was closer to a fifth (22 per cent), and a full 40 per cent still preferred their laptop.

Two-thirds (66 per cent) of people in the U.K. now own a smartphone, with Brits using their mobiles for nearly two hours per day.

Ofcom attributes the rise of ‘smartphone-first’ web use to increasing uptake of faster (4G/LTE) mobile networks — noting that 4G subscriptions leapt up last year from a base of 2.7 million to 23.6 million by the end of the year.

EE was the first U.K. carrier to launch LTE, back in October 2012, with the other three main carriers, O2, Vodafone and Three, getting into the game over the course of 2013. The U.K.’s carrier market has since seen some changes, with EE being acquired by former mobile network operator BT, and a spot of carrier consolidation as Three moved to buy O2.
http://techcrunch.com/2015/08/06/u-k...h-smartphones/





Why the Fear Over Ubiquitous Data Encryption is Overblown
Mike McConnell, Michael Chertoff and William Lynn

Mike McConnell is a former director of the National Security Agency and director of national intelligence. Michael Chertoff is a former homeland security secretary and is executive chairman of the Chertoff Group, a security and risk management advisory firm with clients in the technology sector. William Lynn is a former deputy defense secretary and is chief executive of Finmeccanica North America and DRS Technologies.

More than three years ago, as former national security officials, we penned an op-ed to raise awareness among the public, the business community and Congress of the serious threat to the nation’s well-being posed by the massive theft of intellectual property, technology and business information by the Chinese government through cyberexploitation. Today, we write again to raise the level of thinking and debate about ubiquitous encryption to protect information from exploitation.

In the wake of global controversy over government surveillance, a number of U.S. technology companies have developed and are offering their users what we call ubiquitous encryption — that is, end-to-end encryption of data with only the sender and intended recipient possessing decryption keys. With this technology, the plain text of messages is inaccessible to the companies offering the products or services as well as to the government, even with lawfully authorized access for public safety or law enforcement purposes.
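
As a rough illustration of the property being described — only the endpoints hold usable keys — here is a minimal sketch using the open-source PyNaCl library (chosen purely as an example; it is not what any particular vendor ships). Each party generates a key pair, the sender encrypts to the recipient's public key, and only the recipient's private key can decrypt, so a provider relaying the ciphertext has nothing meaningful to hand over:

#!/usr/bin/env python3
# Illustrative end-to-end encryption with PyNaCl (pip install pynacl).
# Only the recipient's private key can open the message; anyone relaying
# the ciphertext, including the service provider, sees only opaque bytes.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()       # kept secret by the sender
recipient_key = PrivateKey.generate()    # kept secret by the recipient

# Sender encrypts using their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Recipient decrypts with their private key and the sender's public key.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'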

The FBI director and the Justice Department have raised serious and legitimate concerns that ubiquitous encryption without a second decryption key in the hands of a third party would allow criminals to keep their communications secret, even when law enforcement officials have court-approved authorization to access those communications. There also are concerns about such encryption providing secure communications to national security intelligence targets such as terrorist organizations and nations operating counter to U.S. national security interests.

Several other nations are pursuing access to encrypted communications. In Britain, Parliament is considering requiring technology companies to build decryption capabilities for authorized government access into products and services offered in that country. The Chinese have proposed similar approaches to ensure that the government can monitor the content and activities of their citizens. Pakistan has recently blocked BlackBerry services, which provide ubiquitous encryption by default.

We recognize the importance our officials attach to being able to decrypt a coded communication under a warrant or similar legal authority. But the issue that has not been addressed is the competing priorities that support the companies’ resistance to building in a back door or duplicated key for decryption. We believe that the greater public good is a secure communications infrastructure protected by ubiquitous encryption at the device, server and enterprise level without building in means for government monitoring.

First, such an encryption system would protect individual privacy and business information from exploitation at a much higher level than exists today. As a recent MIT paper explains, requiring duplicate keys introduces vulnerabilities in encryption that raise the risk of compromise and theft by bad actors. If third-party key holders have less than perfect security, they may be hacked and the duplicate key exposed. This is no theoretical possibility, as evidenced by major cyberintrusions into supposedly secure government databases and the successful compromise of security tokens held by a major information security firm. Furthermore, requiring a duplicate key rules out security techniques, such as one-time-only private keys.

Second, a requirement that U.S. technology providers create a duplicate key will not prevent malicious actors from finding other technology providers who will furnish ubiquitous encryption. The smart bad guys will find ways and technologies to avoid access, and we can be sure that the “dark Web” marketplace will offer myriad such capabilities. This could lead to a perverse outcome in which law-abiding organizations and individuals lack protected communications but malicious actors have them.

Finally, and most significantly, if the United States can demand that companies make available a duplicate key, other nations such as China will insist on the same. There will be no principled basis to resist that legal demand. The result will be to expose business, political and personal communications to a wide spectrum of governmental access regimes with varying degrees of due process.

Strategically, the interests of U.S. businesses are essential to protecting U.S. national security interests. After all, political power and military power are derived from economic strength. If the United States is to maintain its global role and influence, protecting business interests from massive economic espionage is essential. And that imperative may outweigh the tactical benefit of making encrypted communications more easily accessible to Western authorities.

History teaches that the fear that ubiquitous encryption will cause our security to go dark is overblown. There was a great debate about encryption in the early ’90s. When the mathematics of “public key” encryption were discovered as a way to provide encryption protection broadly and cheaply to all users, some national security officials were convinced that if the technology were not restricted, law enforcement and intelligence organizations would go dark or deaf.

As a result, the idea of “escrowed key,” known as Clipper Chip, was introduced. The concept was that unbreakable encryption would be provided to individuals and businesses, but the keys could be obtained from escrow by the government under court authorization for legitimate law enforcement or intelligence purposes.

The Clinton administration and Congress rejected the Clipper Chip based on the reaction from business and the public. In addition, restrictions were relaxed on the export of encryption technology. But the sky did not fall, and we did not go dark and deaf. Law enforcement and intelligence officials simply had to face a new future. As witnesses to that new future, we can attest that our security agencies were able to protect national security interests to an even greater extent in the ’90s and into the new century.

Today, with almost everyone carrying a networked device on his or her person, ubiquitous encryption provides essential security. If law enforcement and intelligence organizations face a future without assured access to encrypted communications, they will develop technologies and techniques to meet their legitimate mission goals.
https://www.washingtonpost.com/opini...9f4_story.html





Germany Halts Treason Inquiry Into Journalists After Protests

‘For the good of media freedom’, Germany’s prosecutor general suspends investigation into reporters who said state planned to boost surveillance
Kate Connolly in Berlin

A treason investigation into two journalists who reported that the German state planned to increase online surveillance has been suspended by the country’s prosecutor general following protests by leading voices across politics and media.

Harald Range, Germany’s prosecutor general, said on Friday he was halting the investigation “for the good of press and media freedom”. It was the first time in more than half a century that journalists in Germany had faced charges of treason.

Speaking to the Frankfurter Allgemeine Zeitung, Range said he would await the results of an internal investigation into whether the journalists from the news platform netzpolitik.org had quoted from a classified intelligence report before deciding how to proceed.

His announcement followed a deluge of criticism and accusations that Germany’s prosecutor had “misplaced priorities”, having failed to investigate with any conviction the NSA spying scandal revealed by whistleblower Edward Snowden, and targeting instead the two investigative journalists, Markus Beckedahl and Andre Meister.

In a scathing attack, the leading Green MP Renate Künast, who is also chair of the Bundestag’s legal affairs committee, called the investigation a “humiliation to the rule of law”. She accused Range of disproportionately targeting the two journalists, while ignoring the “massive spying and eavesdropping [conducted] by the NSA in Germany”.

Künast told the Kölner Stadt-Anzeiger: “Nothing happened with that. If it wasn’t for investigative journalism, we would know nothing.”

Wolfgang Kubicki, of the pro-business FDP party, also said he found it “disconcerting” that Range had ignored the NSA allegations while choosing to pursue the journalists. “Instead of intimidating journalists, the state prosecutor should resume the investigation proceedings into the NSA affair that he only recently abandoned,” he said.

But Jens Koeppen, head of the Bundestag’s Digital Agenda committee and a member of the leading CDU party, said it was right to condemn the journalists. “If something is classified as confidential, that also applies to journalists, as well as would-bes,” he wrote on Twitter, referring to the fact that netzpolitik.org is not a traditional newspaper.

In articles that appeared on netzpolitik.org in February and April, the two reporters made reference to what is believed to be a genuine intelligence report that had been classified as confidential, which proposed establishing a new intelligence department to monitor the internet, in particular social media networks.

The federal prosecutor’s investigation was triggered by a complaint made by Germany’s domestic intelligence agency, the Office for the Protection of the Constitution (BfV) over the articles, which it said had been based on leaked documents.

Beckedahl hit out at the prosecutor’s investigation against him on Friday on the state broadcaster Deutschlandfunk, calling it “absurd” and suggesting it was meant as a general warning to scare sources from speaking to journalists.

Much of the German media called the decision an attack on the freedom of the press. The news platform, which is financed by voluntary donations, was reported to have received thousands of euros as a sign of support.

In a statement, netzpolitik.org said the charges against its journalists and a source had been “politically motivated and targeted to crush the vital public debate about internet surveillance post-Snowden”.

It added: “Whistleblowers acting in the public interest need protection, not prosecution as traitors.” netzpolitik.org said that next week it was due to receive an award for its innovative and distinct journalism from the German government and industry, and said: “We will not allow ourselves to be intimidated by the investigations and we will continue our critical and independent journalism – including with original documents.”

In an act of solidarity, the research website Correctiv reported itself to the general prosecutor’s office on Friday, saying that it too was “guilty of treason”, at the same time as republishing the controversial documents originally published by netzpolitik.org.

“They should be investigating the whole lot of us!” said Correctiv’s editor-in-chief, Markus Grill. Meanwhile, German lawyers called for the abolition of the offence “journalistic treason”.

The president of the German Association of Lawyers, Ulrich Schellenberg, was reported by FAZ as saying there was a “fundamental public interest” in understanding the work of the secret services, and that it was therefore necessary to stop the state from proceeding against critical journalism.
http://www.theguardian.com/world/201...lance-protests





Facebook Solar-Powered Drone Is The Size Of A Boeing 737
Ben Sullivan

Aquila drone project will see Internet laser-beamed to unconnected parts of the world from 90,000ft in the sky

Facebook has unveiled a drone with the wingspan of a Boeing 737 that will provide internet access to parts of the world that are not yet connected to the web.

The drone, part of a project called Aquila, will begin flight trials this year, according to the social network.
60,000ft – 90,000ft

Weighing in at 400kg, Aquila will fly between 60,000ft and 90,000ft so as to avoid adverse weather conditions and commercial air routes.

“Aquila is a solar powered unmanned plane that beams down internet connectivity from the sky. It has the wingspan of a Boeing 737,” said Facebook CEO Mark Zuckerberg. “But weighs less than a car and can stay in the air for months at a time.”

Zuckerberg said that the laser mounted to the drone can transmit data at 10 gigabits per second, ten times faster than any previous system Facebook has tested. It can accurately connect with a point the size of a US 5 cents coin from more than 10 miles away.
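
For a sense of what that pointing precision implies, here is a rough back-of-the-envelope calculation; the coin diameter and the exact distance used below are approximations for illustration, not Facebook's figures:

#!/usr/bin/env python3
# Rough pointing-precision estimate for Aquila's laser link: hitting a
# roughly 2cm coin from 10 miles away. Figures are approximate.
import math

target_diameter_m = 0.02           # a small coin, about 2cm across
distance_m = 10 * 1609.34          # 10 miles in metres

angle_rad = target_diameter_m / distance_m
print(f"Required pointing accuracy: ~{angle_rad * 1e6:.1f} microradians")
print(f"                            ~{math.degrees(angle_rad) * 3600:.2f} arcseconds")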

“This effort is important because 10 percent of the world’s population lives in areas without existing internet infrastructure,” said Zuckerberg. “To affordably connect everyone, we need to build completely new technologies.”

Jay Parikh, vice-president of engineering at Facebook, said: “Our mission is to connect everybody in the world.

“This is going to be a great opportunity for us to motivate the industry to move faster on this technology.”

The drone took 14 months to build, and will be airborne for 90 days at a time, constantly circling in a two-mile radius. The drone is the next step in Facebook’s Internet.org initiative, a program with the goal of connecting developing and unconnected parts of the world up to the Internet.

“Since we launched Internet.org, it’s been our mission to find ways to provide internet connectivity to the more than 4 billion people who are not yet online,” said Parikh. “Many of these people live within range of at least a 3G wireless signal, and our work in the last year with mobile operators across 17 countries has provided more than a billion people with access to relevant basic internet services.”
The race for the skies

Facebook’s announcement that it will soon begin trials comes as Google ramps up efforts on its Project Loon program, an initiative that sees giant balloons beam down Internet to regions without access.

Google has been working on Project Loon since 2013. The project uses high-altitude balloons placed in the stratosphere, at an altitude of around 20km, to create an aerial wireless network with 3G-like speeds.

This week, Sri Lanka announced it would be the first country to use Google’s Project Loon to blanket itself with Internet access. Whilst the plan is only at a preliminary discussion stage, Google would work with Sri Lanka’s existing Internet providers to enhance their service.
http://www.techweekeurope.co.uk/e-in...a-plane-173832





Google Fiber Plans Service in San Antonio, its Biggest City Yet

Google will compete against AT&T, which plans gigabit service for San Antonio.
Jon Brodkin

Google today said it is beginning design work on a fiber network for San Antonio, Texas. With 1.4 million residents, it will be the biggest Google Fiber city so far.

San Antonio is the ninth metro area where Google has confirmed it will bring its $70-per-month gigabit fiber service. This includes three metro areas where Google Fiber is already available—Kansas City in Kansas and Missouri; Austin, Texas; and Provo, Utah—and six where it's planned.

(Clarification: While the Kansas City and Atlanta metropolitan areas each have more than 1.4 million residents, Google noted that San Antonio itself has more residents than the other Google Fiber cities.)

Google did not say when fiber service will be ready in San Antonio. "Soon, we’ll enter the design phase of building our fiber network in San Antonio," Mark Strama, the head of Google Fiber in Texas, wrote in the announcement. "We’ll work closely with city leaders over the next several months to plan the layout of over 4,000 miles of fiber-optic cables—enough to stretch to Canada and back—across the metro area. This is no small task, and it will take some time, but we can’t wait to get started."

Google has rolled out fiber in phases in other cities, starting with neighborhoods where there's the most demand, so it's not clear when or if all of San Antonio will be wired up. San Antonio's City Council approved a long-term contract with Google Fiber in March 2014, but Google still only listed San Antonio as a "potential fiber city" until upgrading it to an "upcoming fiber city" today. Strama reportedly said today that it will take a few years to complete construction in San Antonio, but presumably parts of the city would get service much earlier than that.

Besides San Antonio, Google says it plans to bring fiber to Raleigh-Durham and Charlotte, North Carolina; Atlanta, Georgia; Nashville, Tennessee; and Salt Lake City in Utah.

"Potential" Google Fiber cities include Phoenix, Arizona; San Jose, California; and Portland, Oregon.

In San Antonio, Google will go head to head against AT&T, which plans to bring its gigabit service to the city. Time Warner Cable also offers Internet service in San Antonio and recently boosted its top download speeds to 300Mbps.
http://arstechnica.com/business/2015...gest-city-yet/





Where Broadband is a Utility, 100Mbps Costs Just $40 a Month

Small Oregon city upgrades network to fiber, destroys competition.
Jon Brodkin

There’s been a lot of debate over whether the United States should treat Internet service as a utility. But there’s no question that Internet service is already a utility in Sandy, Oregon, a city of about 10,000 residents, where the government has been offering broadband for more than a decade.

“SandyNet” launched nearly 15 years ago with DSL and wireless service, and this summer it's putting the final touches on a citywide upgrade to fiber. The upgrade was paid for with a $7.5 million revenue bond, which will be repaid by system revenues. Despite not being subsidized by taxpayer dollars, prices are still low: $40 a month for symmetrical 100Mbps service or $60 a month for 1Gbps. There are no contracts or data caps.

“Part of the culture of SandyNet is we view our citizens as owners of the utility,” City IT Director and SandyNet GM Joe Knapp told Ars in a phone interview. “We've always run the utility on a break-even basis. Any profits we do have go back into capital improvements and equipment upgrades and things like that.”

In a video feature produced by the Institute for Local Self-Reliance, Sandy Mayor Bill King said the city didn’t pay for the fiber network with taxes because “we didn’t feel it was right for everyone to have to pay for something that maybe not everyone was going to participate in.”

SandyNet operates a lot differently from private Internet service providers, which generally sell Internet access in multiple cities or states.

“There's a lot more overhead there and they've also got investors that they're trying to keep happy by making sure their stocks are performing well and all that,” Knapp told Ars. “I get their stance and where they're coming from, but for us as a small municipal provider it’s a completely different mindset.”

Instead of giving dividends to stockholders, SandyNet focuses on keeping prices low for residents. “We're able to operate very lean because my service footprint is Sandy and my staff all live and work in Sandy, so we're able to operate in a different manner than a lot of those companies are,” Knapp said.

While many states have laws that restrict municipal broadband projects in order to protect private providers from competition, Sandy officials don’t have that problem.

“There were some efforts in Oregon, probably over a decade ago, to try to stop municipal Internet providers, but the Oregon legislature said no. They made us a safe state to have this kind of thing in,” Knapp said.

SandyNet competes against Wave, a cable company, and Frontier, a DSL provider. Before the fiber upgrade, SandyNet’s market share was about 30 percent of homes in the city, Knapp said. That number has already risen dramatically and is expected to hit more than half of the city’s 3,700 households once the project is able to hook up everyone on the waiting list, which should happen by October at the latest. SandyNet also sells Internet service to local businesses.

It all started because City Hall couldn’t get DSL

While SandyNet is blowing past the competition, it was started in 2001 because private companies weren’t serving the city, which is less than 30 miles from Portland.

“We couldn't get a DSL line at City Hall and this was back in 2001,” Knapp explained in the Institute for Local Self-Reliance video. “We literally called the phone company and said, ‘We want broadband,’ and they said, ‘Sorry, we don't have it.’”

The cable company at the time also wasn’t providing broadband, Knapp said.

“The mindset was, if that's what they're telling the city government, what are they telling our residents, and what are we going to do about this problem?” Knapp said.

SandyNet offered both a fixed wireless service and DSL, but it stopped providing DSL about five years ago.

“We could get better speeds on wireless, especially in far, outer reaching areas,” Knapp told Ars. “Also, it's an administrative burden to do line-share DSL; you're basically providing DSL over the phone company's wires through a wholesale agreement.”

Before the fiber project, SandyNet was offering a $25-per-month wireless service with download speeds of 5Mbps and uploads of 1Mbps. It was time for an upgrade.

“The fiber project for us was meeting the needs and desires of our SandyNet customers and the overall benefit of the community,” Knapp said. “It wasn't necessarily that they were unhappy with what the incumbents were doing at that point, it was just the evolution, these are the customers we're serving and this is what they want.”

The wireless service had to die in order to make way for the citywide fiber network.

“It’s a much more difficult system to operate than a fiber network,” Knapp said. “We had over 100 access points around the city for our customers to connect to, and it's a lot of maintenance to manage all those individual powered devices out in the field.”

It also didn’t make sense to have a $25-per-month service compete against the new fiber service that started at $40 a month, even if the slowest fiber service was 20 times faster downstream and 100 times faster upstream.

SandyNet is upgrading every wireless customer to fiber. “We said, ‘we'll give you a risk-free trial of the fiber network,’” said Knapp. “We did a free installation for them, we kept the rate at the same $24.95 for the first three months. They were able to try it out with no risk and we haven't had any of them cancel.”

Only about four percent of customers have opted for the $60 gigabit service. While the advent of 4K and 8K streaming video may change things, at the moment Knapp believes 100Mbps is enough for typical residents.

“We're pretty bad salesmen,” Knapp said. “We have some people who will ask for the gigabit service and we actually try to talk them down to 100Mbps… We tell them, ‘I would recommend trying the 100Mbps service and if it's not fast enough, it's a button click for us to turn it up to a gigabit.'... What we tell our citizens is, 'we want to keep our rates as low as possible for you and part of that is I don't want to sell somebody a product they don't need.'”

Getting enough bandwidth isn’t a problem. SandyNet buys 30Gbps of capacity, with physically redundant paths into Portland.

SandyNet also sells a $20-per-month phone service and is partnering with a company called yondoo to provide TV service over the fiber wires. Details on the TV packages are still being negotiated.

50 new homes being connected each week

Construction of SandyNet’s fiber network began in June 2014 after about three years of research and negotiation with construction companies. Sandy was able to get the revenue bond because “we had all that experience of 10-plus years running an ISP, and we were able to do some pretty accurate revenue projections,” Knapp said.

To break even, SandyNet calculated that it needed 35 percent of the community to subscribe. It blew past that and as a result had to borrow an additional $500,000 on top of the original $7.5 million to cover the extra construction costs. Debt service will be paid off over the next 20 years.
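
A back-of-the-envelope check shows why that 35 percent target is plausible. The figures below come from the article where available; the assumption that most subscribers take the $40 tier and the omission of interest and operating costs are simplifications for illustration only:

#!/usr/bin/env python3
# Back-of-the-envelope check of SandyNet's break-even target.
# From the article: ~3,700 households, a $40/month base tier, roughly
# $8 million borrowed, repaid over 20 years. Interest and operating costs
# are deliberately left out, so this is only a rough sanity check.
households = 3700
take_rate = 0.35
monthly_price = 40                       # assume most pick the 100Mbps tier

subscribers = households * take_rate
annual_revenue = subscribers * monthly_price * 12

borrowed = 8_000_000                     # $7.5M bond plus the extra $500K
years = 20
annual_principal = borrowed / years      # ignores interest for simplicity

print(f"Subscribers at the 35% target: {subscribers:.0f}")
print(f"Annual revenue at $40/month:   ${annual_revenue:,.0f}")
print(f"Annual principal repayment:    ${annual_principal:,.0f}")
# The margin between the two has to cover interest, staff and operations.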

SandyNet now has more than 50 miles of fiber, all underground, and it passes every residential property in the 3.14-square-mile city. SandyNet offered free installation to residents who signed up during construction, and about 2,000 took the city up on the offer.

The first customers were brought online in September 2014. When we spoke to Knapp on July 22, about 1,400 homes were hooked up, with new ones being added at the rate of about 50 per week. Residents that haven’t already signed up will have to pay a $350 one-time construction fee “to help offset the cost of getting fiber from the distribution network up to the side of the house,” Knapp said.

The last few feet of construction are the most difficult.

“What we found is… the last 50 feet to get to the house is the most difficult part. You've got to go around irrigation systems, you're tearing up people’s landscaping. It's not the most fun,” Knapp said.

The city is doing all this with a very small staff. While Sandy hired a construction company to build the network, Knapp and his staff of four other employees manage SandyNet and do IT support for the city government’s internal systems.

The fiber network has brought advantages beyond fast, cheap Internet service. For example, SandyNet wired up the traffic lights in town so they can be monitored and controlled remotely.

Nearby municipalities have asked SandyNet to hook them up.

“We’ve told them, ‘not right now,’” Knapp said.

But if revenue remains strong, SandyNet is expected to grow along with the city itself in the coming years. Sandy officials plan to expand the borders of the city into surrounding areas that are not currently part of any city or town, Knapp said.

“The city has a 40-year master plan; we know where the city will expand and what those areas will be zoned,” Knapp said. In areas “that are currently not inside the city limits but we anticipate them becoming residential centers in the city over the next 20 years, we'll start extending the fiber out in that direction.”
http://arstechnica.com/business/2015...yer-subsidies/

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

August 1st, July 25th, July 18th, July 11th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black