P2P-Zone  

13-08-14, 07:17 AM
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - August 16th, '14

Since 2002


"They’re honoring an FCC commissioner at the exact same time they’re trying to get approval for a merger. And that doesn’t look so good." – Carrie Levine


"I don't even know you are using it, and I don't even care." – Eijah


August 16th, 2014




Lionsgate Granted Restraining Order, Asset Seizure Against Torrent Sites
Kyle Johnson

Lionsgate was granted restraining orders Monday night against six torrent websites, which could also have their bank accounts seized, for making The Expendables 3 available for download.

U.S. District Judge Margaret Meadow ruled quickly against the sites because the studio "has established that it will suffer irreparable harm in the absence of immediate relief," Deadline reports.

By making the leaked movie available several weeks ahead of theatrical release, the websites have "stripped Lions Gate of the critical right of first publication" and could both damage the studio's "goodwill among consumers" and interfere with "contractual relationships with third parties."

Meadow also ruled that Lionsgate is able to request banks and other institutions seize assets, leaving the sites unable to draw or deposit money from any accounts.

The targeted websites, limetorrents.com, hulkfile.eu, swantshare.com, dotsemper.com, billionuploads.com and played.to, will have until Friday afternoon to provide evidence that the restraining order shouldn't become permanent.

Lionsgate filed the lawsuit late last week against the six sites and 10 John Does behind them, seeking to stop the hosting, reproduction and linking of copies of The Expendables 3. The suit also requested the sites "take all steps necessary to recall and recover all copies."

The film leaked in late July, three weeks ahead of the Aug. 15 release date, and saw more than 200,000 downloads within 48 hours.
http://thecelebritycafe.com/feature/...ndables-3-leak





T-Mobile to Throttle Customers Who Use Unlimited LTE Data for Torrents/p2p
Cam Bunton

In an internal memo to staff, it’s been revealed that T-Mobile is going to clamp down on users taking advantage of their unlimited 4G/LTE plans for peer-to-peer file sharing and other misuse of their data allowance.

It reads:

“T-mobile has identified customers who are heavy data users and are engaged in peer-to-peer file sharing, and tethering outside of T-Mobile’s Terms and Conditions (T&C). This results in a negative data network experience for T-Mobile customers. Beginning August 17, T-Mobile will begin to address customers who are conducting activities outside of T-Mobile’s T&Cs.”

If you’re on any plan other than the Unlimited High-Speed Data plans, you don’t need to worry, since your 4G LTE data is already capped at a set amount. This only applies to those on the old $70 unlimited or new $80 Simple Choice plan. If you are on one of those plans and need to know what is considered ‘misuse’, section 18 in T-Mobile’s terms and conditions makes it clear.

The following scenarios are considered “misuse” of data (among other, more serious offenses like hacking, spreading malware and committing fraud):

Using the Service in connection with server devices or host computer applications, including continuous Web camera posts or broadcasts, automatic data feeds, automated machine-to-machine connections or peer-to-peer (P2P) file-sharing applications that are broadcast to multiple servers or recipients, “bots” or similar routines that could disrupt net user groups or email use by others or other applications that denigrate network capacity or functionality.

T-Mobile’s steps for addressing misuse are also outlined in the document, making it clear that only unlimited customers are affected and that it’s not a case of being throttled without warning.

1. T-Mobile will contact customers to explain terms and conditions to them, and then advise them that data speed could be reduced until the next billing cycle IF they continue to misuse the data service.
2. When the customer is contacted, T-Mobile will apply a ‘Misuse Warning SOC’ to their account.
3. If behavior continues, the existing warning SOC is replaced by a ‘Misuse Throttle SOC’ and their data speeds get reduced.
4. These SOCs are visible to customer care and other staff who access the user’s account, to make it clear to them why they might be experiencing slower speeds.

As previously stated, these measures will be put in place from August 17th. The number of people engaged in this is likely relatively small, and yet it’s clearly a big enough problem to warrant a special “dedicated team” to address it.

In short, if you’re downloading torrents or constantly broadcasting online using your unlimited plan, now would be a good time to stop.
http://www.tmonews.com/2014/08/t-mob...r-torrentsp2p/





Why BitTorrent is Selling Itself Like Potato Chips
Nancy Scola

BitTorrent — perhaps best known in the tech world for providing the Internet plumbing for Pirate Bay, a notorious site frequently used to illegally share copyrighted material — is now making a play for the mainstream.

Travelers on both coasts are being greeted by BitTorrent Inc. ads that are in line with traditional Madison Avenue marketing: "Your Data Belongs to You," reads one such billboard in New York City's TriBeCa neighborhood. Reads another, in San Francisco's SoMa, "People > Servers," using the mathematical symbol for "greater than."

Last October, the company ran a mysterious, tongue-in-cheek set of billboard ads that proclaimed, "Your Data Should Belong to the NSA." It was a subversive riff on the then-emerging privacy debate sparked by Edward Snowden revelations on government surveillance programs.

While the original ads did not initially identify BitTorrent, which also specializes in online data sharing, the new ones do right off the bat. Company officials say they realized that the time for bluntness has arrived.

More than a dozen years after getting its start as a grad school project, BitTorrent is making a push to sell itself to a mainstream audience, in light of the growing interest in law enforcement cellphone tracking, the recent Supreme Court case over who owns user data, even Anonymous's hacking efforts. "The Internet was designed to be a decentralized network," says Eric Klinker, BitTorrent Inc.'s chief executive. The way so much user data is collected in so few places "has made it trivially easy for governments and others" to tap into it, he says. The company is turning to billboards as a way of getting the public to re-embrace the Internet's original design.

"It's where we've always been," says Klinker, "and the world is starting to move in our direction."

Part of the timing, says the company, has to do with their new products, aimed at reducing the ability for users' communications to be snooped upon or hacked. They're meant to be free, consumer-friendly replacements for everyday tools. Two weeks ago, BitTorrent rolled out Bleep, a server-less chat tool that allows users to communicate directly with one another, that is still in pre-alpha release. And they also offer Sync, a tool for backing up your files without relying upon a centralized server.

BitTorrent works off the idea of the "swarm," where people volunteer their computers to share content that has been chopped up into pieces directly with one another, rather than running through any one server. In recent months, the company has formed formal partnerships with established players in the entertainment industry. In February, for example, BitTorrent announced that it was working with Hollywood studios and music artists like Lady Gaga and Moby to distribute movies and music. Still, BitTorrent has functioned at the margins of the Internet, far less known than Google, Twitter or Facebook.

The company is aiming to change that. Their potential success marks a remarkable evolution. The country's at a fascinating moment: A growing public wariness about surveillance is meeting a sense that there are steps individuals can take to protect their data. And so we're seeing peer-to-peer file-sharing services being packaged and sold like potato chips, imported beer or a new BMW.

The ads are, at the moment, in San Francisco, at the corner of Harrison and Fourth streets and on Route 101 between the city and San Francisco International Airport. And in New York City, there are a pair of ads on Canal Street, near Sixth Avenue. The messaging may change, says the company, but the ads are scheduled to run through the end of the year.

"It's an extension of what we're calling our 'Distributed Manifesto,' " says Jascha Kaykas-Wolff, chief marketing officer of BitTorrent Inc. They're meant to highlight what the company does in ways that lead potential customers to their products, though only obliquely. The ads don't mention their specific offerings; the company, it says, still expects those made curious by the ads to go online and search for them.

"It's unfortunate," says Kayaks-Wolff, of world events that have made the public newly skittish, "but it provides an opportunity for us as a company."
http://www.washingtonpost.com/blogs/...-potato-chips/





Demonsaw Promises Free, Secure File Sharing
Jill Scharr

How do you securely share files on the Internet? One security expert says he has a new way to share files in a secure, private, anonymous and decentralized way. It's called Demonsaw and it's free.

Demonsaw's creator, a hacker going by the name Eijah, unveiled his service at the DEF CON hackers conference here yesterday. Eijah said he initially created Demonsaw for his own use, in order to share files with friends and family, then decided to make it available to the public.

Eijah said Demonsaw is almost entirely anonymous: Users don't need to log in or register, and there's no data retention.

"I don't even know you are using it, and I don't even care," he told the DEFCON audience.

Demonsaw isn't quite peer-to-peer file-sharing, but it isn't cloud storage either. Instead, users go to demonsaw.com to download router or Web server software in addition to the client software, which turns their devices into part of the Demonsaw network.

To set up a network, you'll need to know the Internet Protocol (IP) address of the machine you'll be using as a client. You then set up a user profile, a passphrase, and the address of the router that will host your network. You then designate the folder on your computer that you wish to share with the network.

Now, others on your network will be able to browse and download from that folder, and you'll be able to browse any other folders on that network.

Each user profile can also designate an icon. If you do associate an icon with that user profile, then only people with the same icon will be able to exchange files with you. That's because Demonsaw derives an encryption key from the unique image, and adds another layer of encryption to every file exchanged through the network.
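The article doesn't specify Demonsaw's actual cipher, but the icon-as-shared-secret idea can be sketched in a few lines of Python: both peers hash the identical image file to derive a key, so only holders of that image can decrypt. The HMAC-counter keystream below is a stand-in illustration, not Demonsaw's real algorithm:

```python
import hashlib
import hmac

def key_from_icon(icon_bytes: bytes) -> bytes:
    # Derive a 256-bit key from the raw bytes of the shared image.
    return hashlib.sha256(icon_bytes).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Stand-in stream cipher: HMAC-SHA256 in counter mode as a keystream.
    # XORing twice with the same key round-trips the data.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hmac.new(key, block.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

icon = b"...raw bytes of the shared icon image..."
key = key_from_icon(icon)
ciphertext = xor_stream(key, b"secret file contents")

# Only a peer holding the identical image derives the same key:
assert xor_stream(key_from_icon(icon), ciphertext) == b"secret file contents"
assert key_from_icon(b"some other image") != key
```

Any peer with a different image derives a different key and recovers only garbage, which matches the described behavior of icon-gated exchange.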

Even without an icon, Demonsaw still uses a modular security approach; the network is segmented and spread across many different servers, routers and clients. Demonsaw servers are essentially encrypted volumes that store all data securely. Eijah said that from an outside perspective, it looks as if the client computer is sending out only small HTTP requests.

Running Demonsaw software requires you to use the Microsoft .NET Framework (a piece of Windows software used to help developers create and run new Windows applications) version 4.5 or higher.

I tried out Demonsaw on my Windows 7 laptop at the conference, and as soon as I tried to run it, my Bitdefender Antivirus Plus 2013 program blocked it and flagged it as potentially malicious. I asked Eijah, and he said that probably occurred because he used a program called ConfuserEx to obfuscate Demonsaw's code. Bitdefender may have flagged this as an anti-disassembly feature, which malware often uses to hide from security experts.

Once it's up and running, Demonsaw's interface is very stark, though if you've used other file-sharing programs, you'll be able to find your way around it. Eijah told me he's still working on a FAQ and other materials to make it more user-friendly.

Windows versions of the Demonsaw software are currently available from demonsaw.com. Eijah says that versions for Mac, Linux, Android and, later, iOS, are on the way, as well as ports for Chromecast and possibly Plex, to make it easier for people to display content in their homes.

I got to spend only a little time with Demonsaw, as I had other DEF CON panels to attend, but what I saw looked good. It's nowhere near as user-friendly as Google or Dropbox, but Demonsaw is free, encrypted and self-hosted. People who are looking for a secure file-sharing system will want to check this out.
http://www.tomsguide.com/us/demonsaw...ews-19295.html





Snowden-Endorsed File-Sharing Service SpiderOak to Set Up ‘Warrant Canary’
Brenda Barron

No, we’re not talking about the coal mines here.

SpiderOak, the file sharing and cloud backup provider that NSA whistleblower Edward Snowden recently endorsed, has announced it will implement a “warrant canary,” falling in line with several other companies who’ve done the same.

So what is a warrant canary, exactly? If the government approaches a company with legal demands and a gag order, that company can let its customers know, in a roundabout way, that something is up. A gag order means it can’t come right out and say what’s going on, but the company can stop letting people know that everything is just dandy.

The process is simple. SpiderOak will republish a page every six months that says, “Everything’s going smoothly so far” with three PGP signatures on it to verify its authenticity. If that page stops being updated, something’s amiss. And all three remote signatures are required for the update to post, so it would be difficult for an update to be forced. The company chose the six-month timeframe because that’s how long it will take SpiderOak to investigate a claim and determine if it’s real and if it can be fought in court.
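The client-side check implied by that scheme can be sketched as follows. This is a hypothetical illustration: the six-month window and three-signature rule come from the description above, the function and field names are invented, and a real check would also verify each PGP signature cryptographically rather than just count them:

```python
from datetime import datetime, timedelta

REQUIRED_SIGNATURES = 3
MAX_AGE = timedelta(days=183)  # roughly six months

def canary_ok(last_published: datetime, pgp_signatures: list, now: datetime) -> bool:
    """The canary is alive only if the statement is fresh AND carries
    all three independent signatures; either failure is a warning sign."""
    fresh = now - last_published <= MAX_AGE
    fully_signed = len(pgp_signatures) >= REQUIRED_SIGNATURES
    return fresh and fully_signed

now = datetime(2014, 8, 16)
assert canary_ok(datetime(2014, 8, 1), ["sig_a", "sig_b", "sig_c"], now)
assert not canary_ok(datetime(2014, 1, 1), ["sig_a", "sig_b", "sig_c"], now)  # stale page
assert not canary_ok(datetime(2014, 8, 1), ["sig_a", "sig_b"], now)           # missing a signer
```

Requiring all three signatures is what makes a forced update hard: a coerced party would need to compromise every keyholder, not just the web server.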

The decision to put a warrant canary in place puts the company in line with several others like Apple, Pinterest, and Tumblr who’ve recently implemented their own protections.

SpiderOak was founded in 2007 by Ethan Oberman, who wrote a guest post here at VentureBeat back in April warning of the ineffectiveness of privacy policies and calling upon Congress to better protect data security.
http://venturebeat.com/2014/08/14/sn...arrant-canary/





A Portable Router that Conceals Your Internet Traffic

Def Con presentation unveils OPSEC tool for the rest of us—some assembly required.
Sean Gallagher

The news over the past few years has been spattered with cases of Internet anonymity being stripped away, despite (or because of) the use of privacy tools. Tor, the anonymizing “darknet” service, has especially been in the crosshairs—and even some of its most paranoid users have made a significant operational security (OPSEC) faux pas or two. Hector “Sabu” Monsegur, for example, forgot to turn Tor on just once before using IRC, and that was all it took to de-anonymize him. (It also didn’t help that he used a stolen credit card to buy car parts sent to his home address.)

If hard-core hacktivists trip up on OPSEC, how are the rest of us supposed to keep ourselves hidden from prying eyes? At Def Con, Ryan Lackey of CloudFlare and Marc Rogers of Lookout took to the stage (short their collaborator, the security researcher known as “the grugq,” who could not attend due to unspecified travel difficulties) to discuss common OPSEC fails and ways to avoid them. They also discussed their collaboration on a set of tools that promises to make OPSEC easy—or at least easier—for everyone.

Called Personal Onion Router To Assure Liberty (PORTAL), the project is a pre-built software image for an inexpensive pocket-sized “travel router” to automatically protect its owner’s Internet traffic. PORTAL provides always-on Tor routing, as well as “pluggable” transports for Tor that can hide the service’s traffic signature from some deep packet inspection systems.

Counter-surveillance for everyone

There are plenty of reasons why an average person should care about OPSEC today, Lackey explained in his introduction to the session. “We're not really talking about people hiding while doing lots of bad stuff,” he said. “There are a lot of reasons why you'd want to hide. Especially post-Snowden. Part of it is to avoid global dragnets—you want to make sure if someone is monitoring everything, you don't want to get caught up in that.”

Monitoring also could result in profiling based on “somebody living next door to you making a phone call," Lackey added, “which because of the way the software works could end up flagging or profiling you… but it’s also just an issue of ‘none of your damned business.’”

Even encrypted connections provide metadata about an individual’s activities, as do patterns in an individual’s Internet traffic—which Ars found when we monitored the Internet traffic of NPR’s Steve Henn. But there’s a great deal of traffic that remains unencrypted, as Rogers noted during the presentation.

“Before the Snowden leaks, about one percent of Internet traffic was SSL protected,” he said. “Now it’s about three percent.”

The tools in PORTAL aren’t rocket science, Rogers told the Def Con audience. “The difference is that we’re packaging [tools] together and showing you how you can use these tools so you don’t have to think about it, and you can avoid the problems caused by human error.”

Virtual private networks provide some privacy, Lackey and Rogers said, but they don’t provide real anonymity—some VPN providers (particularly those in the US) keep logs of traffic, and they don’t provide end-to-end protection. Tor protects traffic for much of the trip—at least until it reaches the exit node used to access the website or Internet service being requested. But Tor has hazards as well—in its basic form, it alerts those doing the monitoring that Tor is being used and can result in the user being targeted or blocked.

While there are other Tor-based tools to help protect anonymity, such as the Tor Browser bundle and the TAILS “live” CD and USB-bootable operating system, these are prone to accidental errors—like not waiting for Tor to be ready for traffic or simple misconfiguration. TAILS is restrictive, because it isolates the user within a Linux environment without access to local storage—not a great option for people who want to work with the operating system and software they use for their work.

“TAILS is a great project and piece of software, but it makes security assumptions about hardware which are probably not true today,” Lackey told Ars in an email interview after Def Con.

Privacy in your pocket

That's where the “travel router” comes in. Lackey said that a customized, secure router that allows people to just connect with their existing device over Ethernet or Wi-Fi is the “sweet spot” for maintaining anonymity. It isolates encryption and obfuscation from the user’s computer and eliminates the risk of the user forgetting to turn protection on. “The big advantage of something like PORTAL is being able to isolate failures to a dedicated outboard device and with a conceptually simple UI/UX,” Lackey told Ars. “It's a physical device, and when it's present and connected in line, traffic must pass through it. It never has your sensitive information on it.”

There are other low-cost routers available for privacy, such as the PogoPlug Safeplug. But Safeplug only offers basic Tor protection—making it impractical for use in countries such as China, where Internet surveillance systems watch for and shut down Tor traffic. The same goes for Onion Pi, a Raspberry Pi-based Tor appliance.

PORTAL includes the full capabilities of Tor—including pluggable transports for Tor, which can conceal Tor traffic from many of the network monitoring tools that look for patterns in packet data. There is an ever-expanding collection of pluggable transports, including:

• Bananaphone, which turns Tor traffic into “natural language” streams of words.
• Obfs4 and Scramblesuit, which obfuscate Tor by encrypting everything in Tor Transport Layer Security packets, eliminating the plaintext headers that identify the traffic.
• Flashproxy, which wraps Tor traffic in WebSocket format, disguises it with an XOR cipher, and bounces it through short-lived JavaScript proxies running in other computers’ browsers.
• Format-Transforming Encryption (FTE), which encodes Tor traffic to look like another protocol, such as SSH—avoiding detection by “regular expression” network filtering.
• Meek, which disguises Tor as ordinary web traffic sent to Google, then forwards it through a third-party server.
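For readers who want to try one of these transports without PORTAL hardware, a minimal torrc fragment enabling obfs4 looks roughly like this. The bridge address, fingerprint, and certificate below are placeholders, and the obfs4proxy path varies by system; working bridge lines are distributed by the Tor Project:

```
# Route traffic through an obfs4 bridge instead of a plain Tor entry node
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy

# Placeholder bridge line -- obtain a real one from the Tor Project
Bridge obfs4 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=<base64-cert> iat-mode=0
```

With a transport configured this way, the connection to the bridge no longer carries Tor's recognizable handshake, which is exactly the property PORTAL packages up for its users.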

The main drawback of PORTAL is that it currently isn’t a hardware product—it’s a GitHub download that must be “flashed” onto a TP-Link compatible pocket router. “The whole build process, management, etc. wasn't available at Def Con,” Lackey said. “Turning this into a tool directly usable by end users, or at least ‘power users’ or sysadmins responsible for a group of users, is important, and something we're working on. Watch this space. Being able to flash your own devices is great, but for [more than] 95 percent of users today, they don't even want to do that much (nor should they be expected to!), so we're working on a solution.”
http://arstechnica.com/information-t...ernet-traffic/





How You Can Help Demolish the Great Firewall of China from the Comfort of Your Living Room
Aaron Sankin

Online censorship is like the weather: everyone complains, but no one ever does anything about it.

Sure, that’s a gross over-generalization—lots of people around the world fight tirelessly every single day to combat the efforts of repressive governments that control the flow of digital information. Yet, for the vast majority of average Internet users, it seems like there’s little they can do to help a young person in China learn about what happened in Tiananmen Square in the summer of 1989.

This helplessness, at a time when over a quarter of the world’s population is blocked by their governments from having unfettered access to all of the information on the Internet, is precisely what the creators of Lantern want to remedy.

Lantern is a piece of software that allows people in countries without Internet restrictions to let someone in a place lacking those same kinds of freedoms see what the Internet looks like through their browsers.

The program was created late last year by Adam Fisk, a former engineer at the pioneering file sharing service Limewire, which was shut down by a federal judge in 2010. Fisk used his background in developing peer-to-peer technology to create a decentralized system of combating censorship that governments cannot effectively block.

“Up until now, censors have had the upper hand in being able to block these tools. Lantern uses peer-to-peer to get around that,” Fisk explained. “Individuals in uncensored regions can download and install it really easily and become these instant access points. It’s similar to a Facebook Like button in some sense, but actually having a tangible effect and giving access to people who need it.”

Lantern, which launched late last year, currently has around 25,000 users—mostly in China, but with a few thousand in Iran. Fisk expects that number to grow significantly as the company makes its first big push to increase the number of users in the “uncensored” world.

“It’s basically been spreading through word of mouth,” he noted. “In these [repressive] countries, if something works, people will find out about it and use it. Right now, in both China and Iran, really nothing else is working.”

If you download Lantern in an uncensored region, you can connect with someone in a censored region, who can then access whatever content they want through you. What makes the system so unique is that it operates on the basis of trust.

In order to use Lantern, you have to sign in with Google, and then information about your computer trickles through your network of real-world friends who are also using Lantern.

“In order for a censor to discover the IP addresses of your computer, they’d have to somehow convince you that they’re a friend,” Fisk explained. “It uses these real-world trust relationships to protect the IP addresses of these proxies because when you run Lantern in the uncensored world, you are a proxy.”

Even if you don’t know anyone living in Iran but still want to help LGBT youth in the country watch “It Gets Better” videos on YouTube, Lantern allows these connections to string together, one after another, to create a situation where long lines of trusted connections act as bridges for online content.

However, that doesn’t mean a single government censor who downloads the software would be able to bring down the whole system. Fisk attests that the network is able to detect attempts to block information from passing through and seamlessly route around them. “Even in just loading a single web page, Lantern may load all the different files on it from different peers all around the world.”

Through a process called consistent routing, the amount of information any single Lantern user can learn about other users is limited to a small subset, making infiltration significantly more difficult.
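Fisk doesn't detail the algorithm, but the effect he describes can be illustrated with a rendezvous-hashing sketch: each user ID deterministically maps to the same small subset of proxies on every query, so even a compromised account can never enumerate more than that subset. The names and subset size here are invented for illustration and are not Lantern's actual scheme:

```python
import hashlib

def visible_proxies(user_id: str, proxies: list, subset_size: int = 3) -> list:
    """Rank every proxy by a hash of (user, proxy) and expose only the
    top few -- the same few, every time, for any given user."""
    ranked = sorted(
        proxies,
        key=lambda p: hashlib.sha256((user_id + "|" + p).encode()).hexdigest(),
    )
    return ranked[:subset_size]

all_proxies = [f"proxy-{i}" for i in range(100)]

alice_view = visible_proxies("alice@example.com", all_proxies)
assert visible_proxies("alice@example.com", all_proxies) == alice_view  # stable across queries
assert len(alice_view) == 3  # an infiltrator learns at most this many addresses
```

Because the mapping is a pure function of the user's identity, repeated probing by a censor-run account reveals nothing new, while different users collectively still cover the whole proxy pool.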

Fisk knows that Lantern has to be careful because there have already been attempts by the Chinese government to prevent its citizens from using Lantern. Direct downloads of the program are already blocked. Most Chinese users have obtained the program through virtual private networks that allow people to use the web while severely limiting the information they give out about themselves, or by having the program emailed to them directly.

Last November, an Iranian satellite TV station reached out to do a story about Lantern. At that point, Lantern was still invite-only, but the station asked Fisk if he would make it so viewers of the show could immediately download it without having to have any special personal connections to current users. Fisk acquiesced and began allowing direct downloads to access a special “untrusted” portion of the Lantern network. Iranians began downloading the program en masse almost immediately and, soon after, people in China followed suit with nearly 20,000 people using the program in just a few weeks.

This activity attracted the attention of the Chinese government, which realized that traffic from Lantern looked a little different than other web traffic and managed to block its citizens’ access to the network. That blockage was only temporary, as Fisk’s team quickly altered Lantern’s traffic signature, making it considerably more difficult to detect.

Disguising Lantern’s traffic to look like other, unassuming types of traffic that censoring governments don’t block is actually a key part of its strategy. Lantern partners with other companies sympathetic to its mission to hide its traffic inside theirs.

Much like Tor, a tool that allows its users to (probably) surf the web with complete anonymity, Lantern is largely funded by the U.S. State Department. This funding arrangement has led to some fears that the NSA may have used that leverage to insert backdoors into the system.

“I think those concerns are definitely reasonable,” Fisk admitted. “But we’re working with people in the Department of Democracy, Human Rights, and Labor. These are men and women who are dedicated to the spread of…[free information] around the world…In my experience, the people we work with at the State Department are very different than the people across the river at the NSA in their agendas and their beliefs.”

He insisted that the project’s government backers have been very hands-off and, since the project is open source, anyone could go in and inspect the code themselves to see how it works and check for any backdoors that may have been put in place by government intelligence agents.

Lantern is currently in the midst of a fundraising campaign on Indiegogo.
http://www.dailydot.com/politics/lan...ne-censorship/





Father of PGP Encryption: Telcos Need to Get Out of Bed with Governments

Zimmermann’s Silent Circle working with Dutch telco to deliver encrypted calls.
Sean Gallagher

Phil Zimmermann, the creator of Pretty Good Privacy public-key encryption, has some experience when it comes to the politics of crypto. During the “crypto wars” of the 1990s, Zimmermann fought to convince the US government to stop classifying PGP as a “munition” and shut down the Clipper Chip program—an effort to create a government-mandated encryption processor that would have given the NSA a back door into all encrypted electronic communication. Now Zimmermann and the company he co-founded are working to convince telecommunications companies—mostly overseas—that it’s time to end their nearly century-long cozy relationship with governments.

Zimmermann compared telephone companies’ thinking with the long-held belief that tomatoes were toxic until it was demonstrated they weren’t. “For a long time, for a hundred years, phone companies around the world have created a culture around themselves that is very cooperative with governments in invading people’s privacy. And these phone companies tend to think that there’s no other way—that they can’t break from this culture, that the tomatoes are poisonous," he said.

A call for crypto

Back in 2005, Zimmermann, Alan Johnston, and Jon Callas began work on an encryption protocol for voice over IP (VoIP) phone calls, dubbed ZRTP, as part of Zimmermann’s Zfone project. In 2011, ZRTP became an Internet Engineering Task Force RFC, and it has been published as open source under a BSD license. It’s also the basis of the voice service for Silent Circle, the end-to-end encrypted service Zimmermann co-founded with former Navy SEAL Mike Janke. Silent Circle, which Ars tested on the Blackphone in June, is a ZRTP-based voice and ephemeral messaging service that generates session-specific keys between users to encrypt from end to end. The call is tunneled over a Transport Layer Security-encrypted connection through Silent Circle’s servers in Canada and Switzerland. ZRTP and the Silent Circle calls don’t rely on PGP or any other public key infrastructure, so there are no keys to hand over under a FISA order or law enforcement warrant.
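The "no keys to hand over" property comes from ephemeral key agreement: each call derives a fresh shared secret that is discarded when the call ends. A toy Diffie-Hellman exchange shows the idea; note the deliberately small Mersenne prime chosen only to keep the example readable, whereas ZRTP uses large standardized DH groups and adds a short authentication string to catch man-in-the-middle attacks:

```python
import hashlib
import secrets

# Toy parameters for illustration only -- real deployments use
# standardized groups of 2048 bits or more.
P = 2**127 - 1  # a Mersenne prime
G = 3

def ephemeral_keypair():
    # A fresh private exponent per call; nothing long-lived to subpoena.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

alice_priv, alice_pub = ephemeral_keypair()
bob_priv, bob_pub = ephemeral_keypair()

# Each side combines its private key with the peer's public value;
# both arrive at the same session key, which no third party can compute.
alice_secret = hashlib.sha256(str(pow(bob_pub, alice_priv, P)).encode()).digest()
bob_secret = hashlib.sha256(str(pow(alice_pub, bob_priv, P)).encode()).digest()

assert alice_secret == bob_secret  # both ends derive the same session key
```

Since the exponents exist only in memory for the duration of the call, a warrant served later finds nothing to decrypt past traffic with, which is exactly the architecture described above.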

Now, thanks largely to the revelations of NSA and GCHQ monitoring of telecommunications triggered by documents leaked by Edward Snowden, there’s a growing market demand for call privacy—and telecom companies, especially in Europe, have become more receptive to the idea of giving customers the power to protect their privacy. In February, Dutch telecommunications carrier KPN signed a deal to be the exclusive provider of Silent Circle’s encrypted voice call service in the Netherlands, Belgium, and Germany. The company started offering Silent Circle services to customers this summer.

That move was driven, Zimmermann said, by KPN’s chief information security officer, Jaya Baloo. “She decided she wanted to break ranks from the rest of the phone companies and get KPN to offer their customers privacy,” Zimmermann said. “So for the first time, you see a phone company offer real privacy. My hope is that other phone companies will find the tomatoes are not poisonous.”

Defense through dependency

Thanks in part to Janke’s connections, the service has been adopted by the Navy SEALs—not just for calling home, but for operational communications—as well as by Canadian, British, and Australian special operations forces, members of the US Congress and US law enforcement. “About a year ago we had a visit from the FBI in our office,” Zimmermann said. “Mike Janke called and told me, ‘The FBI was in our office today,’ and I said, ‘Oh no, it’s started already.’ And he said, ‘No, no, they were just here to ask about pricing.’”

All of this plays into Zimmermann’s strategy to keep government agencies from pressing for backdoors into Silent Circle's service. “I thought what we need is, we needed to create the conditions where nobody was going to lean on us for backdoors because they need it themselves. If Navy SEALs are using this, if our own government develops a dependency on it, then they’ll recognize that it would be counter-productive for them to get a backdoor in our product. Now maybe it was an overabundance of caution, because they never asked for a backdoor in PGP, but that took years to get that propagated into government customers. We saw government customers take this up almost as soon as the product was ready—in fact before the product was ready they were asking about it. So we’ve created a situation where it’s difficult for them to even bring up the suggestion of a backdoor.”

That’s not to say that everything has gone smoothly. Zimmermann’s company had to abandon its secure email service in the wake of the shutdown of LavaBit. “We wiped out our entire secure email service—backups, and everything,” Zimmermann told the Def Con audience. “Some of our customers were pissed off, but for the most part they understood we were protecting their privacy.”

Giving NIST (and RSA) the finger

Doing business with US government customers generally requires the use of National Institute of Standards and Technology (NIST) standards for encryption. But by default, Zimmermann said, Silent Circle uses an alternative set of encryption tools.

“It wasn’t because there was anything actually wrong with the NIST algorithms,” Zimmermann explained. “After the Snowden revelations, we felt a bit resentful that NIST had cooperated with the NSA."

He continued, “So to express our displeasure at NIST, we offered alternative algorithms. We’re using a new elliptic curve (encryption algorithm) that we commissioned Dan Bernstein to do for us, we use a Twofish block cypher, and we use Skein as our hash function.”

Silent Circle does offer the NIST algorithms as an alternative. But he took the opportunity to use the controversy over the NIST standard’s now-deprecated random number generator—one that was crafted by the NSA to provide a way to break encryption—to get in a few digs at an old adversary. “We’re not using the stupid random number generator that NIST did at the behest of the NSA,” he said in response to a Def Con audience question. “I can’t imagine why anyone would use such a stupid random number generator. But apparently RSA did, and put it in their BSAFE subroutine library, which is closed source. It’s funny, back in the 90s, back when RSA started the criminal investigation against me by calling up the prosecutor and asking him to put me in prison, they said RSA was the most trusted name in cryptography…So, it’s ironic that we find today that they were paid $10 million to put an NSA-designed random number generator in their subroutine library.”
http://arstechnica.com/tech-policy/2...th-government/





What's the Matter with PGP?
Matthew Green

Last Thursday, Yahoo announced their plans to support end-to-end encryption using a fork of Google's end-to-end email extension. This is a Big Deal. With providers like Google and Yahoo onboard, email encryption is bound to get a big kick in the ass. This is something email badly needs.

So great work by Google and Yahoo! Which is why the following complaint is going to seem awfully ungrateful. I realize this, and I couldn't feel worse about it.

As transparent and user-friendly as the new email extensions are, they're fundamentally just re-implementations of OpenPGP -- and non-legacy-compatible ones, too. The problem with this is that, for all the good PGP has done in the past, it's a model of email encryption that's fundamentally broken.

It's time for PGP to die.

In the remainder of this post I'm going to explain why this is so, what it means for the future of email encryption, and some of the things we should do about it. Nothing I'm going to say here will surprise anyone who's familiar with the technology -- in fact, this will barely be a technical post. That's because, fundamentally, most of the problems with email encryption aren't hyper-technical problems. They're still baked into the cake.

Background: PGP

Back in the late 1980s a few visionaries realized that this new 'e-mail' thing was awfully convenient and would likely be the future -- but that Internet mail protocols made virtually no effort to protect the content of transmitted messages. In those days (and still in these days) email transited the Internet in cleartext, often coming to rest in poorly-secured mailspools.

This inspired folks like Phil Zimmermann to create tools to deal with the problem. Zimmermann's PGP was a revolution. It gave users access to efficient public-key cryptography and fast symmetric ciphers in a package you could install on a standard PC. Even better, PGP was compatible with legacy email systems: it would convert your ciphertext into a convenient ASCII armored format that could be easily pasted into the sophisticated email clients of the day -- things like "mail", "pine" or "the Compuserve e-mail client".

It's hard to explain what a big deal PGP was. Sure, it sucked badly to use. But in those days, everything sucked badly to use. Possession of a PGP key was a badge of technical merit. Folks held key signing parties. If you were a geek and wanted to discreetly share this fact with other geeks, there was no better time to be alive.

We've come a long way since the 1990s, but PGP mostly hasn't. While the protocol has evolved technically -- IDEA replaced BassOMatic, and was in turn replaced by better ciphers -- the fundamental concepts of PGP remain depressingly similar to what Zimmermann offered us in 1991. This has become a problem, and sadly one that's difficult to change.

Let's get specific.

PGP keys suck

Before we can communicate via PGP, we first need to exchange keys. PGP makes this downright unpleasant. In some cases, dangerously so.

Part of the problem lies in the nature of PGP public keys themselves. For historical reasons they tend to be large and contain lots of extraneous information, which makes them difficult to print on a business card or compare manually. You can write this off as a quirk of older technology, but even modern elliptic curve implementations still produce surprisingly large keys.

Since PGP keys aren't designed for humans, you need to move them electronically. But of course humans still need to verify the authenticity of received keys, as accepting an attacker-provided public key can be catastrophic.

PGP addresses this with a hodgepodge of key servers and public key fingerprints. These components respectively provide (untrustworthy) data transfer and a short token that human beings can manually verify. While in theory this is sound, in practice it adds complexity, which is always the enemy of security.

Now you may think this is purely academic. It's not. It can bite you in the ass.

Imagine, for example, you're a source looking to send secure email to a reporter at the Washington Post. This reporter publishes his fingerprint via Twitter, which means the most obvious (and recommended) approach is to ask your PGP client to retrieve the key by fingerprint from a PGP key server -- on the GnuPG command line, with gpg --recv-keys followed by the fingerprint.

Now let's ignore the fact that you've just leaked your key request to an untrusted server via HTTP. At the end of this process you should have the right key with high reliability. Right?

Except maybe not: if you happen to do this with GnuPG 2.0.18 -- one version off from the very latest GnuPG -- the client won't actually bother to check the fingerprint of the received key. A malicious server (or HTTP attacker) can ship you back the wrong key and you'll get no warning. This is fixed in the very latest versions of GPG but... Oy Vey.
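The check that GnuPG 2.0.18 skipped is conceptually simple: recompute the fingerprint of whatever key actually came back and compare it to the fingerprint you started with. A toy illustration of that idea -- real OpenPGP v4 fingerprints are SHA-1 over a specific public-key packet serialization, which this sketch glosses over, and the byte strings are placeholders:

```python
import hashlib

def fingerprint(key_material: bytes) -> str:
    """Hash the key material and format the digest as the familiar
    4-hex-digit blocks humans compare by eye. (A simplification: real
    OpenPGP v4 fingerprints hash a defined packet serialization.)"""
    digest = hashlib.sha1(key_material).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

published = fingerprint(b"the reporter's real public key")
received = fingerprint(b"an attacker's substituted key")

# This is the comparison the buggy client never made: a conforming
# client must refuse the key when the fingerprints differ.
assert published != received
```

The point is how cheap the check is: one hash and one string comparison, which makes its omission all the more striking.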

You can say that it's unfair to pick on all of PGP over an implementation flaw in GnuPG, but I would argue it speaks to a fundamental issue with the PGP design. PGP assumes keys are too big and complicated to be managed by mortals, but then practically begs users to handle them anyway. This means we manage them through a layer of machinery, and it happens that our machinery is far from infallible.

Which raises the question: why are we bothering with all this crap infrastructure in the first place? If we must exchange things via Twitter, why not simply exchange keys? Modern EC public keys are tiny. You could easily fit three or four of them in the space of this paragraph. If we must use an infrastructure layer, let's just use it to shunt all the key metadata around.
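The size claim is easy to make concrete: a modern elliptic-curve public key (Curve25519, say) is 32 bytes, which base64-encodes to 44 characters -- short enough to paste into a tweet. A minimal sketch, using random bytes as a stand-in for a real key:

```python
import base64
import os

# A Curve25519 public key is 32 bytes. Random bytes are a stand-in
# here; a real key would come from a cryptography library.
fake_public_key = os.urandom(32)
encoded = base64.b64encode(fake_public_key).decode("ascii")

# 44 characters -- it fits in a tweet with room to spare.
print(len(encoded))
```

Compare that with a typical RSA-based PGP public key block, which runs to a couple of thousand ASCII-armored characters.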

PGP key management sucks

Manual key management is a mug's game. Transparent (or at least translucent) key management is the hallmark of every successful end-to-end secure encryption system. Now often this does involve some tradeoffs -- e.g., the need to trust a central authority to distribute keys -- but even this level of security would be light-years better than the current situation with webmail.

To their credit, both Google and Yahoo have the opportunity to build their own key management solutions (at least, for those who trust Google and Yahoo), and they may still do so in the future. But today's solutions don't offer any of this, and it's not clear when they will. Key management, not pretty web interfaces, is the real weakness holding back widespread secure email.

For the record, classic PGP does have a solution to the problem. It's called the "web of trust", and it involves individuals signing each other's keys. I refuse to go into the problems with WoT because, frankly, life is too short. The TL;DR is that 'trust' means different things to you than it does to me. Most OpenPGP implementations do a lousy job of presenting any of this data to their users anyway.

The lack of transparent key management in PGP isn't unfixable. For those who don't trust Google or Yahoo, there are experimental systems like Keybase.io that attempt to tie keys to user identities. In theory we could even exchange our offline encryption keys through voice-authenticated channels using apps like OpenWhisperSystems' Signal. So far, nobody's bothered to do this -- all of these modern encryption tools are islands with no connection to the mainland. Connecting them together represents one of the real challenges facing widespread encrypted communications.

No forward secrecy

Try something: go delete some mail from your Gmail account. And not just with the archive button -- presumably you've also permanently wiped your Deleted Items folder. Now make sure you wipe your browser cache and the mailbox files for any IMAP clients you might be running (e.g., on your phone). Do any of your devices use SSD drives? Probably a safe bet to securely wipe those devices entirely. And at the end of this Google may still have a copy which could be vulnerable to law enforcement request or civil subpoena.
(Let's not get into the NSA's collect-it-all policy for encrypted messages. If the NSA is your adversary just forget about PGP.)

Forward secrecy (usually misnamed "perfect forward secrecy") ensures that if you can't destroy the ciphertexts, you can at least dispose of keys when you're done with them. Many online messaging systems like off-the-record (OTR) messaging use forward secrecy by default, essentially deriving a new key with each message volley sent. Newer 'ratcheting' systems like Trevor Perrin's Axolotl (used by TextSecure) have also begun to address the offline case.
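The core ratcheting idea fits in a few lines. This is a toy hash ratchet in the spirit of OTR and Axolotl, not the actual construction either protocol uses: each step derives a one-time message key plus a fresh chain key, and deleting the old chain key makes past message keys unrecoverable even if the current state is later seized.

```python
import hashlib
import hmac

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key and the next chain key from the
    current chain key, using HMAC-SHA256 with distinct labels. Deleting
    the old chain key after each step is what buys forward secrecy:
    earlier message keys can no longer be recomputed."""
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain_key

chain = hashlib.sha256(b"initial shared secret").digest()
keys = []
for _ in range(3):
    message_key, chain = ratchet(chain)  # old chain key is overwritten here
    keys.append(message_key)

# Every message volley gets its own key.
assert len(set(keys)) == 3
```

A real protocol also mixes in fresh Diffie-Hellman exchanges so that compromise of one chain key doesn't expose future messages; this sketch shows only the one-way hash direction.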

Adding forward secrecy to asynchronous offline email is a much bigger challenge, but fundamentally it's at least possible to some degree. While securing the initial 'introduction' message between two participants may be challenging*, each subsequent reply can carry a new ephemeral key to be used in future communications. However this requires breaking changes to the PGP protocol and to clients -- changes that aren't likely to happen in a world where webmail providers have doubled down on the PGP model.

The OpenPGP format and defaults suck

Poking through a modern OpenPGP implementation is like visiting a museum of 1990s crypto. For legacy compatibility reasons, many clients use old ciphers like CAST5 (a cipher that predates the AES competition). RSA encryption uses padding that looks disturbingly like PKCS#1v1.5 -- a format that's been relentlessly exploited in the past. Key size defaults don't reach the 128-bit security level. MACs are optional. Compression is often on by default. Elliptic curve crypto is (still!) barely supported.

Most of these issues are not exploitable unless you use PGP in a non-standard way, e.g., for instant messaging or online applications. And some people do use PGP this way.

But even if you're just using PGP to send one-off emails to your grandmother, these bad defaults are pointless and unnecessary. It's one thing to provide optional backwards compatibility for that one friend who runs PGP on his Amiga. But few of my contacts do -- and moreover, client versions are clearly indicated in public keys.** Even if these archaic ciphers and formats aren't exploitable today, the current trajectory guarantees we'll still be using them a decade from now. Then all bets are off.

On the bright side, both Google and Yahoo seem to be pushing towards modern implementations that break compatibility with the old. Which raises a different question. If you're going to break compatibility with most PGP implementations, why bother with PGP at all?

Terrible mail client implementations

This is by far the worst aspect of the PGP ecosystem, and also the one I'd like to spend the least time on. In part this is because UX isn't technically PGP's problem; in part because the experience is inconsistent between implementations, and in part because it's inconsistent between users: one person's 'usable' is another person's technical nightmare.

But for what it's worth, many PGP-enabled mail clients make it ridiculously easy to send confidential messages with encryption turned off, to send unimportant messages with encryption turned on, to accidentally send to the wrong person's key (or the wrong subkey within a given person's key). They demand you encrypt your key with a passphrase, but routinely bug you to enter that passphrase in order to sign outgoing mail -- exposing your decryption keys in memory even when you're not reading secure email.

Most of these problems stem from the fact that PGP was designed to retain compatibility with standard (non-encrypted) email. If there's one lesson from the past ten years, it's that people are comfortable moving past email. We now use purpose-built messaging systems on a day-to-day basis. The startup cost of a secure-by-default environment is, at this point, basically an app store download.

Incidentally, the new Google/Yahoo web-based end-to-end clients dodge this problem by providing essentially no user interface at all. You enter your message into a separate box, and then plop the resulting encrypted data into the Compose box. This avoids many of the nastier interface problems, but only by making encryption non-transparent. This may change; it's too soon to know how.

So what should we be doing?

Quite a lot actually. The path to a proper encrypted email system isn't that far off. At minimum, any real solution needs:

• A proper approach to key management. This could be anything from centralized key management as in Apple's iMessage -- which would still be better than nothing -- to a decentralized (but still usable) approach like the one offered by Signal or OTR. Whatever the solution, in order to achieve mass deployment, keys need to be made much more manageable or else submerged from the user altogether.
• Forward secrecy baked into the protocol. This should be a pre-condition to any secure messaging system.
• Cryptography that post-dates the Fresh Prince. Enough said.
• Screw backwards compatibility. Securing both encrypted and unencrypted email is too hard. We need dedicated networks that handle this from the start.

A number of projects are already going in this direction. Aside from the above-mentioned projects like Axolotl and TextSecure -- which pretend to be text messaging systems, but are really email in disguise -- projects like Mailpile are trying to re-architect the client interface (though they're sticking with the PGP paradigm). Projects like SMIMP are trying to attack this at the protocol level.*** At least in theory, projects like DarkMail are also trying to adapt text messaging protocols to the email case, though details remain few and far between.

It also bears noting that many of the issues above could, in principle at least, be addressed within the confines of the OpenPGP format. Indeed, if you view 'PGP' to mean nothing more than the OpenPGP transport, a lot of the above seems easy to fix -- with the exception of forward secrecy, which really does seem hard to add without some serious hacks. But in practice, this is rarely all that people mean when they implement 'PGP'.

Conclusion

I realize I sound a bit cranky about this stuff. But as they say: a PGP critic is just a PGP user who's actually used the software for a while. At this point there's so much potential in this area, and so many opportunities to do better. It's time for us to adopt those ideas and stop looking backwards.
http://blog.cryptographyengineering....-with-pgp.html





Why One of Cybersecurity’s Thought Leaders Uses a Pager Instead of a Smart Phone
Andrea Peterson

In the computer and network security industry, few people are as well known as Dan Geer. A long-time researcher widely regarded as one of the industry's thought leaders, Geer is currently the Chief Information Security Officer at In-Q-Tel -- a non-profit venture capital firm that invests in technology to support the Central Intelligence Agency.

Speaking on his own behalf as the keynote at Black Hat USA last week, Geer laid out an ambitious plan to help secure the Internet and define privacy in the digital age, including mandating security breach disclosure, having the U.S. government buy and disclose all the zero day vulnerabilities it can find, and supporting an even stronger "right to be forgotten" than is currently being tried out by the European Union. His full keynote is available to watch on YouTube -- or to read via Black Hat's Web site.

The Switch spoke with him after his keynote to dig into a different topic that he touched on: His distrust of increasing data collection and how he tries to stay off the digital grid in his own life. This interview has been lightly edited for length and clarity.

One of the things I was very interested in from your talk was your personal approach to technology now -- as one of the sort of elders of the cybersecurity community you really seem to try to stay off the network as much as possible. Is that accurate?

I don't carry a cellphone. Honestly, it's a nuisance -- it would be very helpful because as you know things aren't about planning these days, they're about coordination. "Oh, did you see this? Get over here." It's about coordination rather than planning.

But on the other hand, I testified actually twice -- once at the FCC, once in a congressional committee -- that if you required location tracking, I was going to give one up. And to an extent, it's only putting my money where my mouth is. I said I would give it up, and went ahead and did it. So you say, "Well, you're cutting off your nose to spite your face, you're just being stubborn." But no, I meant it.

You no doubt have written about data retention laws and the like... The whole bit about data retention laws bothers me in many ways. On the other hand, if you're an optimist or you're in a position to control how data is used, you'll be much more comfortable about having it. Does the name Alessandro Acquisti mean anything to you?

It does -- although probably more to you...

He's a professor at Carnegie Mellon. I think he's about as good a designer of experiments in privacy -- in particular, people's real opinions on privacy -- as anybody. He's really good at experimental design, which, speaking as someone who was once trained as a statistician, appeals to me. He runs very clever experiments, and those clever experiments include getting past the institutional review committee, which is not exactly a walk in the park...

But he's done a bunch of things, and shown that if you give people fine-grained control over what their information is in public, people reveal more. People might say that if you give them a lot of control, they'd reveal less -- but it doesn't work that way. People will reveal more if they have more control, so to a certain extent what he's verifying is sort of my own feeling: If I don't have control, I don't want to reveal it.

Part of my personal opinion about all this is that I don't trust a situation where I have not only no control about its use, but no visibility about whether it is being used. Take electronic health records. We're obviously going towards it in a big way. But I ask you, who owns the electronic health records?

That's a good question.

I worked in Harvard's teaching hospitals for 10 years after getting out of college. And in 1974, I'm fairly certain that was the year -- this is by memory, but I'm fairly certain that was the year -- but in Massachusetts that's when who owned the medical record changed from being the individual to the institution. Before that, when I went to the window I could say "give me my record" and you would have to produce a stack of paper and when I took it and walked out, I had my records. There wasn't another.

That was changed, ostensibly, to combat insurance fraud -- people were taking records, removing parts of it and going to another institution to get more medicine or more whatever. Insurance fraud, okay? But the point was there was a record and you knew where it was -- because if I have it, you don't, and if you have it, I don't. But now electronic health records, where is that going to go? There are people who argue that in the world of electronic health records, it's natural for it to revert to the patients. I think that's probably true -- but let's think about this.

I have a practicing lawyer friend who argues that in a world where malpractice suits are so ordinary, common, and frequent, it might not be the case. If you are a practitioner and it's 100 percent electronic records and you're worried about being sued, will you or will you not want a copy of that record in your files as well as wherever else it might be? Or are you willing to say, "I looked at Dan's records in this cloud at this time and it told me I should give you the transfusion" versus "I've got a copy of the record and this is what I used to make my decision, and you know that my copy and this copy are not the same, so someone has modified it"?

So that's going back to what this guy is actually talking about doing: Founding a company that provides time-stamped delivery of medical records fragments so that someone can say, "no, this is what I had and I can prove it -- this third party over here can say, no that is what I transmitted to Dan's doctor on this date. We don't know what's in it because it was encrypted, but we can say it was the same bits because we stamped it in a certain way."
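The scheme Geer describes can be sketched in a few lines: a notary stores only the hash of an encrypted fragment plus a timestamp, so it can later attest that two copies are bit-for-bit identical without ever seeing the plaintext. The names and structure here are hypothetical -- a minimal sketch of the idea, not the design of any actual company:

```python
import hashlib
import time

class TimestampNotary:
    """A third party that never sees plaintext: it stamps the SHA-256
    digest of an encrypted record fragment, and can later attest that
    a presented copy is the same bits it stamped earlier."""

    def __init__(self):
        self._log = {}  # digest -> time first stamped

    def stamp(self, encrypted_fragment: bytes) -> str:
        digest = hashlib.sha256(encrypted_fragment).hexdigest()
        self._log.setdefault(digest, time.time())
        return digest  # the receipt the doctor keeps

    def attest(self, encrypted_fragment: bytes, receipt: str) -> bool:
        digest = hashlib.sha256(encrypted_fragment).hexdigest()
        return digest == receipt and digest in self._log

notary = TimestampNotary()
record = b"<ciphertext of a medical record fragment>"
receipt = notary.stamp(record)

assert notary.attest(record, receipt)             # unmodified copy checks out
assert not notary.attest(record + b"x", receipt)  # any tampering is detected
```

This is why integrity can survive even where confidentiality is contested: the notary's attestation depends only on the hash, not on anyone being trusted with the record's contents.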

And I think he's right about that -- the integrity of electronic health records becomes perhaps more important than confidentiality. It may well be that we are at a moment in time when, under the pressure to provide observability, confidentiality, for better or worse, goes away. I'm using electronic health records as the example, but it could just as easily be cars, or the smart grid, or anything else. But that leaves the question of integrity.

I'm sure you've seen this, but the so-called CIA rule -- confidentiality, integrity and availability -- is a traditional triad of computer security concerns. Availability is not as big of one, but it has to do with "if I go looking for Dan's records, will I actually get them." Integrity is "has anybody mucked with it?" And confidentiality is "has anyone who never had any reason to know been able to see it?" I think it's honest to say we may lose a certain amount of confidentiality control. It would be most unfortunate if we lost integrity control at the same time.

So what do we do when there's lots of fragments of my medical records and every practitioner I deal with wants their piece of it, or maybe the whole thing? Integrity actually is the big deal then, I would argue.

Arguably the same points could be made about tracking cars for insurance purposes...

And I understand why you would say you want to record everything with a car -- I understand that. Where has it been? One of the things Tim O'Reilly suggested in his work on Algorithmic Regulation was, well, you know, you could make obeying the speed limit built into the car, but you could also make the speed limit dependent on how crowded the roads are -- so you could drive faster in the off hours. In rush hour, the car would drive slower. Yeah, we'd probably do what they do in London and adjust for congestion. But his point is that instead of regulating the prior conditions -- "you can't go faster than this, or you can't do that" -- regulate it on the run with an algorithm. Of course, that way lies wonderful things and terrible things. It's what do you think is probable? As for myself...

I'm not a Luddite. Luddites smash machines. But I am getting older, and it's easier to say "why do I care?" To continue with the cellphone conversation, it would be especially useful. A member of my family is mentally ill, I've been carrying a paging device but the pager companies are slowly going out of business because of this [points towards cellphone being used to record interview] for obvious reasons. But it's important for some people to be able to reach me for certain situations that occur, as you might guess. Maybe I'll give up and do this. GPS built into cars, or OnStar that you can't turn off. Do you care about that? As I'm sure you know, the most common reaction is "I live a good life, I have nothing to hide." Daniel Solove has a book about this in which he dismembers that argument, showing that just because you have nothing to hide doesn't mean that you want everything recorded.

This comes down to what you expect as defaults. For me, the default I find easier to expect is "data doesn't exist" rather than "data exists but we handle it properly." Look, I mean, nobody who contributed to the 1.2 billion passwords [reportedly in the hands of Russian cybercriminals] expected to do that. That's presumably a rare event -- reasonably rare -- but what default you are willing to accept, or what default feels natural to you, is really what it comes down to. For me, the default of "the data doesn't exist" seems more natural than trusting everyone not to abuse it.

So you don't trust a world where data creation and collection is the default.

Trust, what's the definition of trust? You know I have a sort of personal definition of privacy and a personal definition of security. For me, trust is the availability of effective recourse. I don't guard myself if I have effective recourse, so I trust family members because of course you always have some recourse if it's family -- one way or another you do, maybe not today, maybe not with your grandmother, but you do. There's always something. But there's lots of situations now where I would have no effective recourse, so I don't trust it. If I don't trust it, what should be my default? The answer is probably the creation of data is something I should avoid if I can do so. That's an awfully long-winded answer to your initial question, but nuances matter.

Do you think that the new default as surveillance has become more ubiquitous is that everything is public to a certain extent?

Man, what is public these days? If I can read your newspaper from orbit, what is public? If I can tell where you are in your house by imaging through the wall, what is public? On and on and on. We're not there yet, but I figure we're within a few years of being able to figure out if you're in a room by sniffing out your DNA. Is that public? Or putting it differently, as that sphere enlarges, what remains private? Do you have to own a house to have privacy in it or not? If your landlord owns a house...

And mine does.

So does he have the right to examine the records of a smart meter and see if you're running the toaster and the washer and the air conditioning at full blast? The farm that we have, we have several people who live there who work there just because with horses if you have an emergency, like god forbid a fire, you have a very short time -- there's no time to call people and ask them to get their pants on to come help. It's now or never. So we have a certain number of people, six, two trainers, three groomers, and a vet in training. I did, in fact, put in a water meter of sorts -- a flow rate meter -- because one of our wells kept running dry and I wanted to know if there was a leak in the pipes and it was just seeping down into the ground or if someone was taking six hour showers or what. You might say, that's a little invasive. Turned out, one of the tenants had a bad leak in the bathroom and thought nothing of it. "You could have told me" was my reaction.

But the point is that after I ascertained that water was indeed not going back into the ground, I knew it had to be going somewhere. And I don't go visit other people's apartments at random -- I could but I don't. But sure, I put a meter on. And certainly your average electrical engineering student could create a device to determine if you're on your cell phone. Maybe that doesn't matter, but they could tell when you're on the phone. So they wait until you're on the phone, run up to the porch and steal your newspaper.

I'm making this up, but if it's observable does that mean it's public? That's sort of your question, and my question too. Just because it's observable without crossing the boundaries of your property, does that mean it's public? I think if we don't do something, that's where it's going. What was it, the 1920s [Olmstead] through the 1960s [Katz], where wiretaps went from not requiring warrants to, of course they do? It was very plausible -- this wire leaves your property, why wouldn't you expect it to be listened to on someone else's property? The decision that overruled that was "no, you have a reasonable expectation of privacy." But that phrase, reasonable expectation, is open to interpretation. What's a reasonable expectation? As you know, it doesn't take much to have a parabolic antenna, and we can listen to you in an open room.

Yep, someone else could be recording this interview right now.

Absolutely. Or there could be a ghost in your machine.

Yep.

But it's observable, because it's in public. I'd like to think that we stopped there for a little bit. We can always let go later. We can always say, "nope, your copyright is invalid because it was published three times" and it's now in the public domain. That happened once to the poem "Desiderata" -- "go placidly amid the noise and haste" -- it was published repeatedly in church newsletters and the courts said it was in the public domain. What is the public domain? That's really the question. Technology is changing what is public by changing what is observable, and that's what I'm getting at. And I don't know the answer, but I do know that if we don't answer it, things will continue.

So one of the points you made earlier was that it is actually very inconvenient for you not to have a smartphone. Clearly, I'm recording this interview on my smartphone -- actually I have two smartphones on me, and a laptop, and all other kinds of gadgetry...

Of course.

But it seems like the lifestyle choice you've made would be very difficult for a lot of people without your technical understanding or resources...

Yes, and maybe without my gray hair. I'm not asking how old you are, but young people such as yourself in a way can't do without social media. If you're a high school student for example and you don't play that game you will not be part of any circle of friends -- or probably not, maybe if you're going to a forestry high school or something, but you know what I'm saying: Generally speaking, it changes what is possible on the human scale that you almost have no choice and I understand that. Just to be clear, I'm not belittling that at all.

Your question was about lifestyle choice, and I said it in the talk: There's an old engineering rule about fast, cheap, and reliable -- choose two. If you're at NASA and you're sending something to the moon you need it to be fast and reliable, but you can throw away cheap. Throwaway medical instruments in an operating room need to have a different thing -- doesn't have to work for long, and since you're going to throw it away it would be nice if it's cheap, so you make your trade-offs.

That as a rule of thumb is mostly what engineering is about. You can have most things, but not everything. I think security engineering is about tolerable failure modes -- about what the tolerable levels of failure are. Determine what failure modes are tolerable and which are not, and I can design around not having the intolerable ones. But the cost of it will be some others, because you can't have them all. So when I say not fast, cheap, and reliable but freedom, security, and convenience, choose two -- it's in that spirit as an engineer.

I was once trained as an electrical engineer, so that rings true to me -- maybe, you could say, for reasons of indoctrination. But I would say really everything in life is a trade-off. There's an economic argument that the cost of everything is the forgoing of an alternative. If you buy a hundred-dollar this, you can't buy a hundred-dollar something else. In this case, the forgone alternative is on the risk-accumulation side. But it was a choice saying, "what do I need" versus "what do I want?"

I live in a world of old machinery -- with hat number two [as a farmer] on. And old machinery has an interesting characteristic compared to new machinery. New machinery doesn't break very often, but when it does you cannot fix it. The old machinery breaks all the damn time, but anybody with a few wrenches, a hammer, and a willingness to get dirty can fix it. One of my guys set an old tractor on fire -- burnt out the wiring harness. I have no instructions, but it's so straightforward. It was a freakin' mess, but it's fixable. Maybe your newspaper has covered the right to repair.

Yes, we've talked to the iFixit folks about how consumers' ability to repair items in their own lives has really changed.

Right. I'm in a sense flying in formation out of regular contact with those folks. As I said, I live with old machinery which breaks often, but any idiot can fix. Who fixes their own Prius? I haven't heard of anybody -- there's one guy I know who could and might well, but he also spent an entire summer working in a Prius shop because he wanted to know how it works.

Which is not a luxury everyone else has.

It's not a luxury everyone else has. In fact, when I said from the podium that one way for a supplier to avoid liability would be to give consumers the right to recompile, well, I was talking to someone a few days ago who said, "well, nobody wants to recompile, nobody knows how -- who do you think you are?" It was a good point, not denying that at all.

But if the choice is "here are the means to change it or repair it or whatever, you don't have to use them, if you do use them it will work, but if you want to do that, the following rules apply: you must bring it to the dealership, you must bring it on schedule, and if you have a collision we need to know about it." I'm making this all up, but lithium batteries don't take shocks very well so if you do have a collision with a bunch of lithium batteries in the back of your car, you probably ought to look at it.

You know, Jeff [Moss, the founder of Black Hat also known as The Dark Tangent] was talking in his opening remarks about "radical simplicity" -- I'm not quite sure what that means in plain English. Is that a movement or a term of art or something?

I don't know, quote, what it means, but let me guess: You can actually draw a line around something and say, "all of the moving parts in the system are inside this box -- I don't have to know about a cloud in Singapore, I don't have to know." After all, how did Target get taken over? Their air conditioning contractor -- who probably knows nothing about computers and shouldn't have to. If you go to the big banks in New York, I wish I could say which ones, but I probably shouldn't. But most of the ones I know, and that is a subset, are really bearing down on what they call counterparty risk: If you have access to my data through some relationship, then an invasion of you is an invasion of me, therefore I'm going to hold you to standards that are relevant to me. Even if they aren't relevant to you, if you want to do business with me you're going to have to do this.

And the banks are really enforcing this. If you're a trade clearance firm, what are you doing? The answer is making lists and comparing them and looking for good matches -- but no, there are all sorts of other requirements, because you won't be able to do business with your clients unless they can make sure your air conditioning contractor can't get into you the way Target's got a hold of them. That's the complexity, and maybe that's what this radical simplicity says: I should be able to ascertain what the moving parts are.

What is it that Leslie Lamport says? A distributed system is one where the failure of a machine you've never heard of stops you from being able to do your job.

Yes, I've had that problem many times...

I went to check out and they told me I couldn't because their computer wasn't working, and I was like "wait a minute, do you know what audience you have here?" I didn't say anything, but you know when you can't give money to the front desk at this conference [it] just seems highly coincidental.
http://www.washingtonpost.com/blogs/...a-smart-phone/





The Gyroscopes in Your Phone Could Let Apps Eavesdrop on Conversations
Andy Greenberg

In the age of surveillance paranoia, most smartphone users know better than to give a random app or website permission to use their device’s microphone. But researchers have found there’s another, little-considered sensor in modern phones that can also listen in on their conversations. And it doesn’t even need to ask.

In a presentation at the Usenix security conference next week, researchers from Stanford University and Israel’s defense research group Rafael plan to present a technique for using a smartphone to surreptitiously eavesdrop on conversations in a room—not with a gadget’s microphone, but with its gyroscopes, the sensors designed to measure the phone’s orientation. Those sensors enable everything from motion-based games like DoodleJump to cameras’ image stabilization to the phones’ displays toggling between vertical and horizontal orientations. But with a piece of software the researchers built called Gyrophone, they found that the gyroscopes were also sensitive enough to allow them to pick up some sound waves, turning them into crude microphones. And unlike the actual mics built into phones, there’s no way for users of the Android phones they tested to deny an app or website access to those sensors’ data.

“Whenever you grant anyone access to sensors on a device, you’re going to have unintended consequences,” says Dan Boneh, a computer security professor at Stanford. “In this case the unintended consequence is that they can pick up not just phone vibrations, but air vibrations.”

For now, the researchers’ gyroscope snooping trick is more clever than it is practical. It works just well enough to pick up a fraction of the words spoken near a phone. When the researchers tested their gyroscope snooping trick’s ability to pick up the numbers one through ten and the syllable “oh”—a simulation of what might be necessary to steal a credit card number, for instance—it could identify as many as 65 percent of digits spoken in the same room as the device by a single speaker. It could also identify the speaker’s gender with as much as 84 percent certainty. Or it could distinguish between five different speakers in a room with up to 65 percent certainty.

But Boneh argues that more work on speech recognition algorithms could refine the technique into a far more real eavesdropping threat. And he says that a demonstration of even a small amount of audio pickup through the phones’ gyroscopes should serve as a warning to Google to change how easily rogue Android apps could exploit the sensors’ audio sensitivity.

“It’s actually quite dangerous to give direct access to the hardware like this without mitigating it in some way,” says Boneh. “The point is that there’s acoustic information being leaked to the gyroscope. If we spent a year to build optimal speech recognition, we could get a lot better at this. But the point is made.”

Modern smartphones use a kind of gyroscope that consists of a tiny vibrating plate on a chip. When the phone’s orientation changes, that vibrating plate gets pushed around by the Coriolis forces that affect objects in motion when they rotate. (The same effect is why the Earth’s rotation causes the ocean’s water to swirl or air currents to form into spinning hurricanes.)

But the researchers found that the same tiny pressure plates could also pick up the frequency of minute air vibrations. Google’s Android operating system allows movements from the sensors to be read at 200 hertz, or 200 times per second. Since most human voices range from 80 to 250 hertz, the sensor can pick up a significant portion of those voices. Though the result is unintelligible to the human ear, Stanford researcher Yan Michalevsky and Rafael’s Gabi Nakibly built a custom speech recognition program designed to interpret it.
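The arithmetic behind that sampling-rate argument can be sketched directly. A signal sampled at 200 Hz can only represent frequencies up to 100 Hz (the Nyquist limit); voice tones above that fold, or alias, back into the 0–100 Hz band, but the folded tones still carry information a classifier can use. This is an illustrative sketch of the folding, not the researchers' code:

```python
def aliased_freq(f, fs):
    """Frequency (Hz) at which a pure tone of f Hz appears after sampling at fs Hz."""
    return abs(f - fs * round(f / fs))

# Voice fundamentals span roughly 80-250 Hz; Android reads the gyroscope at 200 Hz.
# Tones above the 100 Hz Nyquist limit fold down into the 0-100 Hz band.
for f in (80, 120, 180, 250):
    print(f"{f} Hz tone appears at {aliased_freq(f, 200)} Hz")
```

An 80 Hz tone survives intact, while a 180 Hz tone shows up folded down at 20 Hz, which is why the raw gyroscope stream is unintelligible to a human ear yet still usable by a trained recognizer.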

The results, says Boneh, aren’t anywhere close to the kind of eavesdropping possible from the phone’s microphone–he describes the software in its current state as picking up “a word here and there.” But he says the research is only intended to show the possibility of the spying technique, not to perfect it. “We’re security experts, not speech recognition experts,” Boneh says.

Both iOS and Android devices use gyroscopes that can pick up sound vibrations, Boneh says. And neither requires any apps to seek permissions from users to access those sensors. But iOS limits the reading of the gyroscopes to 100 hertz, which makes audio spying far harder to pull off. Android allows apps to read the sensor’s data at twice that speed. And though Chrome or Safari on Android limit websites to reading the sensor at just 20 hertz, Firefox for Android lets websites access the full 200 hertz frequency. That means Android users visiting a malicious site through Firefox could be subject to silent eavesdropping via JavaScript without even installing any software.

Boneh says that Google has likely been aware of the study: The company’s staffers were included on the Usenix program committee. A Google spokesperson wrote in a statement that “third party research is one of the ways Android is made stronger and more secure. This early, academic work should allow us to provide defenses before there is any likelihood of real exploitation.”

The research isn’t actually the first to find that phones’ gyroscopes and accelerometers pose a privacy risk. In 2011, a group of Georgia Tech researchers found that a smartphone could identify keystrokes on nearby computers based on the movement of the phone’s accelerometers. And in another paper earlier this month, some of the same Stanford and Rafael researchers found that they could read a smartphone’s accelerometers from a website to identify the device’s “fingerprint” out of thousands.

In this case, the researchers say mobile operating system makers like Google could prevent the gyroscope problem by simply limiting the frequency of access to the sensor, as Apple already does. Or if an app really needed to access the gyroscope at high frequencies, it could be forced to ask permission. “There’s no reason a video game needs to access it 200 times a second,” says Boneh.
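A minimal sketch of the rate-limiting fix the researchers propose, assuming a hypothetical OS-level wrapper around the raw sensor read: callers polling faster than the allowed rate simply receive the last cached value, capping the effective sample rate below the audio band no matter how often an app asks.

```python
import time

class ThrottledSensor:
    """Hypothetical OS-side wrapper: callers polling faster than max_hz
    just get the last cached reading, capping the effective sample rate."""

    def __init__(self, read_fn, max_hz):
        self.read_fn = read_fn            # the real hardware read
        self.min_interval = 1.0 / max_hz  # e.g. 1/100 s for a 100 Hz cap
        self.last_time = float("-inf")
        self.cached = None

    def read(self):
        now = time.monotonic()
        if now - self.last_time >= self.min_interval:
            self.cached = self.read_fn()  # fresh reading, at most max_hz times/s
            self.last_time = now
        return self.cached                # otherwise a stale cached value
```

Under a scheme like this a game polling at 200 Hz still gets smooth-enough orientation data, but an eavesdropper sees each hardware sample at most once, which is the spirit of Apple's 100 Hz limit.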

In other words: Don’t worry. With a small Android tweak from Google, it’s possible to keep DoodleJump and your privacy too.
http://www.wired.com/2014/08/gyroscope-listening-hack/





The Biggest iPhone Security Risk Could be Connecting One to a Computer

Design quirks allow malware to be installed on iOS devices and cookies to be plucked from Facebook and Gmail apps
Jeremy Kirk

Apple has done well to insulate its iOS mobile operating system from many security issues, but a forthcoming demonstration shows it's far from perfect.

Next Wednesday at the Usenix Security Symposium in San Diego, researchers with the Georgia Institute of Technology will show how iOS's Achilles' heel is exposed when devices are connected over USB to a computer or have Wi-Fi syncing enabled.

The beauty of their attack is that it doesn't rely on iOS software vulnerabilities, the customary way that hackers commandeer computers. It simply takes advantage of design issues in iOS, working around Apple's layered protections to accomplish a sinister goal.

"We believe that Apple kind of overtrusted the USB connection," said Tielei Wang, a co-author of the study and research scientist at the institute.

Last year, Wang's team developed Jekyll, an iPhone application with well-masked malicious functions that passed Apple's inspection and briefly ended up on its App Store. Wang said although the research was praised, critics contended it might have been hard to get people to download Jekyll amid the thousands of apps in the store.

This time around, Wang said they set out to find a way to infect a large number of iOS devices and one that didn't rely on people downloading their malicious app.

Their attack requires the victim's computer to have malware installed, but there's a thriving community of people known as "botnet herders" who sell access to large networks of compromised computers.

Wang said they conducted their research using iOS devices connected to Windows, since most botnets are on that platform, but their attack methods also apply to OS X.

Apple requires a person to be logged into his account in order to download an application from the App Store. But Wang and the researchers developed a man-in-the-middle attack that can trick an Apple device that's connected to a computer into authorizing the download of an application using someone else's Apple ID.

As long as the application still has Apple's digital signature, it doesn't even need to still be in the App Store and can be supplied from elsewhere.

But Apple is pretty good at not approving malicious applications, so the researchers found another way to load a malicious app that didn't involve the App Store.

Apple issues developer certificates to those who want to do internal distributions of their own applications. Those certificates can be used to self-sign an application and provision it.

Wang's team found they could sneak a developer provisioning file onto an iOS device when it was connected via USB to a computer. A victim doesn't see a warning.

That would allow a self-signed malicious application to be installed. Legitimate applications could also be removed and replaced with look-alike malicious ones.

"The whole process can be done without the user's knowledge," Wang said. "We believe that it is a kind of weakness."

Wang said Apple has acknowledged the team's research, some of which was shared with the company last year, and made some changes. An Apple spokeswoman in Sydney did not have a specific comment on the research.

One of Apple's changes involved displaying a warning when an iOS device is connected to a particular computer for the first time, advising that connections should only be made with trusted computers, Wang said. That advice is only displayed once.

To be sure, Apple has powerful ways to disable such attacks. It can remove applications from the App Store, remotely disable applications on a device and revoke developer certificates. And it's questionable if an attacker would see an economic benefit from infecting large numbers of iOS devices.

But state-sponsored hackers and cyberspies opt for stealthy, targeted attacks aimed at just a few users. This method could be of use if an attacker knows exactly who is using a specific, compromised computer.

They also found another weakness when an iOS device is connected over USB. The host computer has access to a device not only through iTunes but also via a protocol called Apple File Connection, which is used for accessing images or music files.

That protocol has access to files within iOS's application directories, which include secure, "https" cookies, according to their research paper. Cookies are small data files that allow Web services to remember that a person is logged in, among other functions.

Cookies are especially sensitive since they can be used to hijack someone's account. iOS prevents applications from accessing each other's cookies. But it doesn't stop a desktop computer from grabbing that information, Wang said.

The researchers recovered login cookies, including those for Facebook and Google's Gmail. Neither of those companies had a comment.

The best advice is to not connect your phone to a computer, especially if you think the computer might be infected with malware.

"Just avoid that," Wang said.

The study was co-authored by Yeongjin Jang, Yizheng Chen, Simon Chung, Billy Lau and Wenke Lee.
http://www.computerworld.com.au/arti..._one_computer/





Surveillance Court Judge Criticized NSA 'Overcollection' of Data

Decision offers scathing assessment of agency's management of Internet-surveillance program
Devlin Barrett

Newly declassified court documents show one of the National Security Agency's key surveillance programs was plagued by years of "systemic overcollection'' of private Internet communications.

A 117-page decision by Judge John Bates of the Foreign Intelligence Surveillance Court offers a scathing assessment of the NSA's ability to manage its own top-secret electronic surveillance of Internet metadata—a program the NSA scrapped after a 2011 review found it wasn't fulfilling its mission.

The newly declassified documents suggest another possible reason for its demise. The surveillance agency struggled to collect metadata, such as the "to'' and "from'' information of an email, without also collecting other information, such as the contents or partial contents of such communications, information that is supposed to be beyond what it legally is permitted to gather.

Judge Bates' memorandum is heavily redacted, so even the date or year it was written is unclear. In it, he repeatedly criticizes the NSA for "long-standing and pervasive violations of the prior [court] orders in this matter.''

Previously released documents show many of the problems came to light in 2009.

Some of the problems with Internet metadata previously were reported and have been part of a broad critique of the NSA's surveillance activities since the Sept. 11, 2001, terror attacks. The new document from Judge Bates offers the most detailed accounting—even with more than a dozen pages blacked out—of what those problems were.

Among the issues described in the judge's memorandum: a typographical error that would have led to two months of over-collection of data in previous court orders; NSA sharing information with other agencies that failed to limit the use of the data purely to counterterrorism purposes; and disseminating reports with information about legal U.S. residents without getting necessary approval to share that information.

"The most charitable interpretation possible is that the same factors identified by the government [redacted] remained unabated and in full effect: non-communication with the technical personnel directly responsible [redacted] resulting from poor management,'' the judge wrote.

The over-collection of data had occurred continuously since the program was first authorized by the court, and some assurances made by senior NSA officials about the limits placed on the program proved to be untrue, the judge found.

He also concluded that given the frequency with which NSA employees shared information about legal U.S. residents without authorization, "widespread ignorance of the rules'' seemed to have been a problem at the agency.

The judge's order ultimately reauthorized the program, with more stringent conditions than the government had sought.

The Office of the Director of National Intelligence, in a written statement Monday, said the Internet metadata program ended after a 2011 review concluded it "was no longer meeting NSA's operational expectations. Accordingly, after careful deliberation, the government discontinued the program, and the metadata collected pursuant to this program has been purged.''

The statement didn't address the specific criticisms raised in the documents.

The memorandum was one of a number of documents released as part of a declassification review in the wake of revelations by former NSA contractor Edward Snowden, and in response to Freedom of Information Act filings from the Electronic Privacy Information Center, an advocacy group.

Judge Bates has been the designated spokesman for the judiciary opposing several proposed changes to the structure of the Foreign Intelligence Surveillance Court, particularly the addition of a special advocate to represent privacy interests.
http://online.wsj.com/news/article_e...MDEwMzExNDMyWj





Report: British Spy Agency Scanned for Vulnerable Systems in 32 Countries
Mikael Ricknäs

British intelligence agency GCHQ used port scanning as part of the “Hacienda” program to find vulnerable systems it and other agencies could compromise, fully scanning at least 27 countries and partially scanning five more, German news site Heise Online has revealed.

The use of so-called port scanning has long been a trusty tool used by hackers to find systems they can potentially access. In top-secret documents published by Heise on Friday, it is revealed that in 2009, GCHQ started using the technology against entire nations.

One of the documents states that full scans of network ports of 27 countries and partial scans of another five countries had been carried out. Targets included ports using protocols such as SSH (Secure Shell) and SNMP (Simple Network Management Protocol), which are used for remote access and network administration.
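A TCP connect scan of the sort described is straightforward to sketch. The host and port list below are illustrative, and note that SNMP normally runs over UDP, so it would need a different probe than this TCP handshake check:

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Minimal TCP connect scan: a port counts as open if the TCP
    three-way handshake completes within the timeout."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

# e.g. tcp_connect_scan("192.0.2.1", [22, 80, 443])  # hypothetical host
```

Scaling this naive loop to entire national address ranges is mostly an engineering exercise in parallelism, which is what made country-wide scanning feasible for an agency.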

The results were then shared with other spy agencies in the U.S., Canada, the U.K., Australia and New Zealand. “Mailorder” is described in the documents as a secure way for them to exchange collected data.

Gathering the information is only the first step, according to Heise Online.

The documents also reveal “Landmark,” a program started by the Canadian spy agency CSEC to find what it calls ORBs (Operational Relay Boxes), which are used to hide the location of the attacker when it launches exploits against targets or steals data, Heise said. For example, during an exercise in February 2010, eight groups of three “network exploitation analysts” were able to find 3,000 potential ORBs, which could then potentially be used by CSEC.

"Shocking and sickening"

“It isn’t surprising [the intelligence organizations] were technically able to do this ... That they attack people they have no reason to attack and then install malware on their systems to attack even more systems is really shocking and sickening to see. On that I think we can all agree,” said Christian Grothoff, one of the co-authors of the Heise article, in an interview with IDG News Service.

At the Technische Universität München, he has led the development of TCP Stealth, which can help prevent Hacienda and similar tools from identifying systems. The development of TCP Stealth was started during a course on peer-to-peer systems and security that Grothoff taught last year.

TCP Stealth works by adding a passphrase on the user’s device and on the system that needs to be protected.

“For example, if you have remote administration of routers or servers you don’t want that access to be public. You typically have a small group of administrators that are authorized, so between them you share a passphrase and also add it where they want to connect,” Grothoff said.

If the passphrase is incorrect when the connection is started, the system simply doesn’t answer, and the service appears to be dead.
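TCP Stealth itself carries the authorization token inside the TCP handshake (derived from the shared passphrase), so an unauthorized scanner never sees a listening service at all. As a rough application-layer analogy, with a made-up passphrase and banner, a server can check a token derived from the shared secret and stay silent on a mismatch:

```python
import hashlib
import socket

SECRET = b"example-shared-passphrase"  # hypothetical pre-shared secret

def expected_token():
    """8-byte token both sides derive from the shared passphrase."""
    return hashlib.sha256(SECRET).digest()[:8]

def serve_once(srv):
    """Handle one connection: reply only if the first 8 bytes match the
    token; otherwise close silently, so to a scanner the service looks dead."""
    conn, _ = srv.accept()
    with conn:
        if conn.recv(8) == expected_token():
            conn.sendall(b"service: hello\n")
        # wrong token: no banner, no error message -- just a silent close
```

The real mechanism works below the application layer, which is why operating systems and applications need patching to use it, but the observable effect is the same: without the secret, the port scan comes back empty.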

For this to work, operating systems and applications have to be upgraded to support TCP Stealth. Linux has already been patched, and there is a library that application developers can use to add TCP Stealth to their software without having to recompile. TCP Stealth hasn't yet been ported to Windows, Chrome OS or Mac OS.

The hope is now that the technology will be standardized by the IETF (Internet Engineering Task Force). A first draft has already been filed with the organization. It was co-authored by Jacob Appelbaum with the Tor project and edited by Holger Kenn from Microsoft in Germany.

“I think there is a chance we can convince people this is necessary,” Grothoff said.
http://www.pcworld.com/article/24660...r-reveals.html





U.S. Firm Helped the Spyware Industry Build a Potent Digital Weapon for Sale Overseas
Barton Gellman

CloudShield Technologies, a California defense contractor, dispatched a senior engineer to Munich in the early fall of 2009. His instructions were unusually opaque.

As he boarded the flight, the engineer told confidants later, he knew only that he should visit a German national who awaited him with an off-the-books assignment. There would be no written contract, and on no account was the engineer to send reports back to CloudShield headquarters.

His contact, Martin J. Muench, turned out to be a former developer of computer security tools who had long since turned to the darkest side of their profession. Gamma Group, the British conglomerate for which Muench was a managing director, built and sold systems to break into computers, seize control clandestinely, and then copy files, listen to Skype calls, record every keystroke and switch on Web cameras and microphones at will.

According to accounts the engineer gave later and contemporary records obtained by The Washington Post, he soon fell into a shadowy world of lucrative spyware tools for sale to foreign security services, some of them with records of human rights abuse.

Over several months, the engineer adapted Gamma’s digital weapons to run on his company’s specialized, high-speed network hardware. Until then CloudShield had sold its CS-2000 device, a multipurpose network and content processing product, primarily to the Air Force and other Pentagon customers, who used it to manage and defend their networks, not to attack others.

CloudShield’s central role in Gamma’s controversial work — fraught with legal risk under U.S. export restrictions — was first uncovered by Morgan Marquis-Boire, author of a new report released Friday by the Citizen Lab at the University of Toronto’s Munk School of Global Affairs. He shared advance drafts with The Post, which conducted its own month-long investigation.

The prototype that CloudShield built was never brought to market, and the company parted ways with Gamma in 2010. But Marquis-Boire said CloudShield’s work helped pioneer a new generation of “network injection appliances” sold by Gamma and its Italian rival, Hacking Team. Those devices harness malicious software to specialized equipment attached directly to the central switching points of a foreign government’s national Internet grid.

The result: Merely by playing a YouTube video or visiting a Microsoft Live service page, for instance, an unknown number of computers around the world have been implanted with Trojan horses by government security services that siphon their communications and files. Google, which owns YouTube, and Microsoft are racing to close the vulnerability.

Citizen Lab’s new report, based on leaked technical documents, is the first to document that commercial spyware companies are making active use of this technology. Network injection allows products built by Gamma and Hacking Team to insert themselves into an Internet data flow and change it undetectably in transit.

The report calls that “hacking on easy mode,” in which “compromising a target becomes as simple as waiting for the user to view unencrypted content on the Internet.”

Attacks of that kind were the stuff of hacker imaginings until this year, when news accounts based on documents provided by former National Security Agency contractor Edward Snowden described a somewhat similar NSA program code-named QUANTUMINSERT.

“It has been generally assumed that the best funded spy agency in the world would possess advanced capability,” the Citizen Lab report says. “What is perhaps more surprising is that this capability is being developed by Western vendors for sale on the commercial market.”

Hacking Team and the company that now owns CloudShield denied any wrongdoing. Messages left with Gamma went unreturned.

The “custom payload” that Hacking Team uses to compromise YouTube injects malicious code into the video stream when a visitor clicks the play button. The user sees the “cute animal videos” he expects, according to Citizen Lab, but the malicious code exploits a flaw in Adobe’s Flash video player to take control of the computer.

Another attack, custom-built for use on Microsoft pages, uses Oracle’s Java technology, another common browser component, to insert a back door into a victim’s computer.

Security and privacy advocates have identified those vulnerabilities before, but the two companies regarded them as hypothetical. In response to a bug report in September 2012, which warned of a potential YouTube attack, Google’s security team responded that the use of unencrypted links to send video “is expected behavior.” Google closed the discussion with the tag “WontFix.”

‘Against our will’

After Marquis-Boire disclosed to them confidentially last month that their services are under active attack, Google and Microsoft began racing to close security holes in networks used by hundreds of millions of users.

“I want to be sure there’s no technical means for people to take a user’s data against our will,” Eric Grosse, Google’s vice president for security engineering, said in an interview. “If they want to do that, they need to use legal means and we pursue that.”

Google and Microsoft executives said they are accelerating previous plans to encrypt their links to users across a wider range of their services. Encryption scrambles e-mail, stored files, video and other content as it travels from their servers to a user’s computer or mobile device. That step, as far as security engineers know, effectively prevents most attacks in current use.

Since learning of Marquis-Boire’s findings in mid-July, Google has encrypted a “large majority” of YouTube video links, and Microsoft has changed default settings to prevent unencrypted log-ins on most live.com services.

“There’s a lot of products to update so we’re not at 100 percent yet but we’re actively engaged with all the teams,” Grosse said, acknowledging that Google Maps, Google Earth and other services still connect to users in ways that can easily be intercepted.

Grosse said comprehensive use of encryption should now be regarded as a basic responsibility of Internet services to their users.

“We’re probably already [encrypted] to a sufficiently high level that I would guess our adversaries are already having to scramble and shift to some other widely-used service that has not gone to SSL,” he said, referring to a form of encryption called the secure socket layer, which is indicated by a padlock icon on some browsers.

Matt Thomlinson, Microsoft’s vice president of security, said in a statement that his company “would have significant concerns if the allegations of an exploit being deployed are true.”

“We have been rolling out advanced security across our web properties to continue to help protect our customers,” he added.

In computer circles, any unencrypted data is known as “cleartext.” Marquis-Boire, expanding on a theme that other security researchers have emphasized since disclosures of National Security Agency programs began 14 months ago, said “the big take-away is that cleartext is just dead.”

“Unencrypted traffic is untrustworthy,” he said. “I would describe this as a sad reality of today’s Internet. The techno-Utopian, libertarian ideology of the ’90s didn’t foresee that the Internet would be as militarized as it is now. People with authority and power have decided to reserve the right to ‘own’ Internet users at the core. So in order to be safe you need to walk around everywhere wrapped in encryption.”

‘Lawful intercept’

The computer exploitation industry markets itself to foreign government customers in muscular terms. One Gamma brochure made public by WikiLeaks described its malware injection system, called FinFly ISP, as a “strategic, countrywide” solution with nearly unlimited “scalability,” or capacity for expansion. Hacking Team, similarly, says it provides “effective, easy-to-use offensive technology to the worldwide law enforcement and intelligence communities.”

In rare comments to the general public, the companies use the term “lawful intercept” to describe their products and say they do not sell to customers on U.S., European or U.N. black lists.

“Our software is designed to be used and is used to target specific subjects of investigation,” said Eric Rabe, a U.S.-based spokesman for Hacking Team, in an extended e-mail interview. “It is not designed or used to collect data from a general population of a city or nation.”

He declined to discuss details of the Citizen Lab report, which is based in part on internal company documents leaked to Marquis-Boire, but he appeared to acknowledge indirectly that the material was authentic.

“We believe the ongoing Citizen Lab efforts to disclose proprietary Hacking Team information is misguided, because, if successful for Citizen Lab, it not only harms our business but also gives the advantage to criminals and terrorists,” he said.

CloudShield’s founder, Peder Jungck, who oversaw the company’s relationship with Gamma before departing for a job with the British defense giant BAE Systems, did not respond to requests for comment.

Confidants of the CloudShield engineer, who has since left the company after becoming disillusioned with its surveillance work, identified him as Eddy Deegan, a British citizen. Deegan’s LinkedIn profile says he worked for the company as a professional services engineer during the period in question. Reached by telephone in France, Deegan declined to confirm or deny the identity of his external customer in late 2009.

“Nothing came of the work I was involved in at the time,” he said. “I asked, and was assured that nothing illegal was undertaken. I have no further comment.”

U.S. export restrictions, enforced by the Commerce Department, require a license for any foreign sale of technology described in the relevant statute as “primarily useful for the purpose of the surreptitious interception of wire, oral, or electronic communications.”

Jennifer Gephart, the media relations director for Leidos, which now owns CloudShield, declined to say whether the company had applied for an export license for the Gamma project. The transactions in question took place “prior to our company’s acquisition of CloudShield,” she said, but “to our knowledge” they were “handled in accordance with applicable regulations.”

Gephart confined her statement to the sale of CloudShield’s CS-2000 hardware. When asked about the company’s development of custom software to turn the device into a spyware delivery system, she declined to respond.

Robert Clifton Burns, who specializes in export controls at the law firm Bryan Cave, said that “surreptitious listening devices are covered and the software for that is also covered on the Commerce Control List.”

The regulations are complex and inconsistent, he said, and an authoritative legal judgment would require more facts. CloudShield might argue, he said, that malware injection is not “primarily useful” for surreptitious eavesdropping because it can also be used to track a target’s location, take photographs or steal electronic files. Although more intrusive, those attacks were not covered under the rules that applied in 2009.

The Gamma Group lists no e-mail address or telephone number on its Web site. No one responded to a lengthy note left on the company’s “Contact” page.

Muench, who has left his old job for a new position in France, read a LinkedIn message requesting an interview. He did not respond. In the past he has dismissed human rights concerns as unproven and defended Gamma’s products as vital for saving innocent lives. “The most frequent fields of use are against pedophiles, terrorists, organized crime, kidnapping and human trafficking,” he told the New York Times two years ago.

Security researchers have documented clandestine sales of Gamma and Hacking Team products to “some of the world’s most notorious abusers of human rights,” a list that includes Turkmenistan, Egypt, Bahrain and Ethiopia, said Ron Deibert, the director of Citizen Lab.

At CloudShield, executives knew the identity of at least one prospective customer for the system Deegan built. A former manager told The Post, with support from records obtained elsewhere, that CloudShield sent Deegan to Oman to plan a deployment for one of the country’s internal security services. The sale did not go through.

In its annual assessment of human rights that year, the State Department reported that Oman “monitored private communications” without legal process in order to “suppress criticism of government figures and politically objectionable views.”

‘A push market’

CloudShield did not see itself as a cloak-and-dagger company. It made its name with high-end hardware that could peer deeply into Internet traffic, pulling out and analyzing “packets” of data as they flew by.

The flagship product five years ago, the CS-2000, could not only look inside the data flow but also select parts of it to copy or reroute. That made it a good tool for filtering out unwanted data or blocking certain forms of cyberattack.

But hardware that could block data selectively could also rewrite innocent traffic to include malicious code. That meant the CloudShield product could be used for attack as well as defense, a former executive said.
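That dual-use point can be illustrated abstractly. The sketch below is hypothetical — the signatures, payloads and function names are invented, not CloudShield's actual API — and shows only why a device positioned to drop packets is equally positioned to rewrite them:

```python
from typing import Optional

# Hypothetical inline-inspection hooks. Both see every packet in flight;
# the difference between defense and attack is only what they return.

def defensive_filter(payload: bytes) -> Optional[bytes]:
    """Defense: drop packets matching a known-bad signature."""
    if b"EVIL_SIGNATURE" in payload:
        return None                      # block the packet
    return payload                       # forward unchanged

def offensive_rewrite(payload: bytes) -> bytes:
    """Attack: the same vantage point can alter innocent traffic."""
    if payload.startswith(b"HTTP/1.1 200 OK"):
        return payload + b"<!-- injected content -->"
    return payload

clean = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
assert defensive_filter(b"..EVIL_SIGNATURE..") is None
assert defensive_filter(clean) == clean
assert offensive_rewrite(clean).endswith(b"<!-- injected content -->")
```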

CloudShield began pitching its product for offensive use, focusing on U.S. customers because of export controls.

“The basic motivations are pretty straightforward,” said one former senior manager there. “It was a push market. We were trying to sell boxes. It was a very conscious effort to target lawful intercept as a space where you could legitimately apply these kinds of technologies.”

Two former employees said that Muench, the Gamma executive, traveled to Sunnyvale in 2009 in hopes of striking a business relationship. Jungck, CloudShield’s founder and chief technology officer, said he could not export that kind of technology and sent Muench home.

But the leadership team reconsidered, and hit upon a plan. They believed that Deegan could do the work for Gamma without triggering U.S. export controls as long as CloudShield’s U.S. operations had nothing to do with it.

“I think we all had qualms in the beginning,” said one former executive who took part in the deliberations. “I think we rationalized a way in which we felt comfortable with it. Part of that rationalization was to keep it outside the U.S., limit it to that environment where that project was.”

What first appeared as an absorbing technical challenge for Deegan began to take a darker cast. His prototype system could inject any of “254 trojans,” or all of them, into a targeted computer. If it failed once, it would keep trying, up to 65,000 times.

He was proud of his technical accomplishments, he told confidants, but was no longer sure he had done the right thing. After meeting prospective customers in Oman, his qualms grew worse.

In the end, the Oman deal fell through, and other efforts, with other partners, failed, too. CloudShield and Gamma parted ways, and Gamma found another hardware supplier. Deegan’s prototype, according to Marquis-Boire and a CloudShield insider, may have sped development of the flagship surveillance product that Gamma brought to market the following year.

Julie Tate contributed to this report.
http://www.washingtonpost.com/world/...390_story.html





Security Experts Call for Government Action Against Cyber Threats
Joseph Menn

Alarmed by mounting cyber threats around the world and across industries, a growing number of security experts see aggressive government action as the best hope for averting disaster.

Even though some experts are outraged by the extent of U.S. Internet spying exposed by former NSA contractor Edward Snowden, they are even more concerned about technologically sophisticated enemies using malware to sabotage utilities, wipe out data stored on computer drives, and steal defense and trade secrets.

Such fears and proposals on new laws and executive action to counter these threats were core topics this week in Las Vegas at Black Hat and Def Con, two of the world's largest gatherings for security professionals and hackers.

At Black Hat, the keynote speech by respected researcher Dan Geer went straight for national and global policy issues. He said the U.S. government should require detailed reporting on major cyber breaches, in the same way that deadly diseases must be reported to the Centers for Disease Control and Prevention.

Critical industries should be subjected to "stress tests" like the banks, Geer said, so regulators can see if they can survive without the Internet or with compromised equipment.

Geer also called for exposing software vendors to product liability suits if they do not share their source code with customers and bugs in their programs lead to significant losses from intrusion or sabotage.

"Either software houses deliver quality and back it up with product liability, or they will have to let their users protect themselves," said Geer, who works for In-Q-Tel, a venture capital firm serving U.S. intelligence agencies. Geer said he was speaking on his own behalf.

"The current situation - users can't see whether they need to protect themselves and have no recourse to being unprotected - cannot go on," he said.

Several of Geer's proposals are highly ambitious given the domestic political stalemate and the opposition of major businesses and political donors to new regulation, Black Hat attendees said. In an interview, Geer said he had seen no encouraging signs from the White House or members of Congress.

But he said the alternative would be waiting until a "major event" that he hoped would not be catastrophic.

Chris Inglis, who retired this year as deputy director of the National Security Agency, said disaster could be creeping instead of sudden, as broad swaths of data become unreliable.

In an interview, he said some of Geer's ideas, including product liability, deserved broader discussion.

"Doing nothing at all is a worse answer," said Inglis, who now advises security firm Securonix.

SOFTWARE FLAWS

Some said more disclosures about cyber attacks could allow insurance companies to set reasonable prices. The cost of cyber insurance varies, but $1 million in yearly protection might cost $25,000, experts say.

High-profile data breaches, such as at Target Corp and eBay Inc, have spurred demand for cyber insurance, but the insurers say they need more data to determine how common and how severe the intrusions are.

The ideas presented by Geer and other speakers would not give the government more control of the Internet itself. In that area, security professionals said they support technology companies' efforts to fight surveillance and protect users with better encryption.

Instead, the speakers addressed problems such as the pervasive number of severe flaws in software, which allow hackers to break in, seemingly at will.

Geer said the United States should try to corner the market for software flaws and outspend other countries to stop the cyber arms race. The government should then work to fix the flaws instead of hoarding them for offense, he said.

Black Hat founder Jeff Moss said he was reminded of the importance of data security while advising a government agency that had no way to tell which of its millions of records were accurate and which had been tampered with.

In the security industry, Moss said, "we're so day-to-day that we forget we're a piece of a bigger system, and that system is on the edge of breaking down."

Dire projections have led some professionals to despair, but others say the fact that their concerns are finally being shared by political leaders gives them hope.

Alex Stamos, who joined Yahoo Inc earlier this year as chief information security officer, said the Internet could become either a permanent tool of oppression or a democratizing force, depending on policy changes and technology improvements.

"It's a great time to be in the security industry," Stamos said. "Now is the time."

(Reporting by Joseph Menn; Editing by Tiffany Wu)
http://www.reuters.com/article/2014/...0G90MC20140809





Media Companies Spin Off Newspapers, to Uncertain Futures
David Carr

A year ago last week, it seemed as if print newspapers might be on the verge of a comeback, or at least on the brink of, well, survival.

Jeff Bezos, an avatar of digital innovation as the founder of Amazon, came out of nowhere and plunked down $250 million for The Washington Post. His vote of confidence in the future of print and serious news was seen by some — including me — as a sign that an era of “optimism or potential” for the industry was getting underway.

Turns out, not so much — quite the opposite, really. The Washington Post seems fine, but recently, in just over a week, three of the biggest players in American newspapers — Gannett, Tribune Company and E. W. Scripps, companies built on print franchises that expanded into television — dumped those properties like yesterday’s news in a series of spinoffs.

The recent flurry of divestitures scanned as one of those movies about global warming where icebergs calve huge chunks into churning waters.

The persistent financial demands of Wall Street have trumped the informational needs of Main Street. For decades, investors wanted newspaper companies to become bigger and diversify, so they bought more newspapers and developed television divisions. Now print is too much of a drag on earnings, so media companies are dividing back up and print is being kicked to the curb.

Setting aside the brave rhetoric — as one should — about the opportunity for a “renewed focus on print,” those stand-alone print companies are sailing into very tall waves. Even strong national newspapers like The Wall Street Journal and The New York Times are struggling to meet Wall Street’s demands for growth; the regional newspapers that make up most of the now-independent publishing divisions have a much grimmer outlook.

As it turns out, the journalism moment we are living in is more about running for your life than it is about optimism. Newspapers continue to generate cash and solid earnings, but those results are not enough to satisfy investors.

Even the most robust evangelism is belied by the current data. Robert Thomson, chief executive of News Corporation, espoused the “power of print” on Thursday even as he announced that advertising revenue at the company plunged 9 percent in the most recent quarter.

And remember that it was Mr. Thomson’s boss, Rupert Murdoch, who started the wave of print divestitures when his company divorced its newspapers last year, although it did pay out $2 billion in alimony, which gave the publications, including The Journal, a bit of a cash cushion. (News Corporation’s tepid earnings report came two days after Mr. Murdoch, who has swashed more buckles and cut more deals than almost anyone, was forced by the market to let go of his latest prey, Time Warner.)

The people at the magazine business Time Inc. were not so lucky, burdened with $1.3 billion in debt when Time Warner threw them from the boat. Swim for your life, executives at the company seemed to be saying, and by the by, here’s an anchor to help you on your way.

E. W. Scripps and Journal Communications put a twist on the situation just over a week ago by marrying, then promptly orphaning the print assets that each company owned. On Tuesday, when the embattled, post-bankruptcy Tribune Company officially introduced a separate publishing division so that it could concentrate on television, it handed the new company $350 million in debt as a parting gift.

Many industry observers sucked in their breath and wondered what Gannett, the last big operation featuring both newspapers and television, would do. We didn’t have to wait long. On Tuesday, Gannett said its print division would go it alone.

No debt was built into the arrangement, but the broadcast division hung onto two lucrative digital sites, CareerBuilder.com and Cars.com.

In the main, it’s been like one big, long episode of “Divorce Court,” with various petitioners showing up and citing irreconcilable differences with their print partners. It’s not that television is such a spectacular business — there are plenty of challenges on that front — but newspapers and magazines are clearly going to be smaller, less ambitious businesses and journalistic enterprises regardless of how carefully they are operated.

Even if the writing has been on the wall for some time, let’s play a bit of sad trombone for the loss of reporting horsepower that will accompany the spinoffs.

Newspapers will be working without a net as undiversified pure-play print companies. Most are being cut loose after all the low-hanging fruit, like valuable digital properties, have been plucked. Many newspapers have sold their real estate, where much of their remaining value was stored.

More ominous, most of the print and magazine assets have already been cut to the bone in terms of staffing. Reducing costs has been the only reliable source of profits as overall revenue has declined. Not much is left to trim.

The Tribune Company, which is to be run by Jack Griffin, who had a short, unsuccessful stint overseeing the pre-spin Time Inc., has cut $250 million in costs since 2011, according to the media analyst Ken Doctor. Businesses have to either peddle a growth story — a tough sell for any print enterprise — or produce a reliable cash flow that reaps dividends for shareholders. Gannett has eliminated more than a third of its employees since 2005.

And with ads declining at a steep rate, newspapers (and magazines) are trying to turn toward readers for digital revenue at the same time that they have denuded their products of much of their value. It’s a little like trashing a house by burning all the furniture to stay warm and then inviting people in to see if they want to buy the joint.

At Gannett newspapers, reader metrics will drive coverage and journalists will work with dashboards of data to guide reporting. After years of layoffs, many staff members were immediately told that they had to reapply for jobs when the split was announced. In an attempt to put some lipstick on an ugly pivot, Stefanie Murray, executive editor of The Tennessean, promised readers “an ambitious project to create the newsroom of the future, right here in Nashville. We are testing an exciting new structure that is geared toward building a dynamic, responsive newsroom.” (Jim Romenesko, who blogs about the media industry, pointed out that Gannett also announced “the newsroom of the future” in 2006.)

The Nashville Scene noted that readers had to wait only one day to find out what the news of the future looks like: a Page 1 article in The Tennessean about Kroger, a grocery store and a major advertiser, lowering its prices.

If this is the future — attention news shoppers, Hormel Chili is on sale in Aisle 5 — what is underway may be a kind of mercy killing.

So whose fault is it? No one’s. Nothing is wrong in a fundamental sense: A free-market economy is moving to reallocate capital to its more productive uses, which happens all the time. Ask Kodak. Or Blockbuster. Or the makers of personal computers. Just because the product being manufactured is news in print does not make it sacrosanct or immune to the natural order.

It’s a measure of the basic problem that many people haven’t cared or noticed as their hometown newspapers have reduced staffing, days of circulation, delivery and coverage.

Will they notice or care when those newspapers go away altogether? I’m not optimistic about that.
http://www.nytimes.com/2014/08/11/bu...n-futures.html





Comcast, Time Warner Cable Help Honor FCC’s Mignon Clyburn Amid Merger Review
Alex Byers

Comcast and Time Warner Cable are sponsoring a dinner honoring FCC Commissioner Mignon Clyburn at a time when the agency is weighing whether to approve a multibillion-dollar merger between the two companies.

Comcast will pay $110,000 to be a top-level “presenting sponsor” at the Walter Kaitz Foundation’s annual dinner in September, at which Clyburn is receiving the “diversity advocate” award, according to a foundation spokeswoman. Time Warner Cable paid $22,000 in May to the foundation for the same event, according to a Senate lobbying disclosure filed at the end of last month. The foundation supports diversity in the cable industry.

There are no rules preventing businesses from helping to honor regulators in this way, and both companies say they have supported the foundation for years.

But one watchdog is pointing out the appearance of a conflict.

“I think that the timing is curious,” said Carrie Levine, research director at Citizens for Responsibility and Ethics in Washington, which noted the corporate sponsorships in a blog post Monday. “They’re honoring an FCC commissioner at the exact same time they’re trying to get approval for a merger. And that doesn’t look so good.”

The contributions come as FCC and Justice Department officials review the $45 billion megadeal, which would give Comcast control of about 30 percent of U.S. pay-TV subscribers and about 40 percent of the country’s broadband market. The two firms are pitching the deal as a way to increase investment in cable and Internet technology, but public interest groups oppose the deal because they say the combined company will have too much control over the market.

Clyburn, a Democrat and former acting chairwoman of the FCC, is known as a major advocate for media industry diversity. Her office declined to comment on the Comcast and Time Warner Cable sponsorships of the foundation dinner.

Time Warner Cable’s contribution to the dinner is dated May 14, according to the company’s disclosure. The Comcast-Time Warner Cable merger was announced in February.

Comcast spokeswoman Sena Fitzmaurice said the company has supported the foundation for decades and said Clyburn’s role as an awardee has no bearing on its sponsorship.

“We absolutely dispute the notion that our contributions have anything to do with currying favor with Commissioner Clyburn or any honoree,” she said in a statement. “Such claims are insulting and not supported by any evidence. They are purely fiction. We have supported the organization year in and year out regardless of who the dinner honorees have been.”

Comcast has given similar amounts to the foundation’s annual dinner in recent years, according to figures provided by the company.

A Time Warner Cable spokesman also said the company has consistently donated to the foundation and said the firm was not concerned about the appearance of sponsoring a dinner honoring one of the regulators who oversees it.

“The [foundation] is the centerpiece of this industry’s efforts to not just recruit but to advance and train people of multi-ethnicity,” said spokesman Bobby Amirshahi. “The reality is the honoree was not a consideration for us as one of many companies that supported [the dinner.]”

This year, however, is the first time that a sitting FCC commissioner has been honored, according to the list of past award recipients on the foundation’s website. The foundation was launched in 1981.

The honorees are chosen by the foundation’s dinner committee, foundation spokeswoman Joy Sims said. That panel includes Comcast CEO Brian Roberts, as well as several other telecom executives. Companies including Cox Communications, Univision and Time Warner (which is separate from Time Warner Cable) are also sponsoring the dinner this year.

Comcast and Time Warner Cable, like the rest of the telecom industry, have robust lobbying operations in Washington and are working hard to win approval for their proposed merger.

Comcast itself, as well as media firms like Discovery and ESPN, have been honored at the dinner in the past. Tom Wheeler, the current chairman of the FCC, was honored at the inaugural dinner in 1984, though at the time he was chairman of the National Cable and Telecommunications Association and did not serve in government.
http://www.politico.com/story/2014/0...rn-109925.html





Neustar, Telcordia Battle Over FCC Contract to Play Traffic Cop for Phone Calls, Texts
Ellen Nakashima

Influential lawmakers are urging the Federal Communications Commission not to ignore national security as it prepares to choose a company to play the critical role of traffic cop for virtually every phone call and text message in North America.

At issue is the security of the most significant cog in the telecommunications network that most Americans have never heard of.

The Number Portability Administration Center, or NPAC, handles the routing of all calls and texts for more than 650 million U.S. and Canadian phone numbers for more than 2,000 carriers. If numbers are scrambled or erased, havoc could ensue. The FBI and other law enforcement agencies query the database every day, or 4 million times a year, in the course of criminal and intelligence investigations to determine which phone company provides the service for a particular number.
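Conceptually, NPAC is a lookup table mapping phone numbers to their current carriers of record. A toy sketch with invented data — real NPAC access is far more controlled than this — shows both the routine query and why tampering would be disruptive:

```python
# Toy model of a number-portability database. All numbers and carrier
# names are invented for illustration.
npac = {
    "+12025550101": "Carrier A",   # ported away from its original carrier
    "+12025550102": "Carrier B",
}

def serving_carrier(number: str) -> str:
    """The query carriers make to route a call, and the one law
    enforcement makes before directing a wiretap order."""
    return npac.get(number, "unknown")

assert serving_carrier("+12025550101") == "Carrier A"

# If an intruder scrambles entries, calls and wiretap orders silently go
# to the wrong carrier -- the havoc scenario described above.
npac["+12025550101"] = "Carrier B"
assert serving_carrier("+12025550101") != "Carrier A"
```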

A major concern, national security experts say, is that a foreign government intent on learning which of its agents the United States has under surveillance might hack into the database to see what numbers the FBI or another security agency has wiretaps on.

Since 1997, a Sterling, Va., firm, Neustar, has held the exclusive contract to run that system, which was established to let customers change their carrier but keep their number.

Now, however, a rival firm owned by Sweden-based Ericsson AB is poised to win the lucrative contract — which last year brought in $437 million, or nearly half of Neustar’s 2013 revenue of $902 million. The firm, Telcordia Technologies, put in a bid substantially lower than Neustar’s, and an FCC advisory panel has recommended that the commission pick Telcordia.

In a letter sent Thursday to the FCC chairman, Rep. Mike Rogers (R-Mich.) and Rep. C.A. Dutch Ruppersberger (D-Md.), the chairman and ranking Democrat of the House Intelligence Committee, urged the commission to consult the FBI and other security agencies before picking a firm.

The lawmakers say they are concerned that the selection process “will not adequately address the inherent national security issues involved in this database.” Rogers and Ruppersberger, who said they are neutral on which company should win, urged the FCC to include security requirements in the award process.

Rep. Peter T. King (R-N.Y.) sent a similar letter to commission Chairman Tom Wheeler on July 30, raising concerns about “any security vulnerabilities associated with a non-US vendor.”

The FCC declined to comment.

Neustar officials noted that Telcordia runs number portability systems in more than 15 countries, including India, Pakistan and Saudi Arabia. They expressed concern that Telcordia will reuse computer code from those overseas systems to run the U.S. database. Neustar senior technologist Rodney Joffe said that poses a security risk: a hacker who finds an exploitable flaw in one of the foreign systems could use it to penetrate the U.S. database.

But officials at Telcordia, which grew out of Bell Labs, the research division of American Telephone & Telegraph, said the software code used for the system will be entirely domestic. “We are not using any of the code used and deployed in foreign installations at all, zero,” said Chris Drake, chief technology officer at iconectiv, the Telcordia unit that handles number portability systems.

He said the firm began several years ago to work on the project in anticipation of winning the contract. He said it includes “state-of-the-art” cybersecurity protections, which he declined to elaborate on because of what he said were national security concerns.

He said Telcordia is willing to meet any requirements imposed by the FCC or law enforcement agencies.

Wiretap-related data will be held in “a separate infrastructure — a shadow database — that’s even more tightly controlled than the NPAC itself,” Drake said. He said wiretap data are encrypted and that no records will be kept of the numbers queried by law enforcement. “That’s really the most important point: Law enforcement components are dealt with in a special way,” he said.

Neustar said a major issue is that the bid specifications, written by an industry consortium, are deficient. They lack the general security and national security requirements that Neustar has built into its system over the years through its work with law enforcement and with officials handling national emergencies such as Hurricane Katrina, Joffe said.

They do not specify, for instance, that sensitive code be written only by U.S. citizens, which is a common requirement for federal contracts that affect national security.

Steven M. Bellovin, a Columbia University computer scientist who used to work at Bell Labs, said Neustar’s concerns are valid but there are measures that can be taken to alleviate them. He pointed, as an example, to the takeover of Sprint Nextel by Japan’s SoftBank Corp. The two firms agreed to appoint a new Sprint board member, to be approved by the federal government, to oversee national security compliance.

“The real issue is access to the network,” he said. “If they’re doing it right, that shouldn’t be an issue.”

Lisa Hook, Neustar president and chief executive, noted that Neustar, a Lockheed Martin spin-off, has run the system for 17 years without incident. “Even if the FCC is going to award the contract to someone else, it really needs to step back and get a security plan in place,” she said. “Understand the supply chain. Make sure all the infrastructure and code is maintained in the United States. All we’re saying is, ‘There are real national security issues here.’ ”

The FCC on Friday extended the public comment period on the Telcordia recommendation to Aug. 22.
http://www.washingtonpost.com/world/...f8a_story.html





The Cable Guys Have Become the Internet Guys
Peter Kafka

The cable TV business hit an important milestone last month: It turned into the Internet business.

Last quarter, for the first time ever, the biggest cable TV providers started selling more broadband subscriptions than video subscriptions, according to a new tally from Leichtman Research Group.

Not by much. The top cable guys now have 49,915,000 Internet subscribers, compared to 49,910,000 TV subscribers. And to be sure, most cable customers are getting both services.

Still, this is directionally important. The future for the pay TV guys isn’t selling you pay TV — it’s selling you access to data pipes, and pay TV will be one of the things you use those pipes for.*

This is also what animates many critics of the proposed Comcast-Time Warner Cable deal: Not that the combined company will be dominant in cable TV — with 30 percent of the market — but that it will be even more dominant when it comes to broadband — with 40 percent of the market. (UPDATE: Since June, Comcast has been arguing it would only control 35 percent of the market if the Time Warner Cable deal closes).

* Some smart people suggest that the cable guys would not be unhappy if most of their business moved over to broadband instead of video, since there are much better margins — and almost no competition — for broadband.
http://recode.net/2014/08/15/the-cab...internet-guys/





Adtran Lays Groundwork for Superfast Broadband Over Copper

The first commercial networks are expected next year
Mikael Ricknäs

Telecom equipment vendor Adtran has developed a technology that will make it easier for operators to roll out broadband speeds close to 500Mbps over copper lines.

The conventional wisdom is that copper is dying out and fiber is ascending. However, the cost of rolling out fiber is still too high for many operators, which instead want to upgrade their existing copper networks (and in some cases fiber simply can't be installed). So there is still a need for technologies that can make use of copper networks and complement fiber.

Adtran has developed what it calls FDV (Frequency Division Vectoring), which enhances the capabilities of two of these technologies -- VDSL2 with vectoring and G.fast -- by enabling them to better coexist over a single subscriber line, the company said.

VDSL2 with vectoring, which improves speeds by reducing noise and can deliver up to 150Mbps, is currently being rolled out by operators, while G.fast, which is capable of 500Mbps, is still under development.

The higher speeds are needed for applications such as 4K video streaming, IPTV, cloud-based storage and communication via HD video.

FDV will make it easier for operators to roll out G.fast once it's ready and expand where it can be used, according to Adtran.

The first G.fast deployments will happen in the middle of 2015, a spokeswoman for Adtran said via email. The underlying standard is expected to be adopted by the end of the year. Once that happens, chip makers and equipment makers like Adtran can develop products for commercial deployments, she said.

The technology increases bandwidth by using more spectrum: G.fast will use 106MHz, compared with the 17MHz or 30MHz used by VDSL2.
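The spectrum figures line up roughly with what the Shannon-Hartley theorem predicts: capacity scales with bandwidth for a given signal-to-noise ratio. A back-of-the-envelope sketch (the SNR values here are illustrative assumptions for a short copper loop, not measured line data):

```python
import math

def shannon_capacity_mbps(bandwidth_hz, snr_db):
    """Shannon-Hartley upper bound: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# Illustrative average SNRs (assumed, not measured) over a short loop.
for name, bw_mhz, snr_db in [("VDSL2 (17 MHz)", 17, 25),
                             ("G.fast (106 MHz)", 106, 15)]:
    mbps = shannon_capacity_mbps(bw_mhz * 1e6, snr_db)
    print(f"{name}: ~{mbps:.0f} Mbps theoretical ceiling")
```

With those assumed SNRs the 17MHz band tops out near VDSL2's 150Mbps figure, while 106MHz lands near G.fast's 500Mbps, which is why the wider band is the key to the speed jump.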

The development of G.fast is currently at a point where vendors are trying to show they are the best alternative for future upgrades. Recently, rival Alcatel-Lucent demonstrated a prototype technology called XG-Fast, which it said is capable of 1Gbps for both upload and download, and of 10Gbps downstream when using two copper pairs.
http://www.itworld.com/networking/43...nd-over-copper





Internet Touches Half Million Routes: Outages Possible Next Week
Jim Cowie

There was minor consternation in Internet engineering circles today, as the number of IPv4 networks worldwide briefly touched another magic "power of 2" size limit. As it turns out, 512K (524,288 to be exact, or 2-to-the-19th power) is the maximum number of routes supported by the default TCAM configuration on certain aging hardware platforms.
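The magic number is simply the size of the default TCAM partition, and what happens past it depends on the platform; on some gear, excess routes reportedly fall back to slow-path CPU forwarding. A toy model of that overflow behavior (the fallback semantics here are an assumption for illustration, not any vendor's documented behavior):

```python
TCAM_IPV4_SLOTS = 2 ** 19  # 524,288 -- the default IPv4 partition on the affected gear

class FibTcam:
    """Toy model of a fixed-size hardware forwarding table."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.hardware = 0   # routes programmed in TCAM (fast path)
        self.software = 0   # overflow handled by the CPU (slow path)

    def install(self, n_routes):
        fit = max(0, min(n_routes, self.capacity - self.hardware))
        self.hardware += fit
        self.software += n_routes - fit

fib = FibTcam(TCAM_IPV4_SLOTS)
fib.install(502_000)   # today's consensus table size: fits comfortably
fib.install(25_000)    # a deaggregation event pushes past the 512K wall
print(fib.hardware, fib.software)  # 524288 2712
```

The point of the sketch is that nothing happens at 502,000 routes; the pain starts only once the table crosses 524,288, which is why the event unfolds gradually as different providers' views of the table cross the line.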

The problem is real, and we still haven’t seen the full effects, because most of the Internet hasn’t yet experienced the conditions that could cause problems for underprovisioned equipment. Everyone on the Internet has a slightly different idea of how big the global routing table is, thanks to slightly different local business rules about peering and aggregation (the merging of very similar routes to close-by parts of the Internet address space). Everyone has a slightly different perspective, but the consensus estimate is indeed just under 512K, and marching higher with time.

The real test will start later this week, when large providers commonly believe that the Internet contains 512K routes and pass that along to all their customers as a consensus representation of Internet structure; the effects will be felt nearly everywhere by the end of next week.

Enterprises that rely on the Internet for delivery of service should pay close attention to the latency and reachability of the paths to customers in the coming weeks, in order to identify affected service providers upstream and work around them while they perform appropriate upgrades to their infrastructure.

Here’s a plot of monthly routing table sizes from our peers, over the last several years. Note that there’s no good exact opinion about the One True Size of the Internet — every provider we talk to has a slightly different guess. The peak of the distribution today (the consensus) is actually only about 502,000 routes, but recognizably valid answers can range from 497,000 to 511,000, and a few have straggled across the 512,000 line already. The number varies from minute to minute as well, and this close to 512K, any minor event, such as a deaggregation by a large provider (fragmenting a network route into smaller ones for traffic engineering purposes) could push the global collective past the critical point.

Putting This Event In Perspective: Don’t Panic

It’s important to put this all in proper perspective (and yes, friends from the media who cover Internet infrastructure issues, I’m especially hoping you read down to this paragraph).

This situation is more of an annoyance than a real Internet-wide threat. Most routers in use today at midsize to large service providers, and certainly all of the routers that operate the core infrastructure of the Internet, have plenty of room to deal with the Internet’s current span, because they were provisioned that way by sensible network operators.

Affected boxes cause local connectivity problems for the network service providers who still run them, so they will be identified quickly and upgraded as we pass the threshold. Their instability in turn causes some minor additional load on adjacent routers.

But the overall stability of the global routing system should be unaffected. In terms of a threat, this isn’t nearly in the same class as some poison-message scenarios we’ve described before, which combine router failure with contagion dynamics.

Origins Of The Problem

This has been coming for some time. The Internet keeps growing, which is what it does best. There’s very little indication that the current shortage of IPv4 space has done anything to dissuade new autonomous systems (enterprises, universities, service providers, etc.) from connecting to the Internet and expecting to route some space of their own.

Ironically, exhaustion may be speeding up the growth, as enterprises and service providers learn to use tricks like carrier-grade NAT to get their jobs done in tinier and tinier fragments of the remaining IPv4 space.
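Carrier-grade NAT stretches scarce IPv4 space by giving each subscriber a slice of one public address's port range rather than a whole address. A simplified sketch of the idea (the address, block size, and port-block scheme are illustrative assumptions; real deployments vary):

```python
# Toy carrier-grade NAT: many subscribers share one public IPv4 address,
# each assigned a disjoint block of source ports for its translated flows.
PUBLIC_IP = "203.0.113.5"            # documentation address, stands in for a real one
PORTS_PER_SUBSCRIBER = 2048          # assumed per-subscriber block size
FIRST_PORT, LAST_PORT = 1024, 65535  # usable port range

def port_block(subscriber_index):
    """Return the (start, end) port range assigned to one subscriber."""
    start = FIRST_PORT + subscriber_index * PORTS_PER_SUBSCRIBER
    end = start + PORTS_PER_SUBSCRIBER - 1
    if end > LAST_PORT:
        raise ValueError("public address exhausted, need another IP")
    return (start, end)

subscribers_per_ip = (LAST_PORT - FIRST_PORT + 1) // PORTS_PER_SUBSCRIBER
print(f"{subscribers_per_ip} subscribers share {PUBLIC_IP}")  # 31 subscribers share one IP
```

Dozens of subscribers per public address is exactly the kind of trick that lets providers keep growing on tiny fragments of the remaining IPv4 space, and each of those fragments still needs its own entry in everyone's routing table.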

The routing table in every border router on Earth has to carry a route to each and every one of those tiny fragments, as free addressable space gets tighter and tighter. And every IPv4 route takes basically the same amount of memory in the router, whether it’s an enormous university-sized block of 64K IP addresses, or a little taste of 256 IP addresses (the smallest generally routable block). That relentless pressure has pushed the distribution of global routing table sizes up and up, as more and more people join the Internet, and find themselves fighting over smaller and smaller crumbs of IPv4 space. And that means that 512K is right around the corner for everyone on Earth, as early as next week.

Here’s a plot of the distribution of routing table size, marching forward, from May 2014 (red) through July 2014 (purple) and up to today (blue). This wave only propagates one way. Someday, sooner than you think, we’ll be facing the 1024K routing table challenge.
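The point that every route costs the same is easy to see in miniature: a forwarding table stores one entry per announced prefix, no matter how many addresses the prefix covers. A quick demonstration with Python's standard-library ipaddress module (the prefixes and next-hop name are just examples):

```python
import ipaddress

# One table entry per announced prefix, regardless of how many hosts it covers.
fib = {}
for prefix in ["128.32.0.0/16",    # a university-sized block: 65,536 addresses
               "192.0.2.0/24"]:    # the smallest generally routable block: 256
    net = ipaddress.ip_network(prefix)
    fib[net] = "next-hop-A"        # each prefix costs exactly one slot
    print(f"{net}: {net.num_addresses} addresses, 1 FIB entry")

print(len(fib))  # prints 2: one slot each, big block or tiny crumb
```

A /16 holding 65,536 addresses and a /24 holding 256 consume identical space, which is why a table full of tiny deaggregated crumbs fills TCAM just as fast as one full of large allocations.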

The Good News

So far, as the first providers cross the 512K line, we’re not seeing real, serious evidence of increased Internet instability, at least not at the levels that would affect enterprises and service providers worldwide in meaningful ways. Some people who are downstream of affected equipment may be noticing early problems, if they find themselves learning 512K routes today thanks to a deaggregation event that injects thousands of transient routes.

Here we can see the percentage of the Internet that’s affected by routing instability on a daily basis, the kind of flickering change that we’d expect to see if routers everywhere were rebooting. Typically it’s 3 to 7 percent, and it follows cycles on a human timescale: less on weekends, when networking professionals leave the Internet alone, and less during the December holidays. We see some increase in 2014, but in recent months and days, no clear trend higher in instability.

What Comes Next

This event won’t be over tomorrow; in fact, it has barely begun. As the routing table size distribution creeps to the right, the number of routers in the world that “see” 512K+ routes will steadily increase. Within a few weeks, nearly every piece of vulnerable gear will have been discovered, as 512K+ becomes the global consensus opinion. We don’t know how many machines that represents, and we don’t know what the net impact will be on local Internet connectivity before it all gets sorted out.

There is irony lurking here, of course, if you read the advisories. You can change the default configuration to reclaim more TCAM for IPv4... but only at the expense of support for IPv6, the “next generation” Internet addressing scheme that continues to struggle for widespread adoption. Sadly, this elderly gear was shipped at a time when the world was full of hope for the emergence of a real, live, flourishing IPv6 routing table. There’s far too much TCAM allotted to IPv6, as a result (in at least one case, 256K routes, when the current IPv6 routing table still requires fewer than 20K).
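The tradeoff exists because IPv6 entries are wider: on the affected platforms an IPv6 route is commonly described as consuming two IPv4-sized TCAM slots, so the default split of 512K IPv4 plus 256K IPv6 exactly fills the hardware. A sketch of that arithmetic (the 1M total slot count and 2x slot cost are the commonly cited figures for this gear, treated here as assumptions):

```python
TOTAL_SLOTS = 1_048_576   # assumed: 1M IPv4-sized TCAM slots on the affected gear
IPV6_SLOT_COST = 2        # assumed: an IPv6 entry is twice as wide as an IPv4 entry

def ipv4_capacity(ipv6_routes_reserved):
    """IPv4 routes that fit after reserving TCAM room for IPv6."""
    return TOTAL_SLOTS - ipv6_routes_reserved * IPV6_SLOT_COST

print(ipv4_capacity(256 * 1024))  # default split: 524288 -- the 512K wall
print(ipv4_capacity(20 * 1024))   # room for today's ~20K IPv6 table: 1007616
```

Shrinking the IPv6 reservation to something closer to the actual IPv6 table size roughly doubles the IPv4 headroom, which is exactly the reconfiguration the advisories describe.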

You can reclaim most of that precious router memory for IPv4, and you’ll be fine again... at the expense of evicting your IPv6 routes from TCAM. That’s probably a decent bet, since anyone who failed to future-proof their deployment and is still running this older gear probably has very, very little IPv6 traffic on their network anyway. For IPv6 aficionados who are tracking the continuing growth and robust good health of the “legacy” IPv4 Internet, that’s called “cold comfort.”
http://www.renesys.com/2014/08/inter...global-routes/

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

August 9th, August 2nd, July 26th, July 19th


Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black