P2P-Zone > Peer to Peer

05-02-14, 09:14 AM   #1
JackSpratts
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - February 8th, '14

Since 2002
"This is a small city that I had never heard of. It beat Seattle, New York, San Francisco in building the Gig. People here are thinking big." – Toni Gemayel


"Fancy the details for a while, then publish em to enlarge my e-penis." – p0ke
February 8th, 2014




GCSB Deleted Key Evidence – Dotcom
David Fisher

The spy agency which illegally monitored Kim Dotcom's communications has admitted deleting information needed in the upcoming $6 million damages hearing, according to the tycoon.

Dotcom last night tweeted the claim, saying: "The GCSB spy agency seems to have deleted evidence relevant to my case against the GCSB for illegally spying on NZ residents.''

He quoted Crown lawyers as saying "some communications have automatically aged off. We propose to include ... those communications which are still recoverable''.

Dotcom claimed lawyers acting for the GCSB told him the material had "aged off'' the system, suggesting it had automatically deleted.

He also posted a video of Prime Minister John Key, who is in charge of the agency, saying: "This is a spy agency. We don't delete things. We archive them.''

Mr Key's office said he was speaking specifically about allegations the GCSB deleted a video of him talking about Dotcom inside its top secret building.

"He stands by what he said,'' said a spokeswoman.

The claim that evidence was deleted has brought fresh calls for an independent inquiry into the agency, described today by the Labour and Green parties as operating outside the law.

Green Party co-leader Russel Norman said: "If it is true, then they are a rogue agency operating in contempt of the law and courts.''

Information sought as part of a court process is meant to be preserved - and doing otherwise was "basic contempt of court'', said Mr Norman.

He said Mr Key was attempting to distance himself from his statement in Parliament, saying the comments were made "in the most general terms''.

"He has misled the House.''

He said an independent inquiry into the GCSB would be part of any coalition negotiations after this year's election.

Labour associate spokesman on security issues Grant Robertson said he was concerned about the implications of Dotcom's claims.

"If true, it speaks of an agency that has operated where they don't believe they need to pay attention to the law.'' He said people would ask why they should "trust an agency like this if it's not going to comply with the law''.

He said Mr Key needed to "come clean'' about what he knew about the deleted information.

The inquiry into the GCSB by former Cabinet secretary Rebecca Kitteridge, the incoming Security Intelligence Service boss, referred to material being "aged off'' its systems.

The process was referred to when detailing how the GCSB dealt with failure to follow its own law or rules. She wrote "the information concerning the target will be deleted within GCSB if it has not already 'aged off' the system''.

Speaking in Auckland later, Mr Key said Dotcom was "completely and utterly wrong''.

"I can't talk specifically about Mr Dotcom's evidence because it's before the courts. But what I can say is the claims that he's making that there's some kind of inconsistency with how we treat things is quite wrong,'' he said.

"Essentially, legal documents that are created by GCSB are held in their system and archived for evidence. Raw intelligence has to actually, by law, age off the system if it's no longer relevant or required,'' he said.

"The great irony is, if you cast your mind back to the GCSB debate, there were many people arguing that the GCSB shouldn't hold on to data for as long as it does. Now these same people seem to be saying 'ah well, we should be holding onto this data forever'. They're just trying to join dots that cannot be joined and confuse people.''
http://www.nzherald.co.nz/nz/news/ar...ectid=11196601





Adobe to Require New Epub DRM in July, Expects to Abandon Existing Users
Nate Hoffelder

When Adobe announced their new DRM a couple weeks ago some said that we would soon see compatibility issues with older devices and apps as Adobe forced everyone to upgrade.

At that time I didn’t think Adobe would make the mistake of cutting off so many existing readers, but now it seems that I could not have been more wrong on the issue.

The following video (found via The SF Reader) confirms that Adobe is planning to require everyone (ebookstores, app and device makers) to upgrade to the new DRM by July 2014.

The video is a recording of a webinar hosted by Datalogics and Adobe, and it covers in detail how and when the new DRM will be implemented (as well as a lot of other data). If the embed link doesn't work for you, here's a link to the video on YouTube.

The tl;dr version is that Adobe is going to start pushing ebook vendors to provide support for the new DRM in March, and when July rolls around Adobe is going to force the ebook vendors to stop supporting the older DRM. (Hadrien Gardeur, Paul Durrant, and Martyn Daniels concur on this interpretation.)

This means that any app or device which still uses the older Adobe DRM will be cut off. Luckily for many users, that penalty probably will not affect readers who use Kobo or Google reading apps or devices; to the best of my knowledge neither uses the Adobe DRM internally. And of course Kindle and Apple customers won’t even notice, thanks to those companies’ wise decision to use their own DRM.

But everyone else just got screwed.

If you’re using Adobe DE 2.1, come July you won’t be able to read any newly downloaded DRMed ebooks until after you upgrade to Adobe DE 3.0. If you’re using a preferred 3rd-party reading app, you won’t be able to download any new DRMed ebooks until after the app developer releases an update.

And if you’re using an existing ebook reader, you’d better plan on only reading DRM-free ebooks until further notice.

One thing Adobe seems to have missed is that there are tens of millions of ebook readers on the market that support the older DRM but will probably never be upgraded to the new DRM. Sony and Pocketbook, for example, have released a number of models over the past 5 or so years, most of which have since been discontinued.

Do you really think they’re going to invest in updating a discontinued (but otherwise perfectly functional) device?

I don’t, and that’s just the tip of the iceberg. Not only will millions of existing readers be cut off, there are also hundreds of thousands of ebook readers sitting on store shelves which, as of July, will no longer support Adobe DRM.

And do you know what’s even better? All signs point to the ebook reader market having peaked in 2011 or 2012 (I have industry sources which have said this) so the existing and soon to be incompatible ereaders will probably outnumber the compatible models for the indefinite future (years if not decades).

If you look hard enough you can still buy many of the ebook readers released in 2010, 2011, and 2012 as new, and you can also find them as refurbs or used. They work just fine today (albeit a little slowly by today’s standards) but when July rolls around they will be little more than junk.

And that includes ebook readers owned by libraries and other cost conscious institutions.

If you’re beginning to grasp just how bad this move could be, wait a second because I’m not done.

Not only will readers be affected, but so will indie ebookstores. They’re going to have to pay to upgrade their servers and their reading apps. That cost is going to hit them in the pocketbook (potentially driving some out of business), and that’s not all.

Many if not most of the indie ebookstores are dependent on the various Adobe DRM compatible ebook readers on the market. They cannot afford to develop their own hardware, so they rely on readers buying and using devices made by other companies including Pocketbook, Sony, Gajah (a major OEM), and others.

Once those existing ebook readers are abandoned by Adobe the indie ebookstores will probably lose customers to one or another of the major ebook vendors.

In other words, Adobe just gave Amazon a belated Christmas present. After all, everyone might hate Amazon, but we also know we can trust them not to break their DRM.

Folks, the above scenario spells out all the reasons why I didn’t expect Adobe to completely abandon support for the older DRM. It is so obviously a bad idea that I thought they would avoid it.

With that in mind, I would also like to add an addendum and apply Tyrion’s Razor. Perhaps Adobe has internal data which says that this won’t be a serious issue. I seriously doubt it, but it’s possible.

P.S. But if this turns out to be the utter disaster I am expecting, I would like to take this opportunity to thank Adobe for on yet another occasion giving DRM a bad name.
http://www.the-digital-reader.com/20.../#.UvDsnuVal20





Why I’ll be Pirating Adobe’s Products From Now On

Edit: I’ve had some questions on software that I’ve been using in the interim. For Vector design, Sketch is a near perfect replacement. As for Photoshop, honestly, finding a perfect replacement has been hard, but one of the best applications I’ve been looking at is Pixelmator. If you have any suggestions for replacement software, with great support, leave a comment!

Adobe and I had a great relationship. I paid them money, they provided me with a fantastic product. While I was in college, my biggest investment was the Adobe Master Suite CS5. It was everything I'd ever need. And no, it wasn't part of my tuition. I worked long hours to cover that bill, but I knew it was worth every penny. Photoshop and Illustrator were my preferred applications, a systematic ménage à trois that produced some of my greatest work to date. But oh, the heartbreak, when I found out my product (relationship) was no longer supported.

It all started with my new Macbook Retina. A gleaming beacon of self-worth and productivity (yes, Mac IS more productive). I had to reinstall my entire Adobe Master Suite onto it, sans a cd-rom… So I did what many people do. I looked for a download of the software, but to no avail: there were none for the "Student/Teacher License". I understand there is a difference in licensing, but all I wanted was the same product I had spent all my hard earned cash on, not an upgrade, no special treatment.

I called Adobe support, and after waiting on the line for a good 35 minutes, I was greeted by Adobe's "International" support team, based out of India where they can pay pennies on the dollar for you to get support in broken english. It's almost as good as real english… almost. After explaining my ordeal to the support tech, he guided me to Adobe's website, asked me to sign into my account and hurriedly told me the download link would be under the "My Products" section. I kept him on the line while I checked, but couldn't find the link. I asked him to walk me through it, and he put me on hold for 2 minutes. When he came back on, he said that I should instead go to the Adobe forums, and the download link would be in there. I did as he suggested and tried searching for Master Collection, CS5, Student, Teacher, and more keywords, but did not find anything. When I asked him what I should look up in their search, he put me on hold again. Finally he came back to tell me that my product was no longer supported, and my options were to either buy an upgrade or download it from "somewhere else". The conversation went like this:

Support: You can download it on the web somewhere.
Me: Is there a specific site you would recommend?
Support: No Sir, you will just have to search it.
Me: All I’ve found is torrents for cracked software.
Support: Yes Sir, that will work.
Me: So you’re telling me I should download an illegal copy of your software, and use my legal serial with it?
Support: Yes Sir, that will work.


I was floored. My bright and shining star for everything design had instructed me to pirate their software. I couldn't believe it. Why would a company as brilliant, professional, and steadfast as Adobe tell me to pirate their software? I guess you could argue that Adobe themselves weren't "telling me" to pirate it, but they were giving me no options, and poor support. My next stop: calling Adobe's sales department.

After a much shorter hold period (they wouldn't want to make it difficult for you to pay them now, would they?) I was connected with a sales rep. His manner was very professional, and I could tell he was smart. "Thank god I got a smart sales guy. He'll understand what to do," I said to myself. I explained my predicament and told him what I had talked with Adobe's support team about. I even told him that the support representative said I could pirate their software and it would work with my serial. "Well, that's not illegal as long as you have your serial applied to the product," the sales rep said. So I thought for a moment. Was pirating Adobe's software truly illegal? I mean, the real purchase is the serial number. You can get the trial software on their site for free, but once you pay up, you receive a license. But that did not settle my curiosity. I pushed the sales rep harder, but every response was another opportunity for him to get me to upgrade to CS6. I put it to the salesman like this:

Let’s say you purchase a car. Under warranty, something goes wrong with your car which is covered. You bring that car back to the place of purchase and you ask for them to fix your car. They tell you that they cannot fix your car, but they can sell you a new one.

Well, I lost the battle, but I have not lost the war. Situations like these prompt me to ponder, "What is GREAT support?" It's a complicated struggle for Adobe: maintaining their software standards, fending off hackers and pirates, and providing genuine, gracious support. I can't blame the support rep from India for suggesting I pirate their software. I cannot blame the sales rep for pushing me to upgrade and ignoring the root of my problem. They're all pieces of the puzzle, and are all informed of only what they need to know. But to truly make an impact, you have to bring a human element back into the game. Most people genuinely want to help. When their hands are tied, they aren't acting human any more. You have to allow your employees some sort of freedom to help customers in need, Adobe. You're making it difficult for your employees to succeed, and for your customers to be won over by your support. Take a walk on the wild side, Adobe, and imagine if your bank did this to you. "Your account is no longer supported, you'll have to purchase our new package in order to access your funds."

Edit 2: For any of you who read this and think I'm an idiot: don't mistake my anger for the true issue. Adobe representatives have no resolution for my problem other than to say "it's not supported". Paying as much as I did for an Adobe license, I would expect better support.
http://parkerrr.com/ill-pirating-adobes-products-now/





Patent Troll CEO Explains Why Company Wants Names of EFF Donors

Personal Audio thinks its opponents want two shots to kill the same patent.
Joe Mullin

The patent-holding company that wants all podcasters to pay up is just looking for a fair shake.

The CEO and general counsel of Personal Audio LLC got on the phone with Ars Technica to explain why the company is asking for the identities of more than 1,300 donors who have chipped in to help the Electronic Frontier Foundation fight its podcasting patent. The subpoena seeking donor identities and a wide array of other information connected to EFF's fight against the patent was revealed by EFF in a Wednesday blog post. EFF has moved to quash the subpoena in court, saying that while some donors are very public about their support, they also have a First Amendment right to contribute anonymously.

The fundraiser in question was kicked off by EFF to pay for what's called an "inter partes review" at the US Patent and Trademark Office. EFF sought to raise $30,000, but Personal Audio's attempt to make patent demands against podcasters struck a nerve: to date, about $80,000 has been raised from more than 1,300 donors.

Personal Audio CEO and general counsel Brad Liddle explained this morning that the company is just trying to make sure its opponents don't get two bites at the apple while the fight over the patent goes forth. With the IPR petition moving forward at the patent office and litigation proceeding in Texas federal courts, Personal Audio apparently suspects that the same people are behind both.

"EFF insinuates the information we are seeking is not relevant to the Texas litigation," said Liddle in a brief interview with Ars. "But to the extent that other third parties have donated or assisted to the PTO proceeding—to the extent they've been working on the inter partes review—they should be bound by the result."

Much of the prior art that has been presented to the patent office has also been brought up in the Texas cases, said Liddle. He believes that if the Texas defendants are involved in the patent office proceeding, they shouldn't be allowed to present their same defenses all over again in federal court.

"If there's a corporation or a person that has assisted EFF in the PTO proceeding, there's an estoppel argument" that should stop them from using the same defenses again, he said. Personal Audio shouldn't have to "engage in duplicative validity challenges, in expensive litigation."

The defendants in the Texas lawsuits include the Discovery Channel-owned HowStuffWorks podcast, NBC, CBS, and Fox, as well as Ace Broadcasting (which produces Adam Carolla's podcast), and a smaller Internet radio company called TogiNet.

The inclusion of Lindale, Texas-based TogiNet appears to be a play to keep the larger defendants in Personal Audio's chosen venue, the Eastern District of Texas, which continues to be a popular venue for patent plaintiffs.

"If they want to find out whether the defendants in Texas donated, they can ask the defendants," pointed out EFF's Nazer—a point made in the group's motion to quash. "They don't have a reason to invade the privacy of more than 1,000 donors."

While the legal wrangling continues, old Internet shows dredged up by the EFF petition have gone a long way toward setting the historical record straight. Episodes of "Internet radio" shows date back to at least 1993, years before Personal Audio founder Jim Logan filed the patents connected to his failed "news-on-cassettes" business.

Given that there's no question Internet broadcasting pre-dated Logan's business, Ars asked if Liddle and his colleagues at Personal Audio felt that it was justifiable to keep pursuing small podcasters for royalty payments. "I'm not going to comment on that," he said.

Personal Audio's response to the EFF patent office petition is due next week, and the office is expected to decide whether or not to review the patents by early May. If it does institute a review, that process could take a year or more.

In addition to the Texas lawsuits, Personal Audio has sent out demand letters that have been recorded on EFF's "Trolling Effects" site. It's unclear how many letters it has sent.
http://arstechnica.com/tech-policy/2...f-donor-names/





Facebook File-Sharing App Pipe Shifting from Flash to WebRTC

A tool for sharing files among Facebook contacts has launched a new version built on Web standards instead of Flash. That will let it reach mobile devices in coming weeks.
Stephen Shankland

Pipe just launched a new version of its Facebook file-sharing app, illustrating that the shift away from Adobe Systems' Flash Player to Web standards is getting steadily easier.

The new Pipe app uses a newer standard called WebRTC for real-time communications on the Web, the company said Monday. That standard got its start for Skype-like video and audio chats, but it's got a data-sharing ability too. The brains of the new app run in JavaScript, the universal language of Web programming, with a boost from the AngularJS project that makes JavaScript more manageable.

Pipe lets people send files as large as 1GB to each other when sender and recipient are both online -- a peer-to-peer connection that Pipe merely facilitates. If the recipient is offline, Pipe has to store the file for a time, and the limit is 250MB. (Pipe will hold the file for three days before deleting it.) Previously, Pipe had a maximum size of 100MB.
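The transfer rules described above amount to a simple routing decision. Here is a minimal sketch in Python; the size limits and three-day retention come from the article, but the function and constant names are hypothetical, not Pipe's actual API:

```python
# Illustrative sketch of Pipe's transfer rules as described in the article:
# direct peer-to-peer transfer up to 1 GB when both parties are online,
# store-and-forward up to 250 MB (held three days) when the recipient is offline.
# All names here are made up for illustration.

GB = 1024 ** 3
MB = 1024 ** 2

P2P_LIMIT = 1 * GB       # sender and recipient both online
STORED_LIMIT = 250 * MB  # recipient offline; file held on Pipe's servers
HOLD_DAYS = 3            # stored files are deleted after three days

def route_transfer(size_bytes: int, recipient_online: bool) -> str:
    """Decide how (or whether) a file can be sent."""
    if recipient_online:
        return "p2p" if size_bytes <= P2P_LIMIT else "too-large"
    return "store-and-forward" if size_bytes <= STORED_LIMIT else "too-large"
```

For example, a 500MB file goes peer-to-peer if the recipient is online, but is rejected if they are offline, since it exceeds the 250MB store-and-forward cap.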

The company plans a premium service later with larger files and longer-term storage, company co-founder Simon Hossell said.

Pipe only launched last June, but the Web's move away from Flash means the company's move to a different technology was only a matter of time.

Flash doesn't work on mobile devices, and mobile computing is an ever more important part of people's computing lives. Pipe apps for iOS and Android are "a few weeks away," the company said.

At the same time, new Web standards can't always be relied upon. The new version of Pipe only works on Chrome and Firefox for now.
http://news.cnet.com/8301-1023_3-576...ash-to-webrtc/





Students Warned About File-Sharing, Risks at Stake
Stephen Mays

Every student at the University of Georgia was told not to harbor file-sharing software on computers.

This is a preemptive measure the Enterprise Information Technology Services takes to protect students from violating copyright laws and endangering the safety of UGA’s network.

Despite the risks of copyright infringement, EITS does not track violations.

“We do not reach out to copyright owners,” said Laura Heilman, the security awareness training and education manager for EITS. “They reach out to us.”

Though many students may think EITS watches every keystroke and website visited on UGA’s network, that’s not the case.

“We’re not big brother,” Heilman said. “We’re not really looking at the uses so much as the volume [of data downloaded].”

Heilman said that upon a copyright owner's notification, a letter is sent to the corresponding student. The letter details what the student possesses that violates copyright and how to proceed with removing the file and deactivating his or her peer-to-peer networks.

“Peer-to-peer folders (or technologies) are systems created to share files online,” according to the Office of Student Conduct. “The most popular types of distribution include music, movies and other digital media.”

File-sharing software, peer-to-peer networks and copyright laws are dense topics many students don’t have prior knowledge of.

“I think most students don’t understand the intricacies of copyright unless they’ve taken some sort of law class,” said Michael Bottone, a sophomore computer science major from Snellville.

“I think allowing for file sharing but not torrenting would be a good step in the direction in allowing legitimate sharing to happen while also stopping the avenue for most illegal downloads.”

Heilman said EITS has to hold file-sharing networks to such a strict standard because of the risks. It only takes a small amount of code, she said, to infect a computer on UGA's network. She also said the actual file a student downloads does not always match what it claims to be.

“Imagine a virus unleashed on the network,” Heilman said. “We’re talking tens of thousands of computers that could potentially be infected.”

Heilman said there are some sites that are perfectly legitimate, sharing files that are part of the public domain, though they account for roughly 2 to 4 percent of the total.

“A lot of students don’t understand that when they are participating in a torrenting site or file sharing service, that the contents of that shared folder are broadcast,” she said. “For the most part, when you download something, you’re expected to make it available to other people. By making it available to other people, you’re registered on a tracker, and that’s a direct link to you and to us if you’re on our network. So we take a preemptive stance and just say don’t do it here.”
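Heilman's point about being "registered on a tracker" is visible in the BitTorrent announce request itself: a client reports its listening port to the tracker (which also records the requesting IP address), and the tracker hands that address out to every other peer. A rough sketch of such an announce URL in Python; the tracker host, hash, and peer ID below are invented for illustration:

```python
from urllib.parse import urlencode

def build_announce_url(tracker: str, info_hash: bytes, peer_id: bytes,
                       port: int, left: int) -> str:
    """Build the GET request a BitTorrent client sends to a tracker.

    The tracker records the requester's IP address alongside `port`, so
    anyone who asks the tracker for peers gets a direct link back to you.
    """
    params = urlencode({
        "info_hash": info_hash,   # identifies the shared content
        "peer_id": peer_id,       # identifies this client instance
        "port": port,             # where this client accepts connections
        "uploaded": 0,
        "downloaded": 0,
        "left": left,             # bytes still needed; 0 means seeding
    })
    return f"{tracker}?{params}"

# Hypothetical values, for illustration only.
url = build_announce_url("http://tracker.example.org/announce",
                         b"\x12" * 20, b"-XX0001-123456789012",
                         port=6881, left=0)
```

The tracker's response is a list of other peers' IP addresses and ports, which is why downloading through a tracker necessarily broadcasts your participation, exactly the "direct link to you and to us" Heilman describes.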

Many of these shared files consist of music, videos and other forms of entertainment. Heilman said even if one person purchases a movie or song legally and then shares it, he or she is infringing copyright.

But streaming, she said, is fine.

“We’re concerned about you going out, getting something that’s copyright protected, putting it on our network and allowing other people to access it,” she said.
http://www.redandblack.com/uganews/c...a4bcf6878.html





German Court Finds Domain Registrar Liable for Torrent Site's Copyright Infringement

If left unchallenged, this ruling will endanger the entire business model of registering domain names, the registrar said
Loek Essers

A domain name registrar can be held liable for the copyright infringements of a website it registered if it is obvious the domain is used for infringements and the registrar does nothing to prevent it, the Regional Court of Saarbrücken in Germany has ruled.

The court ruled in a case between Universal Music and Key-Systems, the German registrar of the domain name for h33t.com, a torrent tracker site.

Universal had wanted to prevent unauthorized distribution of Robin Thicke's album Blurred Lines, said Volker Greimann, Key-Systems' general counsel, in an email.

While Key-Systems argued that it was not responsible for the copyright infringement, the court ruled that the registrar had a duty to investigate after notification of infringing activity and had to take corrective action in case of obvious violations, Greimann said.

"The courts' definition of what is obviously violating is however extremely broad and the duty to act is expanded to deactivation of the entire domain even if only one file or link is infringing," he said.

"If left unchallenged, this decision would constitute an undue expansion of the legal obligations of each registrar based in Germany, endangering the entire business model of registering domain names or performing DNS addressing for third parties," he said, adding that Key-Systems is currently reviewing the decision and considering its options for an appeal.

Universal demanded that Key-Systems end the infringements by deactivating the domain, and asked it to sign a contract agreeing not to let the infringement happen again in the future, Universal's lawyer Mirko Brüß wrote in a blog post on Wednesday. He also published the verdict, dated Jan. 15.

When Key-Systems refused to heed Universal's demands, the music company sued the registrar.

During the lawsuit, Key-Systems argued that it was only providing a technical service, much as DENIC, the central registry for all domains under the .de top-level domain, does.

The Federal Court of Justice in Karlsruhe had already ruled that DENIC is generally not liable for rights violations, the verdict showed. DENIC could only be liable if the infringement is obvious and readily identifiable, even after it was notified that rights violations were taking place on a specific domain, the Federal Court of Justice ruled.

However, the Regional Court of Saarbrücken found that the rights violations of h33t.com were obvious and easy to identify, said Brüß. Since the album was still shared through h33t after several requests sent to the website's operator by Key-Systems to stop the infringing activity, the registrar had to act to stop the infringement, the court found.

If someone notifies the registrar of a clear violation of the law, it must examine the specific allegation immediately and close the domain if necessary, the court ruled.

If Key-Systems ignores this ruling it faces a maximum fine of €250,000 (US$339,000).
http://www.itworld.com/it-management...t-infringement





UK Government Tackles Wrongly-Blocked Websites
Mark Ward

The government is drawing up a list of sites inadvertently blocked by the filters it asked internet service providers (ISPs) to implement.

Many sites on the list are run by charities that aim to educate children and others about health, sex education and drugs issues.

The whitelist will be used to ensure the sites are not immediately blocked.

The list has emerged from a working group looking into accidental blocking and how to fix the problem.

'Master list'

The group is also looking into ways to set up a standard system that will let any site which thinks it has been wrongly blocked tell ISPs about the mistake so it can get on to the approved list.

"Research suggests the amount of inadvertent blocking is low," said David Miles, who chairs the working group on over-blocking for the government's UK Council for Child Internet Safety.

"However, if you are a charity and you deal with teenagers in distress, that one or 10 matters to you."

Other reports have suggested that many innocuous sites such as TorrentFreak, a copyright and privacy news site, are being accidentally caught up in the filters ISPs are starting to use.

A spokesman for the Internet Service Providers Association said: "There's a growing realisation that filters are not perfect and will lead to some over-blocking."

"There's a feeling that some sites sit in a grey area and more needs to be done for them."

The working group was set up in the wake of a Downing Street internet safety summit held in November 2013 which aimed to get ISPs doing more to filter inappropriate content.

The group first met in December and involved ISPs, charities, representatives from government, the British Board of Film Classification (BBFC) and mobile operators.

Since then, Mr Miles said he had been reviewing research on inadvertent blocking and visiting charities to find out how the steady introduction of web-based filters for adult and inappropriate material had hit visitor numbers.

UK mobile operators have run site blocking systems for some time

"We are building a master list of sites that the charities are helping us with and actively testing this right now," Mr Miles told the BBC.

Soon the list would be shared among ISPs that had introduced network-level filters to ensure that the educational sites were widely viewable.

The need for the list of sites wrongly blocked would become more pressing in 2014 as ISPs contacted established customers and asked them to choose whether to switch on the filters, he said.

Currently most big UK ISPs only ask new customers to make a choice about net filters.

"What we are seeing in the UK is quite unusual," said Mr Miles, who is also the European director for the Family Online Safety Institute.

"At the ISP level, on public wi-fi and via mobile operators, the UK will be subject to a substantial amount of network-level filtering all of a sudden."

"That new network-level filtering could increase the level of over-blocking," he said.

Getting systems in place now would help later in the year as more and more web browsing becomes subject to filtering, he added.

Eventually, Mr Miles said, standardised systems might emerge that let sites check if their content falls foul of the filters, or put in place a simple way for sites to inform all ISPs that they do not have inappropriate content.
http://www.bbc.co.uk/news/technology-25962555





Turkey Cracking Down on Internet Usage
Jacob Resneck

• Law limiting some Internet usage passes Parliament
• Critics have warned bill is anti-democratic
• Turkey has troubled relationship with social media

Turkey's Parliament voted Wednesday to empower authorities to censor the Internet at will, a move intended to silence dissent in a way many warn would turn the clock back on the country's democracy.

"One man can order a website to be closed, it's really anti-democratic," said lawyer Serhat Koc, an activist with Turkey's Pirate Party, which has been campaigning against the bill.

Turkey's Parliament approved on a show of hands the legislation that would allow its telecommunications authority to block websites without a prior court decision.

The bill would require Internet service providers to keep two years of every user's online history in what critics say would amount to a surveillance network on every Web user in the country.

Last month, police violently dispersed hundreds of demonstrators who rallied in Istanbul, Ankara and the coastal city of Izmir. Protesters flew banners and chanted slogans that brought a crackdown by riot police.

International press groups say the street protests demonstrate how much is at stake for a free society.

"I don't think people are standing up to water hoses and tear gas to look at cute cat videos," said Geoffrey King, Internet advocacy coordinator for the New York-based Committee to Protect Journalists. "The Internet has the potential to be the greatest enabler of human expression, it also has the potential to be the greatest tool of social control."

The move has brought concern that Turkey — considered a key NATO ally for the U.S. and Europe — is becoming more authoritarian.

Widespread protests last spring that started over saving Istanbul's Gezi Park left at least six dead and injured more than a thousand, shaking the foundations of the Islamist-rooted AK Party that has ruled for 11 years.

So far Washington has refrained from criticizing its ally directly but has registered concern about Internet freedom.

"As the Turkish government evaluates its approach to Internet freedom, we hope the highest standards of openness and free expression will be protected," said State Department spokeswoman Katherine Pfaff in comments published on CPJ's website.

The bill would require service providers to take down, within four hours, any content found in violation by the country's telecom authority, or face fines of up to 100,000 lira ($44,500). It would also close loopholes and technical workarounds that are popular in a country that has already blocked an estimated 40,000 sites since 2007.

"This is not just about blocking access to certain types of content. They are trying to build up a new infrastructure to surveil people and collect data about all Internet users from Turkey," said Yaman Akdeniz, a law professor at Istanbul's Bilgi University. "This obviously has serious implications, unprecedented I would say."

Turkey's current laws are designed to protect minors from harmful content. Many of the sites blocked are pornographic but some alternative media outlets and video sharing sites have also been banned.

The bill was unveiled in December, a day after family members of top politicians were implicated in a wide-ranging corruption probe targeting the prime minister's family and inner political circle.

Prime Minister Recep Tayyip Erdogan has so far been able to contain the probe by sacking hundreds of prosecutors and judges and reassigning thousands of police, saying his government is the victim of an international plot to undermine Turkey's world standing.

But in recent weeks videotapes and audio recordings from wiretaps have surfaced online that implicate Erdogan and his associates in shady dealings with business groups.

King, of the Committee to Protect Journalists, said the Internet bill would give the government broad censorship powers.

"Clearly if these amendments were to pass they would be very useful in clamping down on embarrassing information coming out," he said.

Business groups have also weighed in. The Turkish Industry and Business Association penned a letter this week to the Family and Social Policies Ministry, which proposed the bill. It warned that the restrictions would deter investment and undermine the separation of powers by allowing the executive to censor content at will.

Turkey is a very wired country. It has one of the highest Internet usage rates in Europe with about 33 million registered Facebook accounts in a country of about 76 million people.

With government pressure on traditional media outlets like newspapers and television stations well-documented, many people go online to share and receive news.

"The only reliable information to obtain for the Turkish people and people living in Turkey is through social media and alternative news sites," Akdeniz said. "Facebook groups and Twitter have become crucially important for many people including myself."

That's led top politicians like Erdogan to condemn sites like Twitter as tools for extremists. During the height of the Gezi Park protests last June the prime minister declared, "To me, social media is the worst menace to society."

But rather than crack down on social media Erdogan's AK Party initially chose to fight fire with fire. This summer the party reportedly enlisted around 6,000 online volunteers to boost its presence online.

But that has apparently not been enough, said Akdeniz, and the government may resort to cruder methods to control the online sphere.

"There are other countries like Iran, Syria and China that try to deploy similar controls and unfortunately Turkey is moving toward that direction," he said.
http://www.usatoday.com/story/news/w...rship/5220339/





Web Companies Give First Look at Secret Government Data Requests

Facebook, Microsoft, Yahoo and Google on Monday began publishing details about the number of secret government requests for data they receive, hoping to show limited involvement in controversial surveillance efforts.

The tech industry has pushed for greater transparency on government data requests, seeking to shake off concerns about their involvement in vast, surreptitious surveillance programs revealed last summer by former spy contractor Edward Snowden.

The government said last month it would relax rules restricting what details companies can disclose about Foreign Intelligence Surveillance Act (FISA) court orders they receive for user information. Several companies, including Google and Microsoft, sued the government last year, seeking the ability to disclose more of that data.

Microsoft General Counsel Brad Smith said on Monday the latest data showed that the info the government has asked online companies to turn over has not been as vast as some feared.

"We have not received the type of bulk data requests that are commonly discussed publicly regarding telephone records," Smith said. "This is a point we've publicly been making in a generalized way since last summer, and it's good finally to have the ability to share concrete data."

Between 15,000 and 15,999 Microsoft-user accounts were the subject of FISA court orders requesting content during the first six months of 2013, the company said.

Still, Smith cited media reports - based on Snowden's leaked documents - that the government may have intercepted user information without tech companies' knowledge or cooperation, by tapping into communications cables that link Google and Yahoo datacenters.

"Despite the President's reform efforts and our ability to publish more information, there has not yet been any public commitment by either the U.S. or other governments to renounce the attempted hacking of Internet companies," he said on Microsoft's blog. "We believe the Constitution requires that our government seek information from American companies within the rule of law."

BREAKDOWN

Several Internet companies had previously disclosed an approximate number of national security letters, which typically seek customer data without court approval. Now, they have greater leeway also to disclose details around FISA orders.

Google said that between 9,000 and 9,999 of its users' accounts were the subject of such requests during the period, while Facebook said it received FISA content requests for between 5,000 and 5,999 members' accounts.

Yahoo said between 30,000 and 30,999 of its users' accounts were subject to FISA requests for content, which it said could include words in an email or instant message, photos on its Flickr photo-sharing service and address book or calendar entries.

The companies released the information on their respective blogs.

The various requests affected a fraction of a percent of the hundreds of millions of users each company says employ its online services, from email and search to social networking.

In terms of aggregate requests, Microsoft, Google and Facebook said they each received between 0 and 999 FISA content requests during the first six months of 2013.

The companies are required to report the number of requests in increments of 1,000, and can only report the data with a six-month delay, under the relaxed rules.
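Those disclosure rules explain why every company reports a range rather than an exact figure. A minimal sketch of how an exact count maps into a 1,000-wide reporting band (the function name and string format here are illustrative, not any official scheme):

```python
# Illustrative: map an exact request count into the 1,000-wide
# disclosure bands the relaxed FISA rules require.
def reporting_band(count, width=1000):
    """Return the band 'low-high' containing count, e.g. 742 -> '0-999'."""
    low = (count // width) * width
    return f"{low}-{low + width - 1}"

print(reporting_band(742))    # "0-999"
print(reporting_band(15421))  # "15000-15999"
```

Any count from 15,000 to 15,999 produces the same band, which is why Microsoft's figure above cannot be pinned down more precisely.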

The three companies also said they had received between 0 and 999 "non-content" FISA orders between January and June 2013, seeking general information such as user names.

(Reporting by Alexei Oreskovic; Editing by Dan Grebler)
http://uk.reuters.com/article/2014/0...A121H920140203





Newly Wary, Shoppers Trust Cash
Hilary Stout

Like dieters vowing to trade cupcakes for carrots, a number of American shoppers are making a new pledge: cash only.

The drumbeat of disclosures about credit and debit card breaches at major retailers (and hints of more to come) has unnerved consumers to the point where chatter online and at the water cooler is filled with people promising to curb their plastic habits.

“This is CRAZY. First my Target card, now this,” wrote Lorraine McCullough on the Michaels Stores Facebook page last week after the arts and crafts chain said that it was investigating whether customer data had been exposed. “I am going to pay cash from now on.”

Similar sentiments poured in.

“Cash! Simple as that.”

“I am carrying cash for now on as well. The good old-fashioned way.”

“Yup, cash is the best way quite honestly.”

With Senate hearings on the recent Target breach and the security of consumer data scheduled for Monday and Tuesday, discussion about what consumers can do on their own is likely to grow even louder.

Shoppers Concerned About Information Security

A poll released last week by The Associated Press and GfK Public Affairs & Corporate Communications found that 37 percent of Americans had made an effort to use cash instead of credit or debit cards to pay for purchases as a result of the recent data thefts — almost as many as those who checked personal credit reports because of the thefts. (Just 29 percent said they had changed passwords or requested new cards.)

Even trying to use cash more often is a strange adjustment for a population that has become accustomed to the convenience of pulling out a little piece of plastic (the better to rack up rewards points) for purchases as small as a Diet Coke. Many people now swipe their cards with so little thought that they don’t even bother getting the receipt. Whereas cards were once reserved for big purchases, they have become acceptable for almost anything, including at formerly all-cash businesses like New York City taxis. More than a quarter of street food vendors now accept plastic, according to a recent study of food trucks and carts by Mobile Cuisine magazine, and 14 percent more say they will soon start.

With a variety of new forms of mobile payments — a television commercial for Chase’s QuickPay service shows how you can pay the teenage babysitter without cash — paper money has almost become an antiquated concept for some people, like a purse of gold coins.

Nicole McNamee of Germantown, Tenn., used a card for almost everything, even the $1 cups of coffee she routinely bought at the local McDonald’s drive-through. Then, last month, she and her husband learned of fraudulent charges — some $1,200 worth of purchases in Minnesota and California, including $400 at a Toys “R” Us on Christmas Eve — on her personal American Express card and a card he uses for his business.

Startled, they decided to follow the lead of some friends and take the pledge: cash only, whenever possible.

“We said, ‘Let’s just give ourselves a dollar value and pay cash for everything,’ ” Ms. McNamee said.

So far, they have found that the shift has had benefits beyond making them feel more secure. Since they can only spend what they have in their wallets, Ms. McNamee noted, “It has indirectly helped us keep on a better budget and save more.”

She starts the week with $100 and, when the money in her wallet is low, has found herself forgoing purchases she wouldn’t have hesitated to buy with a card. She has even come to value the spare pennies, nickels and dimes that she once tossed into random receptacles at home. Now she uses that change for her McDonald’s coffee.

Still, despite the talk, no hard data exists to indicate whether significantly more consumers actually are using cash.

“We aren’t releasing that data,” said a Visa spokeswoman in response to a query about whether the company had noticed a recent dip in card use. A MasterCard spokesman declined to comment. An American Express spokeswoman said, “In general, card usage at merchants is in line with seasonal trends we’ve seen in prior years in December and January.” While a number of shoppers have said they are using cash at Target specifically, Target itself would not say whether this is true. “Unfortunately, I don’t have any details on payment types to provide,” a spokeswoman wrote in an email.

Financial advisers and consumer advocates say there are drawbacks to an all-cash existence beyond the nuisance and expense of running to the A.T.M. (A study this year by Tufts University, called “The Cost of Cash in the United States,” found that the average American spends 28 minutes each month, or 5.6 hours a year, traveling to banks or A.T.M.’s to get cash. It found that the average fee to use a non-network A.T.M. is now about $3.85 per transaction.)

For one thing, cash is not an option with online purchases. And vowing to use cash is a knee-jerk response that doesn’t necessarily make sense, some say. Adam Levin, chairman and co-founder of Credit.com and Identity Theft 911, which provides data protection and management services to businesses, said he told himself to use cash when he went to a Target shortly after Christmas. But the truth is, he said, “That is not realistic.”

“Cash has its own drawbacks in terms of possibly being mugged and of keeping track of your expenditures,” said Susan Grant, director of consumer protection at the Consumer Federation of America, an association of consumer organizations around the country. There is also the danger of losing it; Ms. McNamee noted that she has found loose bills in the laundry.

Besides, said Ms. Grant, “You don’t want to be carrying a bag of money into Best Buy to buy a flat-screen TV. People shouldn’t have to resort to that for peace of mind.”

Mr. Levin noted that it was important for people to build up a credit history. Still, he said: “People are right to be terrified about what’s going on with these breaches. I think we have to face facts, that breach unfortunately is the new normal. It’s the new black. It’s just where we’re going. We have to focus on monitoring and damage control.”

Rather than resorting to paying cash, it is more important for people to monitor their bank and card statements every day, he said, and to make sure never to use a debit card for an online purchase. “You’ve got to live your life,” he said, “but you’ve got to add one additional layer of vigilance.”

Already, some of the cash vows are proving to be about as temporary as those no-cupcake pledges. Kenyetta Kelley of Dothan, Ala., had her debit card information stolen after she used the card on Black Friday at Target. (She learned this when she tried to use it at another store soon after and it was declined.)

“I told myself I’m just going to start using cash more,” she said.

That worked well for a week or so. But then a new debit card arrived in the mail.

“When I got the new card, I just felt safe again,” she said. And now she’s back to using it routinely.

“It’s just more convenient,” she said. “I try to keep cash on me but it’s just always gone.”
http://www.nytimes.com/2014/02/03/bu...t-cash.html?hp





Don't Panic, But That Public Wi-Fi Comes from ... Inside Your House
Julio Ojeda-Zapata

When Comcast asked Ronaldo Boschulte to swap out his malfunctioning broadband modem and Wi-Fi router with an all-new model late last year, he didn't know the Internet device was a high-tech Trojan horse of sorts.

Comcast fessed up a bit later in an email to the Maple Grove man.

The new Xfinity-branded modem and Wi-Fi router also works as a public Wi-Fi hotspot.

This means any Comcast subscribers within range can gain access to the Internet, via the router, simply by tapping in their Xfinity credentials.

“I didn't know it had a hotspot” feature, the accountant said. “That was pretty much a surprise.”

Boschulte has plenty of company in this regard — and not all are thrilled about it. Some Xfinity subscribers, when made aware of this public-hotspot feature embedded in their home routers, have reacted with a mixture of apprehension and suspicion. Others say they like it.

Comcast residential customers by the hundreds of thousands across the country now have the new Xfinity routers with this public-hotspot feature, which makes their homes rough equivalents of coffee shops and other public venues that have long offered free Wi-Fi.

Modems that Comcast sets up for its small-business clientele also are capable of broadcasting two separate Wi-Fi signals — one for private use by the company staffers and visitors and another for public use by any Xfinity subscriber who happens to be nearby.

Nearly 200 Twin Cities businesses with such public Xfinity Wi-Fi are listed in a “hotspot finder” directory at hotspots.wifi.comcast.com. Comcast, based in Philadelphia, is the No. 1 cable and Internet service provider in the Twin Cities.

Xfinity neighborhood and small-business hotspots, when lumped together, are approaching the 1 million mark, Comcast executives said during an earnings call last week. That is up from about half a million over the past year.

And the Internet and cable-television behemoth makes no secret of its plan to raise that figure as it aspires to be a major U.S. Wi-Fi provider — with the assistance of customers and their residential and business facilities.

Such customers aren't required to broadcast such public Wi-Fi signals, the company stresses, and they can easily turn it off.

But Comcast hopes they won't.

STITCHED TOGETHER

Comcast's grand plan to stitch together vast urban webs of overlapping and interlocking Wi-Fi networks is a major branding exercise, for one thing. Every such public hotspot has the same moniker — “xfinitywifi” — that is readily detectable by any Internet-capable laptop computer or mobile device via their Wi-Fi control panels. Xfinity also makes available apps for this purpose.

Comcast users log on to any such network with their Xfinity usernames and passwords.

Comcast hopes this might spur those who aren't Xfinity subscribers to consider signing up. To seal the deal, it offers a couple of complimentary Wi-Fi sessions, and then gives them the option of buying day passes to continue testing the service.

In addition, Comcast is positioning its rapidly expanding Wi-Fi footprint as a kind of public utility for its customers.

When they're away from their own Wi-Fi networks, they have any number of others available as they move about their urban areas. If one Comcast subscriber is visiting the residence of another Xfinity user, he or she can simply log on to the home's public wireless signal and not trouble the homeowner with any requests for private Wi-Fi access. This is useful because it does not incur cellular-data charges.

Comcast's broad scatterings of neighborhood and small-business Wi-Fi networks can function as a single network — when someone logs on to one such network, they're automatically logged on to all of them, wherever they go.

TRUST CONCERNS

For all its potential practicality, the public-hotspot feature built into residential Xfinity routers isn't being met with universal acclaim.

Some people have privacy and security concerns, even though Comcast insists the public and private Wi-Fi networks are entirely separate and shielded from each other. Others worry that the public network will affect the private network's performance. Comcast says this isn't so.

No amount of reassurance has stopped some from turning the public-hotspot feature off. That is what Anthony Domanico, a St. Paul-based technology journalist, did, partly because of performance concerns.

Ditto for Ehren Stemme, an information-technology worker who lives in St. Paul. He said he has data-privacy concerns, partly because his spouse works in the health industry and needs to be extra careful about data security.

Stemme also laments having little control over the public-hotspot feature, other than being able to turn it on and off.

And Stemme has trust issues. Of Comcast, he said he doesn't “trust their (customer-service) team to provide accurate info.”

But Boschulte, the Maple Grove accountant, came to understand and appreciate the public Wi-Fi feature after getting over his initial surprise.

“I am fine with it,” Boschulte said. “I think it is a great idea how to expand their service. I think it is a great way to make the Internet and Wi-Fi available to a large audience.”

PUBLIC VENUES

Xfinity public hotspots could someday proliferate to the point where tablet-toting customers could forgo pricey cellular-data plans and rely solely on Wi-Fi, Boschulte believes.

“You get access to the world without paying the extra bills for mobile and data plans,” he noted.

In addition to neighborhood and small-business Wi-Fi, there is a third prong to Comcast's public wireless strategy — extra-powerful Wi-Fi transmitters set up in major public venues, like transit stations, shopping malls and sports stadiums.

For instance, Comcast has been anointed the official Wi-Fi provider for the San Francisco 49ers and that team's new Levi's Stadium, now under construction in Santa Clara, Calif. The partnership was announced this month.

No Twin Cities public venues are blasting out this extra-powerful wireless access, which is able to accommodate many more simultaneous connections than typical Wi-Fi networks. But such public wireless networks are likely to start appearing in the metro area by later this year, the company has said.

Comcast also has seized on the coming Winter Olympics to promote its Wi-Fi capabilities. For the duration of the event, it said, its nonresidential hotspots will be available to everyone, not just its subscribers. Comcast owns NBC, which will be televising the games.

This, it hopes, will earn it the loyalty of legions after the Winter Olympics have faded into history.

COMPETITION

Comcast isn't the only company promoting the concept of Wi-Fi sharing, though it is perhaps the most ambitious and successful in the United States to date.

A variety of other technology companies are promoting similar wireless-sharing, via public Wi-Fi hotspots and other approaches, but are hampered somewhat at the moment because of smaller U.S. footprints.

Spain-based Fon (fon.com) is one such company. Hugely popular in European cities, such as Madrid and Paris, it distributes compact residential Wi-Fi routers that serve as public wireless hotspots, much as the Comcast variants do.

Fon's newest router, or “Fonera,” is available for $49 on the Fon home page or on Amazon.com.

Fon has tried to cultivate a U.S. following with limited success. It is making another run by partnering with major U.S. wireless carrier AT&T and its tens of thousands of hotspots in this country.

Other companies with variations on this public-hotspot theme include Karma (yourkarma.com) and France's Free Mobile (free.fr).
http://www.dailydemocrat.com/busines...c-wi-fi-comes?





A City Wired for Growth
Edward Wyatt

For thousands of years, Native Americans used the river banks here to cross a gap in the Appalachian Mountains, and trains sped through during the Civil War to connect the eastern and western parts of the Confederacy. In the 21st century, it is the Internet that passes through Chattanooga, and at lightning speed.

“Gig City,” as Chattanooga is sometimes called, has what city officials and analysts say was the first and fastest — and now one of the least expensive — high-speed Internet services in the United States. For less than $70 a month, consumers enjoy an ultrahigh-speed fiber-optic connection that transfers data at one gigabit per second. That is 50 times the average speed for homes in the rest of the country, and just as rapid as service in Hong Kong, which has the fastest Internet in the world.

It takes 33 seconds to download a two-hour, high-definition movie in Chattanooga, compared with 25 minutes for those with an average high-speed broadband connection in the rest of the country. Movie downloading, however, may be the network’s least important benefit.
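The article's figures can be sanity-checked with back-of-the-envelope arithmetic. Assuming a two-hour HD movie is roughly a 4 GB file (that file size is an assumption, not from the article), the times line up with the quoted 33 seconds and 25 minutes:

```python
# Back-of-the-envelope transfer times, ignoring protocol overhead.
# The ~4 GB movie size is an illustrative assumption.
def download_seconds(file_gigabytes, speed_megabits_per_sec):
    """Seconds to move a file of the given size at the given link speed."""
    file_megabits = file_gigabytes * 8 * 1000  # GB -> megabits (decimal units)
    return file_megabits / speed_megabits_per_sec

movie_gb = 4.0
gig_speed = 1000  # Chattanooga's 1 Gbps service, in Mbps
avg_speed = 20    # ~1/50th of a gigabit, per the article's "50 times" claim

print(round(download_seconds(movie_gb, gig_speed)))       # ~32 seconds
print(round(download_seconds(movie_gb, avg_speed) / 60))  # ~27 minutes
```

Both results land close to the article's 33-second and 25-minute figures; the small gaps come from the assumed file size and rounding.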

“It created a catalytic moment here,” said Sheldon Grizzle, the founder of the Company Lab, which helps start-ups refine their ideas and bring their products to market. “The Gig,” as the taxpayer-owned, fiber-optic network is known, “allowed us to attract capital and talent into this community that never would have been here otherwise.”

Since the fiber-optic network switched on four years ago, the signs of growth in Chattanooga are unmistakable. Former factory buildings on Main Street and Warehouse Row on Market Street have been converted to loft apartments, open-space offices, restaurants and shops. The city has welcomed a new population of computer programmers, entrepreneurs and investors. Lengthy sideburns and scruffy hipster beards — not the norm in eastern Tennessee — are de rigueur for the under-30 set.

“This is a small city that I had never heard of,” said Toni Gemayel, a Florida native who moved his software start-up, Banyan, from Tampa to Chattanooga because of the Internet speed. “It beat Seattle, New York, San Francisco in building the Gig. People here are thinking big.”

But so far, it is unclear statistically how much the superfast network has contributed to economic activity in Chattanooga over all. Although city officials said the Gig created about 1,000 jobs in the last three years, the Department of Labor reported that Chattanooga still had a net loss of 3,000 jobs in that period, mostly in government, construction and finance.

EPB, the city-owned utility formerly named Electric Power Board of Chattanooga, said that only about 3,640 residences, or 7.5 percent of its Internet-service subscribers, are signed up for the Gigabit service offered over the fiber-optic network. Roughly 55 businesses also subscribe. The rest of EPB’s customers subscribe to a (relatively) slower service offered on the network of 100 megabits per second, which is still faster than many other places in the country.

Some specialists say the low subscriber and employment numbers are not surprising or significant, at least in the short term. “The search for statistical validation of these projects is not going to turn up anything meaningful,” said Blair Levin, executive director of Gig.U, a high-speed Internet project that includes more than three dozen American research universities. Mr. Levin cited “Solow’s paradox,” the 1987 observation by Robert M. Solow, a recipient of the Nobel in economic science who wrote that “you can see the computer age everywhere but in the productivity statistics.”

Such is the case with many new technologies, Mr. Levin said. No one is going to design products that can run only on a one-gigabit-per-second network if no such networks exist, he said. But put a few in place, he added, and soon the supply of applications will drive a growing demand for the faster connections.

Chattanooga’s path to Gig City is part of a transformation that began long before most Americans knew the Internet existed. Named America’s most-polluted city in 1969 because of its largely unregulated base of heavy manufacturing, Chattanooga has in the last two decades cleaned its air, rebuilt its waterfront, added an aquarium and become a hub for the arts in eastern Tennessee. In more recent years, an aggressive high-tech economic development plan and an upgrade of the power grid by EPB moved Chattanooga toward the one-gigabit connection.

In 2009, a $111 million federal stimulus grant offered the opportunity to expedite construction of a long-planned fiber-optic network, said David Wade, chief operating officer for the power company. (EPB also had to borrow $219 million of the network’s $330 million cost.) Mr. Wade said it quickly became apparent that customers would be willing to pay for the one-gigabit connection offered over the network.

Chattanooga has been joined in recent years by a handful of other American cities that have experimented with municipally owned fiber-optic networks that offer the fastest Internet connections. Lafayette, La., and Bristol, Va., have also built gigabit networks. Google is building privately owned fiber systems in Kansas City, Kan.; Kansas City, Mo.; and Austin, Tex., and it recently bought a dormant fiber network in Provo, Utah.

The systems are the leading edge of a push for ever-faster Internet and telecommunications infrastructure in a country that badly lags much of the world in the speed and costs of Web connections. Telecommunications specialists say that if the United States does not keep its networks advancing with those in the rest of the world, innovation, business, education and a host of other pursuits could suffer.

Even so, few people, including many who support the systems, argue that everyone in the country now needs a one-gigabit home connection. Much of the public seems to agree. According to Federal Communications Commission statistics, of the households where service of at least 100 megabits per second was available (one-tenth as fast as a gigabit), only 0.12 percent subscribed at the end of 2012. In Chattanooga, one-third of the households and businesses that get electric power from EPB also subscribe to Internet service of at least 100 megabits.

But just as few people a decade ago thought there would be any need for one terabyte of data storage on a desktop computer (more than 200 million pages of text, or more than 200 movies), even the most prescient technology gurus have often underestimated the hunger for computer speed and memory.

Fiber-optic networks carry another benefit, which is the unlikelihood that a potentially faster network will come along soon. Fiber optics can transmit data at close to the speed of light, and EPB officials say the technology exists for their network to carry up to 80 connections of 10 gigabits per second at once.

Those who use Chattanooga’s one-gigabit connection are enthusiastic. Mr. Gemayel, the Florida native who moved Banyan here from Tampa, first passed through Chattanooga in 2012, when he heard about an entrepreneurial contest sponsored by The Company Lab with a $100,000 prize. Banyan, which was working on a way to share real-time editing in huge data files quickly among far-flung researchers, won the contest. Mr. Gemayel returned to Tampa with his check.

But once there he discovered that his low-bandwidth Internet connection was hampering the development of his business. By the beginning of 2013, he had moved to Chattanooga.

Other companies have become Gig-related successes. Quickcue, a company that developed a tablet-based guest-management system for restaurants, began here in 2011 and over the next two years attracted about $3 million in investments. In December, OpenTable, the online restaurant reservations pioneer, bought Quickcue for $11.5 million.

Big technology dreams do not always pan out, of course, and Chattanooga is familiar with failed experiments. The city spent millions of dollars in the last five years to build a citywide Wi-Fi network, known as the “wireless mesh,” intended for use by residents and city agencies. It sits largely unused, and its utility has largely been usurped by 4G wireless service.

Few people here would say that the Gig has even begun to be used to its fullest. “The potential will only be capped by our selfishness,” said Miller Welborn, a partner at the Lamp Post Group, the business incubator where Banyan shares office space with a dozen other start-ups. “The Gig is not fully useful to Chattanooga unless a hundred other cities are doing the same thing. To date, the best thing it’s done for us is it put us on the map.”

For all the optimism, many boosters are aware there are limits to how far the Gig can take the city, particularly as it waits for the rest of the country to catch up.

“We don’t need to be the next Silicon Valley,” Mayor Andy Berke said. “That’s not who we’re going to be, and we shouldn’t try to be that. But we are making our own place in the innovation economy.”
http://www.nytimes.com/2014/02/04/te...attanooga.html





Cable Companies Want to Block Cities from Building Fiber Networks. Here’s How the FCC Could Intervene.
Brian Fung

Google Fiber promises to bring high-speed Internet to the masses at low cost — but only in certain cities. So around the country, local governments have taken it upon themselves to build their own fiber-optic networks that can deliver the same capabilities. Last week, this effort was met in Kansas with a bill written by cable lobbyists who sought to ban cities from building municipal broadband projects. While the state cable association has since agreed to amend the bill in the face of public criticism, the incident is a reminder that public infrastructure projects can be especially fraught when it comes to Internet service.

But what so far has been a fight between states, cities and established commercial incumbents may soon become an issue for federal regulators. A recent court decision has given the Federal Communications Commission a green light to intervene in these situations, industry analysts say. While it's too early to tell whether the FCC intends to exercise this power, mayors would find a powerful ally in Washington if they could convince the FCC to intervene.

Kansas isn't the only place where cable companies have thrown up barriers to publicly funded fiber optic networks. Colorado, for instance, has a law on the books that requires cities to pass a referendum if they want to start building a municipal network. The cable industry has campaigned against such ballot measures there in the past. In Seattle, cable companies lobbied to defeat mayor Mike McGinn, who was an advocate for public fiber.

The cable industry argues that public financing of city-wide networks costs taxpayers extra for what is already available commercially. This can be true if a city levies a new tax to pay for the fiber cables, if not enough subscribers sign on to make the network cost-effective or if it's unable to foot the bill through other means. But some cities have also circumvented this problem by offering public bonds instead of passing new taxes — and to many fiber supporters, the benefits of insanely fast Internet are enough to outweigh the criticisms.

When cities decide to offer fiber-optic Internet themselves, they wind up challenging the interests of cable companies and other existing providers. Assuming the financing and construction goes over without a hitch, the fiber-optic service — which is capable of speeds up to 100 times the national average — effectively becomes a public utility. Customers who sign on with the city must switch away from their old company, and in some cases get a better deal out of it. For $70 a month, residents of Chattanooga, Tenn., get access to speeds of 1 Gbps (1,000 Mbps). That's way faster than even the highest speeds in many communities served by commercial providers.

Cable companies understandably don't like that. But their policy of trying to thwart city-led efforts at building broadband now risks running afoul of the FCC. In an ironic twist, the federal court decision striking down the FCC's net neutrality regulations may have given the agency just the power it needs to stop them.

Until now, the FCC's authority to regulate broadband has been somewhat ambiguous. In its ruling last month, the D.C. Circuit said the agency isn't allowed to write prescriptive prohibitions on regulating Internet traffic. But it also explained that the FCC can regulate broadband in other ways, under a section of the Telecommunications Act known as Section 706. This is a perspective even the dissenting Judge Laurence Silberman endorsed, despite his overall argument that Section 706 doesn't grant the FCC power to implement the net neutrality regulations at stake in the case.

"Section 706 is a grant of positive regulatory authority," Silberman wrote in his partial dissent. "The key words obviously are 'measures that promote competition in the local telecommunications market or other regulating methods that remove barriers to infrastructure investment.' Those are the words that grant actual authority."

In a footnote, Silberman went on to highlight municipal broadband projects explicitly, citing state laws (such as the one Kansas is now considering) as "a paradigmatic barrier to infrastructure investment" against which the FCC could invoke Section 706.

Even as industry watchers are trying to determine just how far the FCC can go with Section 706, its authority to act on city-level fiber-optic efforts seems clear.

"The Chairman’s message has been aggressively pro-competition," said Paul Gallant, a telecom analyst at Guggenheim Partners. "So I wouldn't be surprised to see him pick up the ball from Judge Silberman and consider preempting state laws that tell cities they can’t self-provide broadband.”

That said, don't expect the FCC to start intervening unilaterally in Kansas, Colorado or other states anytime soon. Politically, it'd be a hard sell. But in response to targeted, individual complaints brought by cities or fiber advocates? Given what we know about agency chairman Tom Wheeler, who's expressed a preference for the case-by-case approach on Internet policy, the FCC might be convinced to step in.

"If a municipality came to the FCC and petitioned it and said, 'The opinion said you guys can have a role in this, and we need help,' it would be really interesting to see then what the FCC's behavior would be," said Jeff Silva, an industry analyst at Medley Global Advisors.

That means the FCC could get drawn into the debate over municipal broadband — whether it wants to be or not.
http://www.washingtonpost.com/blogs/...uld-intervene/





Hearings on Community Broadband Services Bill Postponed
Bryan Lowry

Facing public backlash over a Senate bill that would outlaw community broadband services statewide, Sen. Julia Lynn, R-Olathe, announced on Monday the postponement of hearings set to take place this week.

Senate Bill 304 would prohibit cities and counties from building public broadband networks. The Commerce Committee, which Lynn chairs, was scheduled to have a hearing Tuesday, but Lynn released a statement that hearings have been postponed indefinitely.

“Based on the concerns I heard last week, I visited with industry representatives and they have agreed to spend some time gathering input before we move forward with a public hearing,” Lynn said in a statement.

“We’ll revisit the topic when some of these initial concerns have been addressed.”

Lynn elaborated while exiting a Senate Judiciary hearing. The senator said she has instructed “the parties” involved with the bill to address the public’s concerns. The bill was introduced by John Federico, a cable industry lobbyist.

“I’m just letting the parties work out their differences to make sure anything … that I decide to bring forward is ready for a hearing because the last thing I want to do is create confusion,” she said.

“I think there is some confusion out there about the bill, and I don’t want to waste anybody’s time.”

She said that some of the controversy stemmed from the public’s misunderstanding of the legislative process.

“They were making assumptions that it was kind of a done deal as presented,” Lynn said about phone calls and e-mails she received.

“I think because there was uncertainty about a couple of the provisions in the bill.”

One of the main fears of the bill’s opponents was that it would prevent other towns in the state from arranging partnerships like the one Kansas City has with Google. The company has provided the city with a high-speed network to public buildings in exchange for expedited permits and discounts, but no tax money was spent.

“I think what the intention is that we don’t use taxpayer money to compete against private industry,” Lynn said.

The senator said the bill will be tweaked to address the public’s concerns, but said there was no set timeline.

“If I’m not satisfied that it’s ready to go forward, then I’m not bringing it back,” Lynn said.
http://www.kansas.com/2014/02/03/326...broadband.html





Cable Co. Blames “Misinformation” for Failure of Municipal Internet Ban

Kansas cable lobby will try again, now facing opposition from Google.
Jon Brodkin

The legislation in Kansas that would have made it nearly impossible for cities and towns in that state to offer broadband service to residents was originally scheduled for debate in the Senate today.

That hearing ended up being canceled after public outcry forced the bill's author, the Kansas Cable Telecommunications Association (KCTA), to rethink its tactics.

But that doesn't mean the bill is going away forever. Cox, a member of the cable lobby group, blamed the early struggle on "misinformation" but said there will be "continued discussion."

"Cox Communications was prepared to participate in Kansas legislative hearings regarding Government Owned Networks," the company said in a statement sent to Ars. "With approximately 22 other states having some type of restriction on the use of taxpayer dollars for these kinds of facilities, we thought it a relevant topic worthy of our involvement given our significant investment in the communities we serve and our public-private partnerships. There was enough misinformation regarding the legislation that made it appropriate for the committee to defer action at this time. We look forward to a continued discussion with all parties on this issue."

The KCTA said yesterday that it requested the cancellation of today's Senate Commerce Committee hearing to "allow time to meet with the interested parties about the legislation." (As it turned out, a snowstorm canceled all of today's legislative sessions.)

The bill would have forbidden municipalities from providing "video, telecommunications, or broadband service" except in areas where nine out of 10 residents have no service at all. The bill also placed restrictions on government partnerships with broadband companies.

Google says private sector can’t solve broadband problem alone

This drew opposition from within Kansas and from a variety of tech companies and consortiums who sent a letter to state lawmakers. The letter was signed by Google, which chose Kansas City as the first site for its Google Fiber service. Signees also included Alcatel-Lucent, the American Public Power Association, the Atlantic Engineering Group, Calix, CTC Technology & Energy, the Fiber to the Home Council, the National Association of Telecommunications Officers and Advisors, OnTrac, the Telecommunications Industry Association, and the Utilities Telecom Council.

"We, the private-sector companies and trade associations listed below, urge you to oppose SB 304 because this bill will harm both the public and private sectors, stifle economic growth, prevent the creation or retention of thousands of jobs, hamper work force development, and diminish the quality of life in Kansas," the letter said. "The private sector alone cannot enable the United States to take full advantage of the opportunities that advanced communications networks can create in virtually every area of life. … SB 304 would prevent municipalities from working with private broadband providers, or developing themselves, if necessary, the advanced broadband infrastructure that will stimulate local businesses development, foster work force retraining, and boost employment in economically underachieving areas."

The KCTA denied that its bill is just an attempt to protect cable companies from competitors. “Let me be clear that this legislation was not introduced to prevent other private telecommunications providers from building or expanding their services in Kansas communities," KCTA President John Federico said in yesterday's announcement. "This bill was intended to provide safeguards to all telecommunication providers against government-subsidized competition."

Last week, Federico told Ars that the cable lobby intends to change how the bill defines "unserved areas" to make it a little less restrictive. But the lobby's belief that it shouldn't have to face competition from government-run networks remains as strong as ever.

“Taxpayer dollars are a scarce resource, and legitimate questions about municipal projects that compete with private providers should be addressed in communities where private Internet Service Providers have already invested risk capital to bring high-speed broadband, and other telecommunication services, to Kansas consumers," Federico also said this week.

The lobby will probably have at least some support in the legislature. Two Republican Kansas legislators are part of the leadership of the American Legislative Exchange Council, which says that taxpayer-funded broadband networks "could erode consumer choice by making markets less attractive to competition because of the government’s expanded role as a service provider."

The council's board of directors includes Sen. Susan Wagle, vice chair of the Kansas Senate's Commerce Committee, and Ray Merrick, speaker of the state House of Representatives.
http://arstechnica.com/tech-policy/2...-internet-ban/





F.C.C. Says It Will Double Spending on High-Speed Internet in Schools and Libraries
Edward Wyatt

The Federal Communications Commission will double the amount of money it devotes to adding high-speed Internet connections in schools and libraries over the next two years, in an effort to meet President Obama’s promise to provide broadband service for an estimated 20 million American students in 15,000 schools, officials said Saturday.

Financing for the new spending will come from restructuring the $2.4 billion E-Rate program, which provides money for “advanced telecommunications and information services” using the proceeds of fees paid by telecommunications users. The proportion that goes to broadband service in schools and libraries will increase to $2 billion a year from $1 billion.

Mr. Obama referred to the changes during his State of the Union address last week. The changes will not require any additional taxes or assessments, according to an F.C.C. official who spoke on the condition of anonymity because a formal announcement was being planned for this week.

The E-Rate program is part of the Universal Service Fund, which also provides money to connect rural areas and low-income people to phone and Internet service using money raised through fees on consumers’ phone bills. The commission’s chairman, Tom Wheeler, is expected to announce details of the plan on Wednesday at an event for Digital Learning Day, which promotes the use of technology in education.

Most of the redirected spending in 2014 will come from funds left over from previous years. Next year, much of the money will come from changes to the E-Rate program, including the elimination of programs that pay for outdated technologies, like paging services, dial-up Internet connections and email programs that are available free elsewhere.

The spending will be used to increase available broadband speeds and provide wireless networks in schools, which are increasingly in demand for students using tablets and laptop computers.

A 2010 survey conducted for the F.C.C. by Harris Interactive found that roughly half of schools receiving E-Rate funds connected to the Internet at speeds of three megabits per second or less — too slow to stream many video services. The commission wants to give all schools access to broadband connections of 100 megabits per second by 2015, and connections of up to one gigabit per second by the end of the decade. Another survey, by the American Library Association, found that 60 percent of libraries reported their speeds failed to meet their patrons’ needs some or most of the time.
http://www.nytimes.com/2014/02/02/us...pagewanted=all





FCC Chief Tells Sprint Chair He is Skeptical on T-Mobile Deal
Alina Selyukh and Sinead Carew

Federal Communications Commission Chairman Tom Wheeler expressed his skepticism about a potential merger between Sprint Corp and T-Mobile US Inc in a meeting with Sprint Chairman Masayoshi Son on Monday, according to an FCC official briefed on the matter.

Son, chief executive at Tokyo-based SoftBank Corp, which acquired Sprint last year, met with the top U.S. telecommunications regulator alongside Sprint Chief Executive Dan Hesse.

Wheeler told Son and Hesse he was highly skeptical of the potential bid by No. 3 wireless provider Sprint to acquire No. 4 rival T-Mobile, according to the FCC official, who was not present at the meeting.

Wheeler said he would keep an open mind about the potential transaction, according to the official, and generally echoed comments made last week by antitrust chief William Baer, who gave long odds to a regulatory approval of mergers between any two of the top four wireless phone companies.

Sprint has been trying to convince U.S. regulators that the prospect of more U.S. mobile industry consolidation should not be dismissed without a fair review, according to a person familiar with the situation.

Sprint's argument is that the current market isn't serving U.S. consumers well because the top two players, Verizon Communications Inc and AT&T Inc, are so dominant that sustainable competition from the other two players is very challenging, the person said.

A Sprint spokesman declined comment on Monday. The FCC does not comment on merger speculation. FCC chiefs routinely meet with industry executives in sessions that often cover a wide range of topics.

A SoftBank spokesman in Tokyo declined to comment. Separately, a person close to SoftBank said: "I'm not unduly surprised by the FCC chairman's skepticism. I feel it's a rather typical reaction."

Son's SoftBank has recently been in talks to acquire T-Mobile, itself controlled by Deutsche Telekom, sources have told Reuters, although the deal would face strong headwinds in getting regulators' approval at the Justice Department and the FCC.

Baer and Wheeler have both hailed the 2011 rejection of a merger between AT&T and T-Mobile as yielding a more competitive market that is better for consumers. Wheeler has routinely answered public questions about potential wireless mergers by saying that he is an "unabashed" defender of competition.

AT&T over the weekend announced a plan to cut prices on its large shared data plans in the latest sign that the rivalry between the top U.S. cellular players may be triggering increased discounting.

In Tokyo SoftBank shares rebounded after falling as much as 5.8 percent on Tuesday morning to 6,655 yen, their lowest in more than four months. At midday the stock was up 2.6 percent, defying a 2.6 percent drop in the benchmark Nikkei average.

(Reporting by Alina Selyukh in Washington and Sinead Carew in New York; Additional reporting by Maki Shiraki in Tokyo; Editing by Nick Zieminski, Cynthia Osterman and Kenneth Maxwell)
http://uk.reuters.com/article/2014/0...A1300D20140204





Democrats Introduce Open Internet Preservation Act To Restore Net Neutrality
Alex Wilhelm

Democrats in the House and Senate today introduced the Open Internet Preservation Act, a bill that would reinstate now-defunct net neutrality rules that were shot down last month.

Net neutrality, in its most basic form, is the idea that ISPs must treat all Internet data the same. Under those rules, ISPs may not act as gatekeepers, selectively speeding up or slowing down the information their customers request. Or, more simply, Comcast can’t decide that a site you want to load, or a video you want to watch, should be slowed, and content that it prefers, accelerated.

With last month’s striking down of the FCC’s net neutrality rules, the D.C. Circuit Court of Appeals has changed the landscape of the Internet.

Those in favor of net neutrality view the regulatory scheme as key to a free, open, and level playing field. Its antagonists decry it as government regulation of something, namely the Internet, which has worked just fine thus far, thank you.

The Preservation Act — full text here — is short and merely “restores” what was “vacated” by the court’s decision. So it would take us back to where we were in December.

According to the National Journal, the act has all but no chance of becoming law because “Republicans are almost entirely united in opposition to the Internet rules, meaning the bill is unlikely to ever receive a vote in the GOP-controlled House.” The bill has been introduced in both the House and Senate, but passing merely half the bicameral Congress is roughly as useful as trying to reform your local DMV.

Other voices were quick to criticize its thrust. The American Enterprise Institute (AEI) dismissed it as “counter-productive,” saying that “insisting on stagnation in the essential infrastructure of the Internet is no way to promote innovation and the public interest.”

The gist of the AEI’s argument, if I can summarize, is that if ISPs are prevented from charging fees to content providers — think Netflix, et al. — it stunts their potential revenues and therefore investment into further infrastructure. This argument is somewhat circular, as the Internet has seen rapid and massive investment in its life while existing under the conditions that the FCC wanted to codify as the regulatory standard.

The bill’s sponsors see the world slightly differently than the AEI, focusing less on theoretical lost rents from a third-party non-participant to the customer-ISP relationship, and instead hitting on consumer protection and choice:

“The Internet is an engine of economic growth because it has always been an open platform for competition and innovation. Our bill very simply ensures that consumers can continue to access the content and applications of their choosing online.” Rep. Waxman

It has always been my view that forcing ISPs to treat all content equally is the correct way to ensure that all voices — the new, the established, the next, and the marginalized — have space. If YouTube had been forced to pay extra carrier fees early in its life, or Netflix for that matter, would they have become the giants they are today? If we increase the marginal expense of reaching consumers in an arbitrary fashion by granting ISPs that power, we cede the advancement of technology to demonstrably conservative actors.

And there is a simple question of who decides. If ISPs can censor and slow at will, what stops activist networks from pressuring those companies to halt whatever they do not approve of? If a religious group called Comcast complicit in hate speech for delivering requested content that the group found blasphemous, what could Comcast do? We’re removing their shield of “we deliver all to all equally,” which could harm ISPs down the road.

That the parties exposing themselves to that sort of trouble are the precise parties calling for the end, now I suppose the continuance of the end, of net neutrality is ironic, and sad.
http://techcrunch.com/2014/02/03/dem...et-neutrality/





Verizon Using Recent Net Neutrality Victory to Wage War Against Netflix
David Raphael

I usually don’t post articles about current affairs. However, a recent series of events has inspired me to write about this.

Towards the end of January, the president of our company – iScan Online, Inc., was complaining that our service was experiencing major slowdowns. I investigated the issue, but I couldn’t find anything wrong with our production environment. We were stumped.

One evening I also noticed a slowdown while using our service from my house. I realized that the one thing in common between me and our president was that we both had FiOS internet service from Verizon.

Since we host all of our infrastructure on Amazon’s AWS – I decided to do a little test – I grabbed a URL from AWS S3 and loaded it.

40kB/s.

WTF.

I also noticed that our Netflix streaming quality is awful compared to just a few weeks ago.

Next, I remoted into our office – about a mile away from my house. I tested the same link –

5000kB/s.

WTF.

So I contacted Verizon support over their live chat.

Verizon had me do a speedtest.

75Mb/s.

He says “You have excellent Bandwidth – is there anything else I can help you with?”

I replied – “Yes. Why are these files slow…”

So he proceeded to walk me through various troubleshooting:
“reboot your router…”
“make sure your system has latest updates…”
“change your wifi channel”

After about 30 minutes of this, I grew impatient. I explained to him that something on their side was limiting the speed. He remoted into my system with a screen-sharing tool, and I showed him, through a remote screen, the same test running over the connection at the office. He kept saying that bandwidth is different for different locations, etc…

Frankly, I was surprised he admitted to this. I’ve since tested this almost every day for the last couple of weeks. During the day – the bandwidth is normal to AWS. However, after 4pm or so – things get slow.

In my personal opinion, this is Verizon waging war against Netflix. Unfortunately, a lot of infrastructure is hosted on AWS. That means a lot of services are going to be impacted by this.
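A test like the one described above is easy to repeat on a schedule. Here is a minimal sketch of such a throughput probe; the measurement function is my own illustration, not the author's actual tooling, and any URL you pass it (such as an S3 object) is a placeholder.

```python
# A minimal throughput probe: download part of a file and report
# the observed rate in kB/s. Run it at different times of day to
# compare daytime vs. evening bandwidth to the same host.
import time
import urllib.request

def throughput_kBps(url, chunk=64 * 1024, max_bytes=5 * 1024 * 1024):
    """Download up to max_bytes from url and return observed kB/s."""
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while total < max_bytes:
            data = resp.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.monotonic() - start
    return (total / 1024) / elapsed if elapsed > 0 else 0.0
```

Calling `throughput_kBps` hourly against the same S3 object and logging the results would reproduce the day-versus-evening pattern described in the post.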

PS> a number of folks have questioned the expertise of the support individual. I completely understand. I’m not a networking expert, but I did want to share 2 more pieces of data that I think are significant:

Traceroute from Residential Side:

Tracing route to iscanonline.com [23.21.158.115]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms 192.168.1.1
2 7 ms 7 ms 8 ms L100.DLLSTX-VFTTP-65.verizon-gni.net [173.74.57.1]
3 10 ms 6 ms 9 ms G0-5-2-0.DLLSTX-LCR-21.verizon-gni.net [130.81.190.204]
4 16 ms 9 ms 10 ms so-5-0-0-0.DFW9-BB-RTR1.verizon-gni.net [130.81.199.34]
5 10 ms 9 ms 9 ms 0.xe-3-3-0.BR2.DFW13.ALTER.NET [152.63.100.5]
6 9 ms 10 ms 9 ms 204.255.168.158
7 10 ms 9 ms 10 ms ae-1.r08.dllstx09.us.bb.gin.ntt.net [129.250.3.27]


Traceroute from Business line (1 mile away):

traceroute to iscanonline.com (23.21.158.115), 64 hops max, 52 byte packets
1 192.168.1.1 (192.168.1.1) 18.036 ms 1.326 ms 2.318 ms
2 l100.dllstx-vfttp-93.verizon-gni.net (71.244.30.1) 5.870 ms 5.211 ms 5.193 ms
3 g0-5-0-2.dllstx-lcr-21.verizon-gni.net (130.81.138.12) 7.400 ms 67.679 ms 10.605 ms
4 so-5-0-0-0.dfw9-bb-rtr1.verizon-gni.net (130.81.199.34) 12.062 ms 6.652 ms 17.799 ms
5 0.xe-3-3-0.br2.dfw13.alter.net (152.63.100.5) 7.207 ms 7.858 ms 9.616 ms
6 204.255.168.158 (204.255.168.158) 7.435 ms 7.256 ms 10.366 ms
7 ae-1.r08.dllstx09.us.bb.gin.ntt.net (129.250.3.27) 7.365 ms 10.160 ms 9.083 ms
http://davesblog.com/blog/2014/02/05...ainst-netflix/





Iridium's Satellite Hotspot Will Get You Online Nearly Anywhere on Earth
Jon Fingas

Globalstar's Sat-Fi won't be the only game in town for satellite hotspots. Iridium has unveiled the Iridium Go, a hotspot that lets up to five WiFi-equipped devices hop on the internet, send texts and make phone calls from just about anywhere on the planet. It's built for outdoor adventurers with both a rugged design and an SOS mode that gets in touch with emergency services. However, software may be the Go's real ace in the hole; while it will ship with official Android and iOS apps, there's already a developer kit that lets third-party software take advantage of the satellite link on any platform. Iridium plans to ship the hotspot in the first half of the year. The firm isn't discussing exact pricing at this point, but it promises that Go will represent its "lowest cost" offering to date at below $800 -- for some people, it will be cheap enough to come along on that big summer hiking trip.
http://www.engadget.com/2014/02/04/iridium-go/





U.S. Mobile Data Traffic to Jump Nearly Eight-Fold By 2018: Cisco
Alina Selyukh

The volume of data crossing U.S. mobile networks will grow almost eight-fold by 2018, and demand for Internet-connected devices will also skyrocket, according to a report released on Wednesday that poses questions about U.S. spectrum policy.

U.S. consumers will download and upload more data on their smartphones in 2018 than they did on their laptops in 2013, according to a forecast by Cisco Systems Inc. Americans will continue to lead the world as earlier and faster converts to new smart devices and networks, the Cisco report said.

"It's more people, more connections, faster speeds on the networks and then more rich content, which in this case is video, video, video," Robert Pepper, Cisco's vice president for global technology policy, told Reuters.

Cisco, one of the leading makers of networking equipment, studies the use and speed of devices, connections and data for an annual forecast of mobile data traffic trends.

U.S. wireless networks will continue to experience a steep increase in so-called machine-to-machine communications, as Americans seek the convenience of devices that talk to each other and the Internet, like remotely operated thermostats or smart anti-theft sensors, according to Cisco's forecast.

Internet-linked devices will keep spreading at a fast clip, with some 271 million connections between gadgets and the Internet forecast for 2018 - an eight-fold jump from 35 million in 2013 that is driven by the predicted boom in wearable devices like activity-tracking wristband Fitbit, Pepper said.

By 2018, Cisco predicts that U.S. mobile data traffic will reach 2.7 exabytes a month - equal to the amount of data stored on some 675 million DVDs. In 2013, less than half an exabyte of data crossed U.S. networks on average per month.
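Cisco's DVD equivalence checks out arithmetically (my calculation, not the article's): 2.7 exabytes spread across 675 million discs works out to about 4 GB per disc, close to a single-layer DVD's 4.7 GB capacity.

```python
# Sanity-check the article's equivalence:
# 2.7 exabytes/month vs. ~675 million DVDs.
EXABYTE = 10**18                 # bytes, decimal definition
monthly_traffic = 2.7 * EXABYTE  # Cisco's 2018 forecast
dvds = 675_000_000               # DVD count cited in the article

gb_per_dvd = monthly_traffic / dvds / 10**9
print(gb_per_dvd)                # about 4.0 GB per disc
```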

The findings contribute to the growing concerns in the telecommunications industry that demand for data will soon far exceed the networks' capacity, and connection speeds will slow.

Though some have argued that technological advancements may prevent the crisis, wireless companies say they need more airwaves to evade the spectrum crunch.

"If we don't add more spectrum in the long term, what it means for cellular networks is congestion, particularly in the peak hours and particularly in urban areas," said Mary Brown, Cisco's director of government affairs.

The Federal Communications Commission is stepping up its work to reshuffle ownership of airwaves, including efforts to clear large slices of frequencies controlled by government agencies for use by private companies and consumers.

Cisco also forecast that U.S. carriers will increasingly rely on WiFi connections to automatically divert data traffic, nearly two-thirds of it by 2018.

"Even as networks get more and more powerful, they're certainly continuing to add to the amount of traffic that's traveling over both cellular and WiFi networks," Brown said.

"We're going to need more than just technological improvements to satisfy those demand curves. We're going to need more spectrum."

(Editing by Jan Paschal)
http://www.reuters.com/article/2014/...A140VY20140205





'Kill Switch' for Stolen Smartphones? It's in Proposed California Bill
Terry Collins

Officials in California on Friday were set to outline proposed state legislation requiring smartphones and other mobile devices to have a "kill switch" that would render them inoperable if lost or stolen.

State Sen. Mark Leno, San Francisco District Attorney George Gascon and other officials scheduled a news conference about the measure to require any mobile devices sold in or shipped to California to have built-in anti-theft devices.

Leno and Gascon say the measure, which Leno plans to introduce this spring, could deter thieves from stealing smartphones. They believe the bill would be the first of its kind in the United States.

A San Francisco Democrat, Leno joins Gascon, New York Attorney General Eric Schneiderman and other law enforcement officials who have been demanding that manufacturers create kill switches to combat surging smartphone theft across the country.

"With robberies of smartphones reaching an all-time high, California cannot continue to stand by when a solution to the problem is readily available," Leno said in a statement.

The CTIA-The Wireless Association, a trade group for wireless providers, says a permanent kill switch has serious risks, including potential vulnerability to hackers who could disable mobile devices and lock out not only individuals' phones but also phones used by entities such as the Department of Defense, Homeland Security and law enforcement agencies.

The association has been working with the Federal Communications Commission, law enforcement agencies and elected officials on a national stolen phone database, launched in November, intended to eliminate the aftermarket for stolen devices.

"These 3G and 4G/LTE databases, which blacklist stolen phones and prevent them from being reactivated, are part of the solution," Michael Altschul, CTIA's senior vice president and general counsel, said in a statement. "Yet we need more international carriers and countries to participate to help remove the aftermarket abroad for these trafficked devices."

Almost 1 in 3 U.S. robberies involve phone theft, according to the FCC. Lost and stolen mobile devices — mostly smartphones — cost consumers more than $30 billion in 2012, according to an FCC study.

In San Francisco alone, more than 50 percent of all robberies involve the theft of a mobile device, the San Francisco District Attorney's office said. Across the bay in Oakland, such thefts amount to about 75 percent of robberies, authorities say.

"This legislation will require the industry to stop debating the possibility of implementing existing technological theft solutions, and begin embracing the inevitability," Gascon said in a statement. "The wireless industry must take action to end the victimization of its customers."

Last year Samsung Electronics, the world's largest mobile phone manufacturer, proposed installing a kill switch in its devices. But the company told Gascon's office the biggest U.S. carriers rejected the idea.

Samsung has said it would continue to work with Gascon, other officials and its wireless carrier partners toward a common goal of stopping smartphone theft.

Apple, the maker of the popular iPhone, said its "Activation Lock," as part of its iOS 7 software released in the fall, is designed to prevent thieves from turning off the Find My iPhone application, which allows owners to track their phone on a map, remotely lock the device and delete its data.

In December, Gascon praised Apple for its efforts, but said "it is still too early to tell how effective their solution will be."

Gascon and Schneiderman have given manufacturers a June 2014 deadline to come up with solutions to curb smartphone theft.
http://www.fresnobee.com/2014/02/06/...-proposed.html





Hackers Can Use Snapchat to Disable iPhones, Researcher Says
Salvador Rodriguez

A cyber security researcher has discovered a vulnerability within the Snapchat mobile app that makes it possible for hackers to launch a denial-of-service attack that temporarily freezes a user's iPhone.

Jaime Sanchez, who works as a cyber-security consultant for Telefonica, a major telecommunications company in Spain, said he and another researcher found a weakness in Snapchat’s system that allows hackers to send thousands of messages to individual users in a matter of seconds. Sanchez said he and the fellow researcher discovered the glitch on their own time.

Flooding one user with so many messages can clog the account to the point that the Snapchat app causes the entire device to freeze and ultimately crash, or forces the user to perform a hard reset.

Snapchat is a popular mobile app for iPhone and Android devices that allows users to send each other photo and video messages that disappear a few seconds after they are opened by their recipients.

Every time a user attempts to send a message through Snapchat, a token, which is a code made up of letters and numbers, is generated to verify their identity. Sanchez, who wrote about his security findings on seguridadofensiva.com (in Spanish), said a flaw within Snapchat’s system allows hackers to reuse old tokens to send new messages.
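The standard server-side mitigation for this class of flaw is to make tokens single-use and short-lived, so a captured token cannot be replayed to pump out thousands of messages. A minimal sketch of that idea, with hypothetical names and expiry values (Snapchat's actual token scheme is not public):

```python
import secrets
import time

class TokenService:
    """Issues single-use, expiring tokens; replayed tokens are rejected."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._issued = {}  # token -> (user, issue_time)

    def issue(self, user):
        token = secrets.token_hex(16)
        self._issued[token] = (user, time.time())
        return token

    def redeem(self, token, user):
        # pop() makes the token single-use: a second redeem finds nothing
        entry = self._issued.pop(token, None)
        if entry is None:
            return False
        owner, issued_at = entry
        return owner == user and (time.time() - issued_at) < self.ttl

svc = TokenService()
t = svc.issue("alice")
assert svc.redeem(t, "alice") is True   # first use succeeds
assert svc.redeem(t, "alice") is False  # replaying the same token fails
```

Under a design like this, the attack Sanchez describes, replaying old tokens to authenticate a flood of messages, would fail at the second use of any token.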


By reusing old tokens, hackers can send massive amounts of messages using powerful computers. This method could be used by spammers to send messages in mass quantities to numerous users, or it could be used to launch a cyber attack on specific individuals, he said.

Sanchez demonstrated how this works by launching a Snapchat denial-of-service attack on my account. He sent my account 1,000 messages within five seconds, causing my device to freeze until it finally shut down and restarted itself. (See the video above.)

Launching a denial-of-service attack on Android devices doesn’t cause those smartphones to crash, but it does slow their speed. It also makes it impossible to use the app until the attack has finished.

Sanchez said he has not contacted Snapchat about the vulnerability because he claims the Los Angeles startup has no respect for the cyber security research community.

He says Snapchat earned that reputation by ignoring advice in August and on Christmas Eve from Gibson Security, a security group that predicted a flaw within the app could be used to expose user data. On New Year’s Eve, another group exploited that vulnerability and exposed the user names and phone numbers of nearly 5 million Snapchat users.

“They warned Snapchat about issues -- about the possible dump of database -- and Snapchat didn't care,” he said.

The Times asked Snapchat if it knew of the vulnerability claimed by Sanchez. Snapchat said it was not aware of the problem.

“We are interested in learning more and can be contacted at security@snapchat.com," a Snapchat spokeswoman wrote in an email reply.
http://www.latimes.com/business/tech...,3127301.story





NSA Collects 20% or Less of U.S. Call Data

Program doesn't cover records for most cellphones
Siobhan Gorman

The National Security Agency's collection of phone data, at the center of the controversy over U.S. surveillance operations, gathers information from about 20% or less of all U.S. calls—much less than previously thought, according to people familiar with the NSA program.


The program had been described as collecting records on almost every phone call placed in the U.S. But, in fact, it doesn't collect records for most cellphones, the fastest-growing sector in telephony and an area where the agency has struggled to keep pace, the people said.

The dwindling coverage suggests the NSA's program is less pervasive than widely believed—and also less useful.

"Landlines are going away, and new providers are entering the field," said one person familiar with the program. "It's hard to keep up."

The agency's legal orders for data from U.S. phone companies don't cover most cellphone records, a gap the NSA has been trying to address for years. That effort has been slowed by the NSA's need to fix a host of problems that it uncovered in the program and reported to the U.S. court that oversees NSA surveillance in 2009, people familiar with the matter say.

Moreover, the NSA has been stymied by how to remove location data—which it isn't allowed to collect without getting additional court approval—from U.S. cellphone records collected in bulk, a U.S. official said.

The exact details of which phone data are collected by the NSA couldn't be fully determined. NSA spokeswoman Vanee Vines said, "While we are not going to discuss specific intelligence collection methods, we are always evaluating our activities to ensure they are keeping pace with changes in technology."

Former officials have said the three companies served with the court orders are AT&T Inc., Verizon Communications Inc. and Sprint Corp. Verizon's order applies only to its landline business, Verizon Business Network Services Inc., and not to its separate cellphone business. Sprint was recently bought by a Japanese conglomerate, and it is unclear if that deal affected how much data is provided under its court order.

Further, these companies don't end up providing all of their data, as reflected in a little-noticed line in a report written by a White House-commissioned NSA review panel. The report said that data collected by NSA are "only a portion of the records of only a few telephone service providers."

The NSA's collection of phone data has become one of the most controversial aspects of the agency's surveillance operations, because it picks up so much data on innocent Americans. Details of the program, along with many others, came to light among the documents leaked by former NSA contractor Edward Snowden last year.

Critics of the phone-data program have regularly cited its comprehensive collection as a major invasion of privacy. Rep. Justin Amash (R., Mich.), who led a charge against the program last year, has described it as "collecting the phone records of every single person in the United States, regardless of whether you're under any suspicion."

A spokesman for Mr. Amash, Will Adams, said there weren't "second-class privacy rights" for people who use landlines. He added that he assumed terrorists would be more inclined to use cellphones than landlines, and not collecting all the phone data "undercuts" the argument for why it is needed at all.

As recently as this week, the top Democrat on the House Judiciary Committee, Rep. John Conyers (D., Mich.), called for the end of the phone-data program, which he said collected "records on virtually every phone call placed in the United States."

On Friday, Mr. Conyers said, "Our objection is to bulk collection. Bulk collection is inconsistent with the statute. It is inconsistent with the Constitution and inconsistent with our national values. The idea that the NSA has fallen behind in the project, and that their stated intent is to catch up, is of no comfort to us."

Intelligence officials have defended the mass nature of the program by contending they need the "whole haystack" of phone calls before they can analyze connections to a particular lead.

Director of National Intelligence James Clapper has also coined a second argument for the value of the program, saying it has a "peace-of-mind metric" and can be used to "rule out" domestic links to terror plots. A spokesman for Mr. Clapper didn't respond to a request for comment.

"The one question that has not been adequately answered is whether this program is effective," said Nancy Libin, a former chief privacy officer at the Justice Department. "You don't want to intrude on people's privacy and undermine public trust if it's not going to do any good." Ms. Libin added that she didn't think the program has proved its value.

President Barack Obama announced plans last month to move the phone data out of federal control. He also said a judicial order would be required to search the data, a requirement that was instituted on Thursday.

Mr. Obama's top advisers must report by March 28 on options to rework the program. The two most obvious alternatives—holding the data at the phone companies or at a third party—pose operational and privacy problems. One U.S. official said that expanding the program to cover more phone records could prove more difficult under a restructured version of the program.

The reduced scope of the program, which will only increase over time as cellphone use increases, could reshape the argument about its future.

Gen. Keith Alexander, director of the NSA, has said such a program, if it existed at the time, would have uncovered the 9/11 plot. It is unclear that the 20% coverage now could provide the same type of tripwire for a terror plot. A spokeswoman for Gen. Alexander didn't respond to a request for comment.

The disclosure of the lower collection rate undermines intelligence agencies' justification for the program, said Steve Vladeck, a constitutional law professor at American University.

"The whole point of the program is that it only works if they have all of the data," he said. "It may not be as alarming as collecting all the data, but it absolutely destroys the analytical justification for having any of it."

But the more limited scope didn't provide much encouragement to critics of the program.

"I don't find this revelation very reassuring," said Jameel Jaffer, deputy legal director of the American Civil Liberties Union. "To accept their legal reasoning is to accept that they will eventually collect everything, even if they're not doing so already. They're arguing that they have the right to collect it all."

A key difficulty for the NSA's efforts to keep pace has been the technical challenge of separating location data from cellphone calling records. The NSA has an agreement with the secret Foreign Intelligence Surveillance Court that it won't collect location data from phones without getting court approval.

The NSA in 2010 and 2011 tested its ability to address how to handle location information that was collected along with phone data. Intelligence officials have acknowledged that effort, but said it wasn't pursued because it wasn't found to be of value.

Gen. Alexander has left open the potential of collecting the data in the future, telling Congress in October that "this may be something that is a future requirement for the country, but it is not right now." The NSA promised to inform Congress before it collects location data.

Several people familiar with the program maintained it can still be useful even while covering 20% or less of U.S. calls. That figure still represents many millions of calls each day, and billions each year. The records are kept for five years.

"If you want to cover the seam [between foreign and domestic plotting], it's better than nothing at all," said the person familiar with the phone-data program.

—Gautham Nagesh and Devlin Barrett contributed to this article.
http://online.wsj.com/news/article_e...MDAwNzEwNDcyWj





FISA Court Agrees To Changes That Limit NSA's Ability To Query Phone Records
Mike Masnick

While we were mostly disappointed by President Obama's speech concerning his plans for reforming surveillance efforts, there were a few significant proposals, the biggest being a reduction in how far analysts can chain out from a target, from "3 hops" down to "2 hops." That might not sound like much, but it is a substantial limitation when you dig into the math. Furthermore, he said that a court should review each request to query the phone records database. He left open a sizable loophole, saying that judicial review could be skipped in a "true emergency," but it's still something.

In response, the Justice Department actually went to the FISA Court and filed a motion to revise the current order approving the telephone records collection (under Section 215 of the PATRIOT Act, sometimes called the "bulk metadata" program), to change it to put in place these restrictions. The FISA Court has now approved that request, and will release a (possibly redacted) version of the order within the next week and a half or so.

This is a small change, but it is still a meaningful change that creates both more oversight and greater limits on how this data can be used. It's a small step in the right direction.
http://www.techdirt.com/articles/201...-records.shtml





Cryptography Breakthrough Could Make Software Unhackable
Erica Klarreich

As a graduate student at the Massachusetts Institute of Technology in 1996, Amit Sahai was fascinated by the strange notion of a “zero-knowledge” proof, a type of mathematical protocol for convincing someone that something is true without revealing any details of why it is true. As Sahai mulled over this counterintuitive concept, it led him to consider an even more daring notion: What if it were possible to mask the inner workings not just of a proof, but of a computer program, so that people could use the program without being able to figure out how it worked?

The idea of “obfuscating” a program had been around for decades, but no one had ever developed a rigorous mathematical framework for the concept, let alone created an unassailable obfuscation scheme. Over the years, commercial software companies have engineered various techniques for garbling a computer program so that it will be harder to understand while still performing the same function. But hackers have defeated every attempt. At best, these commercial obfuscators offer a “speed bump,” said Sahai, now a computer science professor at the University of California, Los Angeles. “An attacker might need a few days to unlock the secrets hidden in your software, instead of a few minutes.”

Secure program obfuscation would be useful for many applications, such as protecting software patches, obscuring the workings of the chips that read encrypted DVDs, or encrypting the software controlling military drones. More futuristically, it would allow people to create autonomous virtual agents that they could send out into the computing “cloud” to act on their behalf. If, for example, you were heading to a remote cabin in the woods for a vacation, you could create and then obfuscate a computer program that would inform your boss about emails you received from an important client, or alert your sister if your bank balance dropped too low. Your passwords and other secrets inside the program would be safe.

“You could send that agent into the computing wild, including onto untrusted computers,” Sahai said. “It could be captured by the enemy, interrogated, and disassembled, but it couldn’t be forced to reveal your secrets.”

As Sahai pondered program obfuscation, however, he and several colleagues quickly realized that its potential far surpassed any specific applications. If a program obfuscator could be created, it could solve many of the problems that have driven cryptography for the past 40 years — problems about how to conduct secure interactions with people at, say, the other end of an Internet connection, whom you may not know or trust.

“A program obfuscator would be a powerful tool for finding plausible constructions for just about any cryptographic task you could conceive of,” said Yuval Ishai, of the Technion in Haifa, Israel.

Precisely because of obfuscation’s power, many computer scientists, including Sahai and his colleagues, thought it was impossible. “We were convinced it was too powerful to exist,” he said. Their earliest research findings seemed to confirm this, showing that the most natural form of obfuscation is indeed impossible to achieve for all programs.

Then, on July 20, 2013, Sahai and five co-authors posted a paper on the Cryptology ePrint Archive demonstrating a candidate protocol for a kind of obfuscation known as “indistinguishability obfuscation.” Two days later, Sahai and one of his co-authors, Brent Waters, of the University of Texas, Austin, posted a second paper that suggested, together with the first paper, that this somewhat arcane form of obfuscation may possess much of the power cryptographers have dreamed of.

“This is the first serious positive result” when it comes to trying to find a universal obfuscator, said Boaz Barak, of Microsoft Research in Cambridge, Mass. “The cryptography community is very excited.” In the six months since the original paper was posted, more papers have appeared on the ePrint archive with “obfuscation” in the title than in the previous 17 years.

However, the new obfuscation scheme is far from ready for commercial applications. The technique turns short, simple programs into giant, unwieldy albatrosses. And the scheme’s security rests on a new mathematical approach that has not yet been thoroughly vetted by the cryptography community. It has, however, already withstood the first attempts to break it.

Researchers are hailing the new work as a watershed moment for cryptography. For many cryptographers, the conversation has shifted from whether obfuscation is possible to how to achieve it.

“Six or seven years ago, you could have looked at this question and wondered if we’ll ever know the answer,” said Leonard Schulman, of the California Institute of Technology in Pasadena. “The fact that there’s now a plausible construction is huge.”

Too Powerful to Exist

When Sahai started thinking about obfuscation 17 years ago, the first task was simply to define it. After all, users can always learn something about a garbled version of a program simply by feeding it inputs and seeing what comes out.

The most natural, and also the strongest, definition was the idea of a “black box” obfuscator, which would jumble a program so thoroughly that a person with the best available computational resources could figure out nothing at all about it, except for what might be gleaned from inputs and outputs. You could not figure out the value of a password hidden inside the software, unless that password was one of the program’s outputs, nor could you reassemble parts of the program to compute anything meaningful other than what the program was originally designed to compute.

A black box obfuscator, if it existed, would be immensely powerful, providing instant solutions to many cryptography problems that took decades to figure out or in some cases remain unsolved. Take, for example, public key encryption, whose development in the 1970s paved the way for Internet commerce. Prior to its creation, two people who wanted to communicate secretly had to meet in advance to choose an encryption scheme and share a secret key for encoding and decoding messages. Public key encryption allows you to announce a key to the entire world that permits people you’ve never met to send you messages that only you can decrypt. The innovation so revolutionized cryptography that its early developers have been recognized with one award after another.

But if you have a black box obfuscator, creating a public key encryption protocol becomes a simple matter of choosing your favorite secret-key encryption scheme, expressing its workings as a computer program, obfuscating the program, and making the obfuscated version widely available. Anyone can then use it to encrypt a message to send to you, but no one can tease the decryption key out of the obfuscated software.
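That reduction can be sketched in miniature. The toy below uses a Python closure as a stand-in for an ideal black box obfuscator (a real closure can be inspected, which is exactly what true obfuscation would prevent) and a hash-based stream cipher as the secret-key scheme; nothing here is secure, it only illustrates the shape of the construction:

```python
import hashlib
import secrets

def black_box_obfuscate(program):
    # Stand-in only: an ideal obfuscator would return a program whose
    # internals (including the embedded key) cannot be recovered.
    return program

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy PRF: derive a per-message keystream from key and nonce.
    return hashlib.sha256(key + nonce).digest()[:n]

def make_keypair():
    key = secrets.token_bytes(16)  # secret key, baked into the program

    def encrypt(msg: bytes):
        nonce = secrets.token_bytes(16)  # fresh randomness per call
        ct = bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))
        return nonce, ct

    def decrypt(nonce: bytes, ct: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

    # Publish the obfuscated encryptor; keep decrypt (and the key) private.
    return black_box_obfuscate(encrypt), decrypt

public_encrypt, decrypt = make_keypair()
nonce, ct = public_encrypt(b"hello")   # anyone holding the public program
assert decrypt(nonce, ct) == b"hello"  # only the key holder can decrypt
```

Because the published program only ever outputs ciphertexts under fresh internal nonces, black-box access lets strangers encrypt to you without learning how to decrypt, which is precisely the public-key property.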

Similarly, a black box obfuscator would provide a way to instantly convert any private cryptography scheme to a public one that could be performed over the Internet by strangers. In a sense, obfuscation is the key to all of cryptography.

“Modern cryptography is about the transition from private to public,” Sahai said. “Obfuscation gives you a remarkable ability to move between these two worlds that, for decades, we thought of as fundamentally different.”

The power of universal black box obfuscation seemed too good to be true, and it was. In 2001, Sahai, Barak and several co-authors showed that it is impossible. Some programs, the researchers demonstrated, are like people who insist on sharing their most private moments on Twitter or Facebook — they are so determined to reveal their secrets that no obfuscator can hide them.

Still, Sahai couldn’t stop thinking about the problem. The computer programs the team had devised, which spilled their guts so insistently, were contrived objects unlike any real-world program. Might some weaker notion than black box obfuscation protect the secrets of programs that hadn’t been specifically constructed to resist obfuscation? And if so, just how powerful would such an idea be?

Jigsaw Puzzle Programs

Sahai, Barak and their colleagues had put forward one definition of a weaker kind of obfuscation in their 2001 paper, a rather esoteric concept called indistinguishability obfuscation. A program-garbling procedure qualifies as an indistinguishability obfuscator if, whenever two programs that do exactly the same thing pass through the obfuscator, no one is able to tell which garbled program came from which original.

There’s no obvious reason why this concept should be particularly useful. After all, even if no one can distinguish the sources of the two garbled programs, it might still be possible to glean important secrets — a decryption key, or classified instructions — from looking at the garbled software.

“It’s a very weak notion of obfuscation,” said Craig Gentry, of the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y.


But in 2007, Shafi Goldwasser of MIT and Guy Rothblum of Microsoft Research Silicon Valley in Mountain View, Calif., showed that an indistinguishability obfuscator, if it could be built, would be the best possible obfuscator. The idea is that if some other obfuscator were the best, you could use it to garble the program and then put both the original program and the garbled version through the indistinguishability obfuscator for an additional layer of distortion. Someone looking at the resulting two programs wouldn’t be able to tell which one came from the original program, meaning that the indistinguishability obfuscator was at least as good at hiding the program’s secrets as that other, “best” obfuscator.

Goldwasser and Rothblum’s result meant that indistinguishability obfuscation was the best hope for protecting all of a computer program’s secrets that are protectable. But no one knew how to build such an obfuscator, or even knew which of a program’s secrets are protectable. Would an indistinguishability obfuscator, Sahai wondered, protect the secrets people really cared about?

For Sahai, the decade leading up to the new finding was marked by dead ends and incremental results. “There was a long period of banging my head against the wall and hoping a dent would form,” he said. “We were all very pessimistic, but it was such a beautiful problem that I was completely hooked.”

In the fall of 2012, Sahai started collaborating with Gentry and Waters, along with Sanjam Garg and Shai Halevi of the IBM Thomas J. Watson Research Center and Mariana Raykova of SRI International in Menlo Park, Calif., on a problem called functional encryption, which deals with how to give different people particular levels of access to encrypted data. After what Sahai called “an incredibly intense period” of putting forward ideas, breaking them, and returning to the drawing board, in the spring of 2013 the team came up with a complicated solution to the problem. “What we had was a mess, with so many moving parts and subscripts of subscripts, but it was the first thing we couldn’t break,” Sahai recalled.

As the researchers tried to simplify their construction, they discovered that it went much further than anticipated: It presented a way to perform indistinguishability obfuscation on all computer programs.

“That’s a moment I’ll never forget,” Sahai said.

Sahai and Waters proceeded to show that their indistinguishability obfuscator seems to offer much of the all-encompassing cryptographic protection that a black box obfuscator would offer. It can be used, for example, to create public key encryption, digital signatures (which enable a website to convince its visitors that it is legitimate) and a laundry list of other fundamental cryptographic protocols, including two major ones that were previously unsolved, functional encryption and deniable encryption.

The team’s obfuscator works by transforming a computer program into what Sahai calls a “multilinear jigsaw puzzle.” Each piece of the program gets obfuscated by mixing in random elements that are carefully chosen so that if you run the garbled program in the intended way, the randomness cancels out and the pieces fit together to compute the correct output. But if you try to do anything else with the program, the randomness makes each individual puzzle piece look meaningless.
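A loose analogy for the "randomness cancels out" idea (though emphatically not the actual multilinear construction) is additive masking: each piece is blinded with random values chosen so that only the intended combination makes the masks cancel, while any individual piece looks uniformly random:

```python
import random

MODULUS = 2**32

def split_into_pieces(secret, n):
    """Blind a secret into n pieces; each piece alone is uniform noise."""
    pieces = [random.randrange(MODULUS) for _ in range(n - 1)]
    last = (secret - sum(pieces)) % MODULUS  # chosen so the masks cancel
    return pieces + [last]

def combine(pieces):
    """The one intended evaluation: summing makes the randomness vanish."""
    return sum(pieces) % MODULUS

pieces = split_into_pieces(1234, 5)
assert combine(pieces) == 1234
# Any proper subset of pieces reveals nothing: its sum is uniformly random.
```

The real scheme replaces this simple addition with multilinear operations over lattice-based structures, but the principle is the same: deviate from the intended computation and the blinding never cancels.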

This obfuscation scheme is unbreakable, the team showed, provided that a certain newfangled problem about lattices is as hard to solve as the team thinks it is. Time will tell if this assumption is warranted, but the scheme has already resisted several attempts to crack it, and Sahai, Barak and Garg, together with Yael Tauman Kalai of Microsoft Research New England and Omer Paneth of Boston University, have proved that the most natural types of attacks on the system are guaranteed to fail. And the hard lattice problem, though new, is closely related to a family of hard problems that have stood up to testing and are used in practical encryption schemes.

Sahai’s hope is that not only will this hard problem stand the test of time, but computer scientists will figure out ways to base the obfuscation scheme on more conventional cryptographic assumptions. Cryptographers are already jumping on the indistinguishability obfuscation bandwagon, searching for ways to make the scheme more efficient, bolster its security assumptions, and further elucidate just which secrets it can protect.

The proposed obfuscator has already produced a sea change in many cryptographers’ views of program obfuscation. “It seems that the problem is not impossible,” said Daniele Micciancio, of the University of California, San Diego.

Beyond the immediate task of refining the team’s obfuscation protocol lies a deeper question: If the problem of obfuscation has been solved, what remains for cryptographers?

“What is the next major cryptographic frontier that is not solved, at least in principle, by obfuscation?” Sahai said. “That’s one of the big questions for our field.”
http://www.wired.com/wiredscience/20...akthrough/all/





Senate Cybersecurity Report Finds Agencies Often Fail to Take Basic Preventive Measures
Craig Timberg and Lisa Rein

The message broadcast in several states last winter was equal parts alarming and absurd: “Civil authorities in your area have reported that the bodies of the dead are rising from their graves and attacking the living. . . . Do not attempt to approach or apprehend these bodies, as they are considered extremely dangerous.”

The reported zombie invasion was not something out of "The Walking Dead." It was the federal Emergency Alert System under control of hackers — who exploited weaknesses that are disturbingly common in many critical systems throughout government, according to a Senate cybersecurity report set for release Tuesday.

U.S. officials have warned for years that the prospect of a cyberattack is the top threat to the nation and have sharply increased spending for computer security. Yet the report by the Republican staff of the Senate Homeland Security and Governmental Affairs Committee says that federal agencies are ill-prepared to defend networks against even modestly skilled hackers.

“As a taxpayer, I’m outraged,” said Alan Paller, who is research director at the SANS Institute, a cybersecurity education group, and reviewed a draft version of the report ahead of its official release. “We’re spending all this money and getting so little impact for it.”

The report draws on previous work by agency inspectors general and the Government Accountability Office to paint a broader picture of chronic dysfunction, citing repeated failures by federal officials to perform the unglamorous work of information security. That includes installing security patches, updating anti-virus software, communicating on secure networks and requiring strong passwords. A common password on federal systems, the report found, is “password.”

Obama administration officials quibbled with elements of the report but acknowledged that getting agencies to secure their systems against attack has been difficult.

“Almost every agency faces a cybersecurity challenge,” said Michael Daniel, special assistant to the president on cybersecurity policy. “Some are farther along than others in driving awareness of it. It often depends on whether they’ve been in the crosshairs of a major cyber incident.”

The report levels particularly tough criticism at the Department of Homeland Security, which helps oversee cybersecurity at other federal agencies. The report concluded that the department had failed even to update essential software — “the basic security measure just about any American with a computer has performed.”

“None of the other agencies want to listen to Homeland Security when they aren’t taking care of their own systems,” said Sen. Tom Coburn (Okla.), who as the ranking Republican on the committee oversaw the development of the report. “They aren’t even doing the simple stuff.”

The underlying problem, said Coburn and several outside experts, is the failure of federal agencies to hire top-notch information technology workers, pay them enough and give them enough clout to enforce routine security practices.

“It’s a low-status, often low-paid, high-stress position because people only notice systems administrators when something breaks,” said Steven Bellovin, a Columbia University computer science professor and former Federal Trade Commission technologist. “It becomes a very easy position to neglect.”

Higher up the chain of command, agency directors are rarely held accountable for security failures, experts said, because it is often unclear who is responsible. No penalties are mandated by law.

Take the bogus zombie alert, which was carried by television stations in Michigan, Montana and New Mexico. It highlighted flaws in the oversight of the Emergency Alert System, which is mandated by the Federal Communications Commission and managed by the Federal Emergency Management Agency.

Hackers discovered that some television stations had connected their alert-system equipment to the Internet without installing a firewall or changing the default password, as the company’s guide instructed, said Ed Czarnecki, an official with Monroe Electronics, which manufactured the equipment that was breached. He said those mistakes in elementary network security might have been prevented with more instruction from the government.

“Neither the FCC nor FEMA had issued clear guidelines on how to secure this gear,” Czarnecki said.

Though the incident was seen as a prank, it highlighted weaknesses that could have been dangerous if hackers had broadcast misinformation during an actual emergency or terrorist attack, experts said. Monroe Electronics and the FCC have worked with affected stations to prevent a recurrence, they said.

The Department of Homeland Security said that it, too, has worked to resolve problems identified in the Senate report.

“DHS has taken significant measures to improve and strengthen our capabilities to address the cyber risks associated with our critical information networks and systems,” S.Y. Lee, a department spokesman, said in an e-mailed statement.

Other problems identified in the Senate report:

●In every year since 2008, the GAO has found roughly 100 weaknesses in the computer security practices of the Internal Revenue Service, which took an average of 55 days to patch critical system flaws once they were identified. It is supposed to take only three days to do so.

●Hackers have cracked the systems of the Energy Department, gaining access to the personal information of 104,000 past and present department employees.

●The Nuclear Regulatory Commission, which keeps data on the design and security of every nuclear reactor and waste facility in the country, “regularly experiences unauthorized disclosures of sensitive information.” An agency spokeswoman issued a statement saying it “takes information security very seriously and works continuously toward improvements.”

●And at the Securities and Exchange Commission, laptops containing sensitive information were not encrypted and staffers sometimes transmitted private information about financial institutions on personal e-mail accounts. On at least one occasion, an SEC staffer logged onto an unsecured WiFi network at a convention of computer hackers.

While the report was released by Coburn, a Republican, the Democratic chairman of the Senate committee concurred with many of its findings.

“Federal agencies still have more work to do in this area, and the laws that govern the security of our federal civilian networks need to be reformed,” said Emily Spain, spokeswoman for Sen. Thomas R. Carper (D-Del.).

Still, Washington has been slow to act. A 2000 law to improve government cybersecurity did not mandate consequences for agency lapses. In recent years, numerous bills calling for better computer and network security have languished in Congress. The White House, meanwhile, is pushing to give the Department of Homeland Security more authority to enforce cybersecurity rules across government.

“At the end of the day, it’s a lot like the problem you have in businesses,” said James A. Lewis, a cybersecurity expert at the Center for Strategic and International Studies. “The CEOs don’t see cyber as their mission, as a fundamental problem. You don’t see your job as running a secure network. If something goes wrong, nothing happens to you.”
http://www.washingtonpost.com/busine...267_story.html





War on Anonymous: British Spies Attacked Hackers, Snowden Docs Show
Mark Schone, Richard Esposito, Matthew Cole and Glenn Greenwald, Special Contributor

A secret British spy unit created to mount cyber attacks on Britain’s enemies has waged war on the hacktivists of Anonymous and LulzSec, according to documents taken from the National Security Agency by Edward Snowden and obtained by NBC News.

The blunt instrument the spy unit used to target hackers, however, also interrupted the web communications of political dissidents who did not engage in any illegal hacking. It may also have shut down websites with no connection to Anonymous.

According to the documents, a division of Government Communications Headquarters (GCHQ), the British counterpart of the NSA, shut down communications among Anonymous hacktivists by launching a “distributed denial of service” (DDOS) attack – the same technique hackers use to take down bank, retail and government websites – making the British government the first Western government known to have conducted such an attack.

The documents, from a PowerPoint presentation prepared for a 2012 NSA conference called SIGDEV, show that the unit known as the Joint Threat Research Intelligence Group, or JTRIG, boasted of using the DDOS attack – which it dubbed Rolling Thunder -- and other techniques to scare away 80 percent of the users of Anonymous internet chat rooms.

The existence of JTRIG has never been previously disclosed publicly.

The documents also show that JTRIG infiltrated Internet Relay Chat (IRC) rooms and identified individual hackers who had taken confidential information from websites. In one case JTRIG helped send a hacktivist to prison for stealing data from PayPal, and in another it helped identify hacktivists who attacked government websites.

In connection with this report, NBC is publishing documents that Edward Snowden took from the NSA before fleeing the U.S. The documents are being published with minimal redactions.

Intelligence sources familiar with the operation say that the British directed the DDOS attack against IRC chat rooms where they believed criminal hackers were concentrated. Other intelligence sources also noted that in 2011, authorities were alarmed by a rash of attacks on government and corporate websites and were scrambling for means to respond.

“While there must of course be limitations,” said Michael Leiter, the former head of the U.S. government’s National Counterterrorism Center and now an NBC News analyst, “law enforcement and intelligence officials must be able to pursue individuals who are going far beyond speech and into the realm of breaking the law: defacing and stealing private property that happens to be online.”

“No one should be targeted for speech or thoughts, but there is no reason law enforcement officials should unilaterally declare law breakers safe in the online environment,” said Leiter.

But critics charge the British government with overkill, noting that many of the individuals targeted were teenagers, and that the agency’s assault on communications among hacktivists means the agency infringed the free speech of people never charged with any crime.

“Targeting Anonymous and hacktivists amounts to targeting citizens for expressing their political beliefs,” said Gabriella Coleman, an anthropology professor at McGill University and author of an upcoming book about Anonymous. “Some have rallied around the name to engage in digital civil disobedience, but nothing remotely resembling terrorism. The majority of those embrace the idea primarily for ordinary political expression.” Coleman estimated that the number of “Anons” engaged in illegal activity was in the dozens, out of a community of thousands.

In addition, according to cyber experts, a DDOS attack against the servers hosting Anonymous chat rooms would also have shut down any other websites hosted by the same servers, and any other servers operated by the same Internet Service Provider (ISP), whether or not they had any connection to Anonymous. It is not known whether any of the servers attacked also hosted other websites, or whether other servers were operated by the same ISPs.

In 2011, members of the loose global collective called Anonymous organized an online campaign called “Operation Payback” targeting the pay service PayPal and several credit card companies. Some hacktivists also targeted U.S. and British government websites, including the FBI, CIA and GCHQ sites. The hacktivists were protesting the prosecution of Chelsea Manning, who took thousands of classified documents from U.S. government computers, and punishing companies that refused to process donations to WikiLeaks, the website that published the Manning documents.

The division of GCHQ known as JTRIG responded to the surge in hacktivism. In another document taken from the NSA by Snowden and obtained by NBC News, a JTRIG official said the unit’s mission included computer network attacks, disruption, “Active Covert Internet Operations,” and “Covert Technical Operations.” Among the methods listed in the document were jamming phones, computers and email accounts and masquerading as an enemy in a "false flag" operation. The same document said GCHQ was increasing its emphasis on using cyber tools to attack adversaries.

In the presentation on hacktivism that was prepared for the 2012 SIGDEV conference, one official working for JTRIG described the techniques the unit used to disrupt the communications of Anonymous and identify individual hacktivists, including some involved in Operation Payback. Called “Pushing the Boundaries and Action Against Hacktivism,” the presentation lists Anonymous, Lulzsec and the Syrian Cyber Army among “Hacktivist Groups,” says the hacktivists’ targets include corporations and governments, and says their techniques include DDOS and data theft.

Under “Hacktivism: Online Covert Action,” the presentation refers to “Effects Operations.” According to other Snowden documents obtained by NBC News, “Effects” campaigns are offensive operations intended to “destroy” and “disrupt” adversaries.

The presentation gives detailed examples of “humint” (human intelligence) collection from hacktivists known by the on-line names G-Zero, Topiary and p0ke, as well as a fourth whose name NBC News has redacted to protect the hacker's identity. The hacktivists were contacted by GCHQ agents posing as fellow hackers in internet chat rooms. The presentation includes transcripts of instant message conversations between the agents and the hackers in 2011.

“Anyone here have access to a website with at least 10,000+ unique traffic per day?” asks one hacktivist in a transcript taken from a conversation that began in an Operation Payback chat room. An agent responds and claims to have access to a porn website with 27,000 users per day. “Love it,” answers the hacktivist. The hackers ask for access to sites with traffic so they can identify users of the site, secretly take over their computers with malware and then use those computers to mount a DDOS attack against a government or commercial website.

A GCHQ agent then has a second conversation with a hacker known as GZero who claims to “work with” the first hacktivist. GZero sends the agent a series of lines of code that are meant to harvest visitors to the agent’s site and make their computers part of a “botnet” operation that will attack other computers.

The “outcome,” says the presentation, was “charges, arrest, conviction.” GZero is revealed to be a British hacker in his early 20s named Edward Pearson, who was prosecuted and sentenced to 26 months in prison for stealing 8 million identities and information from 200,000 PayPal accounts between Jan. 1, 2010 and Aug. 30, 2011. He and his girlfriend were convicted of using stolen credit card identities to purchase take-out food and hotel stays.

In a transcript taken from a second conversation in an Operation Payback chat room, a hacktivist using the name “p0ke” tells another named “Topiary” that he has a list of emails, phone numbers and names of “700 FBI tards.”

An agent then begins a conversation with p0ke, asking him about what sites he’s accessed. The hacktivist responds that he was able to defeat the security on a U.S. government website, and pulled up credit card information that’s attached to congressional and military email addresses.

The agent then asks whether p0ke has looked at a BBC News web article called “Who loves the hacktivists?” and sends him a link to the story.

“Cool huh?” asks the agent, and p0ke responds, “ya.”

When p0ke clicked on the link, however, JTRIG was able to pull up the IP address of the VPN (virtual private network) the hacktivist was using. The VPN was supposed to protect his identity, but GCHQ either hacked into the network, asked the VPN for the hacker’s personal information, or asked law enforcement in the host nation to request the information.

A representative of the VPN told NBC News the company had not provided GCHQ with the hacker's information, but indicated that in past instances it has cooperated with local law enforcement.

In whatever manner the information was retrieved, GCHQ was able to establish p0ke’s real name and address, as shown in the presentation slides. (NBC News has redacted the information).

P0ke was never arrested for accessing the government databases, but Topiary, actually an 18-year-old member of Anonymous and LulzSec spokesman from Scotland named Jake Davis, was arrested in July 2011. Davis was arrested soon after LulzSec mounted hack attacks against Congress, the CIA and British law enforcement.

Two weeks before his arrest, the Guardian published an interview with Davis in which he described himself as “an internet denizen with a passion for change.” Davis later pleaded guilty to two DDOS attacks and was sentenced to 24 months in a youth detention center, but was released in June 2013 after five weeks because he had worn an electronic ankle tag and been confined to his home without computer access for 21 months after his arrest. Davis declined comment to NBC News.

In the concluding portion of the JTRIG presentation, the presenters sum up the unit’s “Effects on Hacktivism” as part of “Op[eration] Wealth” in the summer of 2011 and apparently emphasize the unit’s success against Anonymous, including the DDOS attack. The listed effects include identifying top targets for law enforcement and “Denial of Service on Key Communications outlets.”

A slide headlined “DDOS” refers to “initial trial info” from the operation known as “Rolling Thunder.” It then quotes from a transcript of a chat room conversation between hacktivists. “Was there any problem with the IRC [chat room] network?” asks one. “I wasn’t able to connect the past 30 hours.”

“Yeah,” responds another. “We’re being hit by a syn flood. I didn’t know whether to quit last night, because of the DDOS.”

The next slide is titled “Information Operations,” and says JTRIG uses Facebook, Twitter, email, instant messenger, and Skype to dissuade hacktivists with the message, “DDOS and hacking is illegal, please cease and desist.”

The following slide lists the outcome of the operation as “80% of those messaged where (sic) not in the IRC channels 1 month later.”

Gabriella Coleman, the author and expert on Anonymous, said she believed the U.K. government had punished a large number of people for the actions of a few. “It is hard to put a number on Anonymous,” she said, “but at the time of those events, there were thousands of supporters and probably a dozen or two individuals who were breaking the law.”

Said Coleman, “Punishing thousands of people, who are engaging in their democratic right to protest, because a couple people committed vandalism is … an appalling example of overreacting in order to squash dissent.”

Jason Healey, a former top White House cyber security official under George W. Bush, called the British government’s DDOS attack on Anonymous “silly,” and said it was a tactic that should only be used against another nation-state.

He also questioned the time and energy spent chasing teenage hackers.

“This is a slippery slope,” said Healey. “It’s not what you should be doing. It justifies [Anonymous]. Giving them this much attention justifies them and is demeaning to our side.”

In a statement, a GCHQ spokesperson emphasized that the agency operated within the law.

“All of GCHQ's work is carried out in accordance with a strict legal and policy framework,” said the statement, “which ensure[s] that our activities are authorized, necessary and proportionate, and that there is rigorous oversight, including from the Secretary of State, the Interception and Intelligence Services Commissioners and the Parliamentary Intelligence and Security Committee. All of our operational processes rigorously support this position.”

Told by NBC News that his on-line alias appeared in the JTRIG presentation, the hacker known as p0ke, a college student in Scandinavia, said he was confused about why he hadn’t been confronted by authorities. (NBC News is withholding his name, age and country of residence.)

But p0ke said he had stopped hacking because he’d grown bored with it, and was too busy with his studies. He was never a “hacktivist” anyway, he said. “Politics aren’t mah thang,” he said in an online interview. “Seriously tho, I had no motive for doing it.”

He said that hacking had only satisfied an urge to show off. “Fancy the details for a while,” he wrote, “then publish em to enlarge my e-penis.”

A British hacktivist known as T-Flow, who was prosecuted for hacking alongside Topiary, told NBC News he had long suspected that the U.K.’s intelligence agencies had used hacker techniques to catch him, since no evidence of how his identity was discovered ever appeared in court documents. T-Flow, whose real name is Mustafa Al-Bassam, pleaded guilty but did not serve time in an adult facility because he was 16 when he was arrested.

“When I was going through the legal process,” explained Al-Bassam, “I genuinely felt bad for all those attacks on government organizations I was involved in. But now that I know they partake in the exact same activities, I have no idea what’s right and wrong anymore.”
http://www.nbcnews.com/news/investig...cs-show-n21361





New Surveillance Technology Can Track Everyone in an Area for Several Hours at a Time
Craig Timberg

Shooter and victim were just a pair of pixels, dark specks on a gray streetscape. Hair color, bullet wounds, even the weapon were not visible in the series of pictures taken from an airplane flying two miles above.

But what the images revealed — to a degree impossible just a few years ago — was location, mapped over time. Second by second, they showed a gang assembling, blocking off access points, sending the shooter to meet his target and taking flight after the body hit the pavement. When the report reached police, it included a picture of the blue stucco building into which the killer ultimately retreated, at last beyond the view of the powerful camera overhead.

From 10,000 feet up, tracking an entire city at one glance: Ohio-based Persistent Surveillance Systems is trying to convince cities across the country that its surveillance technology can help reduce crime. Its new generation of camera technology is far more powerful than the police cameras to which America has grown accustomed. But these newer cameras have sparked some privacy concerns.

“I’ve witnessed 34 of these,” said Ross McNutt, the genial president of Persistent Surveillance Systems, which collected the images of the killing in Ciudad Juárez, Mexico, from a specially outfitted Cessna. “It’s like opening up a murder mystery in the middle, and you need to figure out what happened before and after.”

As Americans have grown increasingly comfortable with traditional surveillance cameras, a new, far more powerful generation is being quietly deployed that can track every vehicle and person across an area the size of a small city, for several hours at a time. Although these cameras can’t read license plates or see faces, they provide such a wealth of data that police, businesses and even private individuals can use them to help identify people and track their movements.

Already, the cameras have been flown above major public events such as the Ohio political rally where Sen. John McCain (R-Ariz.) named Sarah Palin as his running mate in 2008, McNutt said. They’ve been flown above Baltimore; Philadelphia; Compton, Calif.; and Dayton in demonstrations for police. They’ve also been used for traffic impact studies, for security at NASCAR races and at the request of a Mexican politician, who commissioned the flights over Ciudad Juárez.

Defense contractors are developing similar technology for the military, but its potential for civilian use is raising novel civil liberties concerns. In Dayton, where Persistent Surveillance Systems is based, city officials balked last year when police considered paying for 200 hours of flights, in part because of privacy complaints.

“There are an infinite number of surveillance technologies that would help solve crimes . . . but there are reasons that we don’t do those things, or shouldn’t be doing those things,” said Joel Pruce, a University of Dayton postdoctoral fellow in human rights who opposed the plan. “You know where there’s a lot less crime? There’s a lot less crime in China.”

The Supreme Court generally has given wide latitude to police using aerial surveillance as long as the photography captures images visible to the naked eye.

McNutt, a retired Air Force officer who once helped design a similar system for the skies above Fallujah, a battleground city in Iraq, hopes to win over officials in Dayton and elsewhere by convincing them that cameras mounted on fixed-wing aircraft can provide far more useful intelligence than police helicopters do, for less money.

A single camera mounted atop the Washington Monument, McNutt boasts, could deter crime all around the Mall. He said regular flights over the most dangerous parts of Washington — combined with publicity about how much police could see — would make a significant dent in the number of burglaries, robberies and murders. His 192-megapixel cameras would spot as many as 50 crimes per six-hour flight, he estimated, providing police with a continuous stream of images covering more than a third of the city.

“We watch 25 square miles, so you see lots of crimes,” he said. “And by the way, after people commit crimes, they drive like idiots.”
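A quick back-of-envelope check, using only the figures quoted above (a 192-megapixel sensor array covering 25 square miles), shows why each person registers as roughly a single pixel. This is a rough sketch assuming the two figures describe the same coverage area:

```python
import math

# Back-of-envelope check of the figures quoted in the article: a
# 192-megapixel view covering 25 square miles leaves a person at
# roughly one pixel.
SQ_MILE_M2 = 1609.344 ** 2        # square metres in one square mile
area_m2 = 25 * SQ_MILE_M2         # ~64.7 million m^2 under the camera
pixels = 192e6                    # 192-megapixel sensor array
pixel_side_m = math.sqrt(area_m2 / pixels)
print(f"{pixel_side_m:.2f} m per pixel side")  # ~0.58 m: about person-sized
```

At roughly 0.58 metres of ground per pixel side, a pedestrian occupies about one pixel, consistent with McNutt's later claim that individuals are indistinguishable dots.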

What McNutt is trying to sell is not merely the latest techno-wizardry for police. He envisions such steep drops in crime that they will bring substantial side effects, including rising property values, better schools, increased development and, eventually, lower incarceration rates as the reality of long-term overhead surveillance deters those tempted to commit crimes.

Dayton Police Chief Richard Biehl, a supporter of McNutt’s efforts, has proposed inviting the public to visit the operations center to get a glimpse of the technology in action.

“I want them to be worried that we’re watching,” Biehl said. “I want them to be worried that they never know when we’re overhead.”

Technology in action

McNutt, a suburban father of four with a doctorate from the Massachusetts Institute of Technology, is not deaf to concerns about his company’s ambitions. Unlike many of the giant defense contractors that are eagerly repurposing wartime surveillance technology for domestic use, he sought advice from the American Civil Liberties Union in writing a privacy policy.

It has rules on how long data can be kept, when images can be accessed and by whom. Police are supposed to begin looking at the pictures only after a crime has been reported. Fishing expeditions are prohibited.

The technology has inherent limitations as well. From the airborne cameras, each person appears as a single pixel indistinguishable from any other person. What people are doing — even whether they are clothed or not — is impossible to see. As technology improves the cameras, McNutt said he intends to increase their range, not the precision of the imagery, so that larger areas can be monitored.

The notion that McNutt and his roughly 40 employees are peeping Toms clearly rankles. The company made a PowerPoint presentation for the ACLU that includes pictures taken to assist the response to Hurricane Sandy and the severe Iowa floods last summer. The section is titled: “Good People Doing Good Things.”

“We get a little frustrated when people get so worried about us seeing them in their backyard,” McNutt said in his operation center, where the walls are adorned with 120-inch monitors, each showing a different grainy urban scene collected from above. “We can’t even see what they are doing in their backyard. And, by the way, we don’t care.”

Yet in a world of increasingly pervasive surveillance, location and identity are becoming all but inextricable. One quickly leads to the other for those with the right tools.

During one of the company’s demonstration flights over Dayton in 2012, police got reports of an attempted robbery at a bookstore and shots fired at a Subway sandwich shop. The cameras revealed a single car moving between the two locations.

By reviewing the images frame by frame, analysts were able to help police piece together a larger story: A man had left a residential neighborhood at midday and attempted to rob the bookstore, but fled when somebody hit an alarm. Then he drove to Subway, where the owner pulled a gun and chased him off. His next stop was a Family Dollar Store, where the man paused for several minutes. He soon returned home, after a short stop at a gas station where a video camera captured an image of his face.

A few hours later, after the surveillance flight ended, the Family Dollar Store was robbed. Police used the detailed map of the man’s movements, along with other evidence from the crime scenes, to arrest him for all three crimes.

On another occasion, Dayton police got a report of a burglary in progress. The aerial cameras spotted a white truck driving away from the scene. Police stopped the driver before he got home and found the stolen goods in the back of the truck. A witness identified him soon afterward.

Privacy concerns

In addition to normal cameras, the planes can carry infrared sensors that permit analysts to track people, vehicles or wildlife at night — even through foliage and into some structures, such as tents.

Courts have put stricter limits on technology that can see things not visible to the naked eye, ruling that they can amount to unconstitutional searches when conducted without a warrant. But the lines remain fuzzy as courts struggle to apply old precedents — from a single overflight carrying an officer equipped with nothing stronger than a telephoto lens, for example — to the rapidly advancing technology.

“If you turn your country into a totalitarian surveillance state, there’s always some wrongdoing you can prevent,” said Jay Stanley, a privacy expert with the American Civil Liberties Union. “The balance struck in our Constitution tilts toward liberty, and I think we should keep that value.”

Police and private businesses have invested heavily in video surveillance since the Sept. 11, 2001, attacks. Although academics debate whether these cameras create significantly lower crime rates, an overwhelming majority of Americans support them. A Washington Post poll in November found that only 14 percent of those surveyed wanted fewer cameras in public spaces.

But the latest camera systems raise new issues because of their ability to watch vast areas for long periods of time — something even military-grade aerial cameras have struggled to do well.

The military’s most advanced experimental research lab is developing a system that uses hundreds of cellphone cameras to watch 36-square-mile areas. McNutt offers his system — which uses 12 commercially available Canon cameras mounted in an array — as an effective alternative that’s cheap enough for local police departments to afford. He typically charges between $1,500 and $2,000 per hour for his services, including flight time, operation of the command center and the time that analysts spend assisting investigations.

Dayton police were enticed by McNutt’s offer to fly 200 hours over the city for a home-town discount price of $120,000. The city, with about 140,000 people, saw its police force dwindle from more than 400 officers to about 350 in recent years, and there is little hope of reinforcements.

“We’re not going to get those officers back,” Biehl, the police chief, said. “We have had to use technology as force multipliers.”

Still, the proposed contract, coming during Dayton’s campaign season and amid a wave of revelations about National Security Agency surveillance, sparked resistance. Biehl is looking for a chance to revive the matter. But the new mayor, Nan Whaley, has reservations, both because of the cost and the potential loss of privacy.

“Since 2001, we haven’t had really healthy conversations about personal liberty. It’s starting to bloom about a decade too late,” Whaley said. “I think the conversation needs to continue.”

To that end, the mayor has another idea: She’s encouraging the businesses that own Dayton’s tallest buildings to mount rooftop surveillance cameras capable of continuously monitoring the downtown and nearby neighborhoods. Whaley hopes the businesses would provide the video feeds to the police.

McNutt, it turns out, has cameras for those situations, too, capable of spotting individual people from seven miles away.
http://www.washingtonpost.com/busine...ba3_story.html





Judges Poised to Hand U.S. Spies the Keys to the Internet
Kevin Poulsen

How does the NSA get the private crypto keys that allow it to bulk eavesdrop on some email providers and social networking sites? It’s one of the mysteries yet unanswered by the Edward Snowden leaks. But we know that so-called SSL keys are prized by the NSA – understandably, since one tiny 256-byte key can expose millions of people to intelligence collection. And we know that the agency has a specialized group that collects such keys by hook or by crook. That’s about it.
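The reason one key exposes so many users: with classic RSA key exchange and no forward secrecy, whoever holds the server's private key can decrypt any recorded session after the fact. The toy sketch below uses textbook RSA with deliberately tiny primes — it is an illustration of the principle only, not real TLS:

```python
# Toy illustration (NOT real TLS): with classic RSA key exchange and no
# forward secrecy, possession of the server's private key decrypts every
# recorded session. Textbook RSA with tiny primes, for sketch only.
p, q = 61, 53            # toy primes; real keys use primes ~1024 bits long
n = p * q                # public modulus (part of the server certificate)
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

session_key = 42                     # the "premaster secret" a client sends
ciphertext = pow(session_key, e, n)  # what a wiretap records on the wire

# Without d, the eavesdropper holds only ciphertext. With the server's
# private key, the recorded session key falls out immediately:
recovered = pow(ciphertext, d, n)
assert recovered == session_key
```

This is why ephemeral (Diffie-Hellman) key exchange matters: it keeps past traffic safe even if the long-term key is later surrendered, which is exactly what the Lavabit order demanded.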

Which is why the appellate court challenge pitting encrypted email provider Lavabit against the Justice Department is so important: It’s the only publicly documented case where a district judge has ordered an internet company to hand over its SSL key to the U.S. government — in this case, the FBI.

If the practice — which may well have happened in secret before — is given the imprimatur of the U.S. 4th Circuit Court of Appeals, it opens a new avenue for U.S. spies to expand their surveillance against users of U.S. internet services like Gmail and Dropbox. Since the FBI is known to work hand in hand with intelligence agencies, it potentially turns the judiciary into an arm of the NSA’s Key Recovery Service. Call it COURTINT.

Oral arguments in the Lavabit appeal were heard by a three-judge panel in Richmond, Virginia last week. The audio (.mp3) is available online (and PC World covered it from the courtroom). It’s clear that the judges weren’t much interested in the full implications of Lavabit’s crypto key breach, which one of the judges termed “a red herring.”

“My fear is that they won’t address the substantive argument about whether the government can get these keys,” Lavabit founder Ladar Levison told WIRED after the hearing.

The case began in June, when Texas-based Lavabit was served with a “pen register” order requiring it to give the government a live feed of the email activity on a particular account. The feed would include metadata like the “from” and “to” lines on every message, and the IP addresses used to access the mailbox.

Because pen register orders provide only metadata, they can be obtained without probable cause that the target has committed a crime. But in this case the court filings suggest strongly that the target was indicted NSA-leaker Edward Snowden, Lavabit’s most famous user.

Levison resisted the order on the grounds that he couldn’t comply without reprogramming the elaborate encryption system he’d built to protect his users’ privacy. He eventually relented and offered to gather up the email metadata and transmit it to the government after 60 days. Later he offered to engineer a faster solution. But by then, weeks had passed, and the FBI was determined to get what it wanted directly and in real time.

So in July it served Levison with a search warrant striking at the Achilles heel of his system: the private SSL key that would allow the FBI to decrypt traffic to and from the site, and collect Snowden’s metadata directly. The government promised it wouldn’t use the key to spy on Lavabit’s other 400,000 users, which the key would technically enable them to do.

The FBI attached a Carnivore-like monitoring system at Lavabit’s upstream provider in anticipation of getting the key, but Levison continued to resist, and even flew from Texas to Virginia to unsuccessfully challenge the order before U.S. District Judge Claude Hilton.

Levison turned over the keys as a nearly illegible computer printout in 4-point type. In early August, Hilton – who once served on the top-secret FISA court – ordered Levison again to provide them in the industry-standard electronic format, and began fining him $5,000 a day for noncompliance. After two days, Levison complied, but then immediately shuttered Lavabit altogether. Levison is appealing the contempt order.

The SSL key is a small file of inestimable importance for the integrity of a website and the privacy of its users. In the wrong hands, it would allow malefactors to impersonate a website, or, more relevantly in this case, permit snoops to eavesdrop on traffic to and from the site. Levison says he was concerned that once the government had his SSL key, it would obtain more secret warrants to spy on his users, and he would have no opportunity to review or potentially challenge those warrants.

“The problem I had is that the government’s interpretation of what’s legal and what isn’t is currently at its apex, in terms of authority and scope,” Levison says. “My concern is that they could get a warrant – maybe a classified warrant – that I wouldn’t even have knowledge of, much less the opportunity to object to … My responsibility was to ensure that everybody else’s privacy was protected.”

That was Levison’s thinking even before Snowden’s revelations showed us how pervasive and ambitious the NSA’s internet monitoring has become.

The judges in last week’s 4th Circuit hearing, though, weren’t interested in hearing about encryption keys. At one point, Judge Paul Niemeyer apologetically interrupted Levison’s attorney as soon as he raised the subject, and made it clear that he accepted the government’s position that the FBI was only going to use the key to spy on the user targeted by the pen register order.

“The encryption key comes in only after your client is refusing to give them the unencrypted data,” Niemeyer said. “They don’t want the key as an object. They want this data with respect to a target that they’re investigating. And it seems to me that that’s all this case is about and it’s been blown out of proportion by all these contentions that the government is seeking keys to access other people’s data and so forth.”

“There was never an order to provide keys until later on, when [Levison] resisted,” Niemeyer added later in the hearing. “Even then, the government was authorized to use the key only with respect to a particular target.”

On that last point, Judge Niemeyer is mistaken. Neither the July 16 search warrant nor the August 5 order imposing sanctions placed any restrictions on what the government could do with the key. Without such a protective order, there are no barriers to the FBI handing the key over to the NSA, says a former senior Justice Department attorney, speaking to WIRED on condition of anonymity.

“You sometimes see limitations, or what’s referred to as minimization procedures: The government can only use this for the following purpose. There’s nothing like that here,” says the former official. “I’d say this is a very broad order. Nothing in it would prevent the government from sharing that key with intelligence services.”

The FBI’s relationship with the NSA is close – the FBI receives 1,000 tips a year from the NSA’s bulk telephone metadata collection; the bureau’s Data Intercept Technology Unit in Quantico, Virginia channels PRISM data to NSA headquarters in Ft. Meade from Silicon Valley. Presumably the two agencies are even closer on the matter that brought the FBI to Lavabit.

By shutting down Lavabit, Levison obviously thwarted prospective surveillance efforts. But we know – again, thanks to Snowden – that the agency sometimes collects encrypted data that it can’t crack, in the hope of getting the key later.

“We know from the minimization rules that are out that if they collect encrypted information they’re allowed to keep it indefinitely,” says Jennifer Granick, Director of Civil Liberties at the Stanford Center for Internet and Society. “That’s exactly why the Lavabit case is so important.”

If the NSA did collect Lavabit traffic, users who checked their email using Safari or Internet Explorer are theoretically compromised now. That’s because Lavabit failed to prioritize the encryption algorithms that provide “perfect forward secrecy,” which generate a temporary key for every session, making both passive eavesdropping and retrospective cryptanalysis unlikely. Firefox and Chrome users should not be similarly vulnerable.
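To make the distinction concrete, here is a small Python sketch (the helper names and example host are illustrative, not from any article) that checks whether a TLS server negotiates an ephemeral key exchange. Suites using ECDHE or DHE generate per-session keys, so a later theft of the server’s long-term private key cannot decrypt previously recorded traffic; all TLS 1.3 suites have this property by design.

```python
import socket
import ssl

def has_forward_secrecy(cipher_name):
    """True if a negotiated cipher suite provides forward secrecy.
    TLS 1.3 suite names (TLS_AES_..., TLS_CHACHA20_...) always do;
    older OpenSSL-style names do when the key exchange is ephemeral
    Diffie-Hellman (ECDHE- or DHE-)."""
    return cipher_name.startswith("TLS_") or \
        any(kx in cipher_name for kx in ("ECDHE-", "DHE-"))

def check_server(host, port=443):
    """Connect over TLS and report the server's chosen cipher suite
    plus whether it is forward-secret."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, _version, _bits = tls.cipher()
            return name, has_forward_secrecy(name)

# Usage (hypothetical host):
#   name, fs = check_server("example.com")
```

A server that prefers plain RSA key exchange (e.g. "AES256-SHA") is exactly the situation described above: one stolen private key decrypts every recorded session.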

If it wasn’t collecting Lavabit traffic already, it’s safe to assume the NSA began doing so when Snowden revealed himself as the NSA leaker in early June.

The NSA could not legally target U.S. citizens or legal residents without first getting a specific warrant from the Foreign Intelligence Surveillance Court. But non-U.S. Lavabit users would be fair game.

Levison flew back to Texas on Friday to await the 4th Circuit’s ruling and continue work on his new initiative: a surveillance-resistant email infrastructure called Dark Mail. He notes that one possible – even likely – outcome of the case is that the appeals court rules against him on a technicality. Some of his lawyer’s arguments weren’t clearly raised below in front of Judge Hilton. The court could find that those arguments are forfeit now, and leave the substantive issues undecided.

Pragmatically, that could be the best outcome, given the panel’s hostility to the encryption question and its faith in the government’s honesty. But Levison would prefer to lose on the substantive issue and continue the fight all the way to the Supreme Court. If the 4th Circuit doesn’t decide one way or the other, other U.S. internet companies won’t know where they stand when the government comes for their keys. The cloud of distrust that’s gathered over U.S. companies in the contrail of the NSA revelations will grow even darker.

“It’ll leave this issue completely in limbo, with no end in sight,” Levison says. “So how is the industry going to handle that? They’ll have to wait years for somebody else to come along who’s willing to stand up and say, ‘no,’ and take the government back to court.”
http://www.wired.com/threatlevel/2014/02/courtint/





Police Will Have 'Backdoor' Access to Health Records Despite Opt-Out, Says MP

David Davis says police would be able to approach central NHS database without a warrant as critics warn of catastrophic breach of trust
Randeep Ramesh

The database that will store all of England's health records has a series of "backdoors" that will allow police and government bodies to access people's medical data.

David Davis MP, a former shadow home secretary, told the Guardian he has established that police will be able to access the health records of patients when investigating serious crimes even if they had opted out of the new database, which will hold the entire population's medical data in a single repository for the first time from May.

In the past, Davis said, police would need to track down the GP who held a suspect's records and go to court for a disclosure order. Now, they would be able to simply approach the new arms-length NHS information centre, which will hold the records. "The idea that police will be able to request information from a central database without a warrant totally undermines a long-held belief in the confidentiality of the doctor-patient relationship," he said.

The records will include mental health conditions, drugs prescribed, as well as smoking and drinking habits – and will be created from GP records and linked to hospital records. Ministers have defended the incoming system – which supporters say could bring huge benefits to care and research – saying it has mechanisms to de-identify records and a series of committees which will consider requests from thinktanks, businesses, universities and government bodies, as well as offering opt-outs for patients concerned about the use of their data.

But opting out of data sharing outside the NHS will not prevent records being extracted into the central database, and state agencies will in some cases be able to get access to them.

In the case of the police, officers will be able to request all of the medical data held for specific suspects with their correct identities, regardless of whether they had opted out.

With a national database in place, the request only has to be considered by officials at the information centre, who will not know the patient personally.

Davis, who established the existence of these "backdoors" in a parliamentary question answered by health services minister Dan Poulter, said he had "no problems with the data being used for licensed medical research, but when we have police accessing from a database that people have opted out from, and companies being able to buy this data, I think we need to have a debate about whether my property, which are my patient records, can be sold and used".

Advocates say that sharing data will make medical advances easier and ultimately save lives because it will allow researchers to investigate drug side-effects or the performance of hospital surgical units by tracking the impact on patients. But privacy experts warn there will be no way for the public to work out who has their medical records or to what use their data will be put.

The extracted information will contain a person's NHS number, date of birth, postcode, ethnicity and gender. Once live, organisations such as university research departments – but also insurers and drug companies – will be able to apply to the new Health and Social Care Information Centre (HSCIC) to gain access to the database, called care.data.

Last year it emerged that the private health insurer Bupa was among four firms that had been cleared to access "sensitive" patient data.

If an application is approved then firms will have to pay to extract this information, which will be scrubbed of some personal identifiers but not enough to make the information completely anonymous – a process known as "pseudonymisation".
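A minimal sketch of what pseudonymisation typically means in practice (the key, field names and record are invented for illustration; this is not the HSCIC’s actual scheme): a direct identifier is replaced with a stable keyed hash, so records for the same person can still be linked, while the remaining fields stay in the clear.

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-processor"  # hypothetical key

def pseudonymise(nhs_number: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC).
    The same person always maps to the same pseudonym, so their
    records remain linkable, but the number itself is not stored."""
    digest = hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"nhs_number": "943 476 5919", "postcode": "CV1 2AB",
          "dob": "1961-12-23", "condition": "asthma"}
record["nhs_number"] = pseudonymise(record["nhs_number"])
# The record is now pseudonymous, not anonymous: date of birth and
# postcode alone often narrow it to a single person.
```

This is why critics distinguish pseudonymisation from anonymisation: the linkability that makes the data useful for research is the same property that makes re-identification possible.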

Speaking generally about the new system, Davis said that medical records were a person's "fingerprint".

“I have had my nose broken five times. Once you know that, I am probably in a group of 100 people in England. Then you figure out when I had my diphtheria jab, usually done at birth, and bang you got me. Let me be clear: people can be identified from this data.”
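Davis’s point is an intersection argument, and a toy simulation (entirely invented data and attribute names, for illustration only) shows how quickly combining rare attributes shrinks the crowd you can hide in:

```python
import random

random.seed(1)
# Toy "de-identified" dataset: 100,000 people, no names, just two
# medical attributes. About 1% have five recorded nose fractures.
population = [{"person": i,
               "nose_breaks": random.choice([0] * 99 + [5]),
               "jab_year": random.randint(1940, 1990)}
              for i in range(100_000)]

# One rare attribute narrows 100,000 people to roughly a thousand...
candidates = [p for p in population if p["nose_breaks"] == 5]

# ...and a second attribute narrows it to a handful, which outside
# knowledge (age, locality) can usually reduce to one.
matched = [p for p in candidates if p["jab_year"] == 1948]
```

No single field identifies anyone; it is the combination that does, which is the standard objection to treating stripped records as anonymous.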

This week, the Information Commissioner's Office warned that information provided to patients on care.data was not clear enough about how to opt out of the programme.

Brian Jarman, who developed the statistical methodology used to pinpoint high death rates in the NHS and is director of the Dr Foster research unit at Imperial College London, said the system should be "opt in, not opt out". He said: "There is simply too much data and the risks that something leaks are too great. We need to slow this process down to ensure we have the right checks in place."

Phil Booth of medConfidential, which campaigns on medical privacy, told the Guardian: "This is precisely the danger when you create a giant database of highly sensitive information about people – all sorts of other people want to go rifling through it, including the government. There's always another good reason to go digging, but no one thinks of the catastrophic breach of trust this represents."

"The lack of independent oversight and transparency is what's most worrying. People trust their GP, but who's heard of the Health and Social Care Information Centre or the four people who sign off on access to all our medical records?"

A Department of Health spokesperson said: "There are strong legal safeguards in place to protect patients' confidentiality. If people do not want their data to be shared, they can speak to their GP and information will not leave the surgery. Any release of identifiable data without consent would only be in a very limited number of exceptional circumstances, where there is a clear basis in existing law – such as for the police to investigate a serious crime."
http://www.theguardian.com/society/2...health-records





Guardian Reveals Threats of Imprisonment and Closure Over Snowden Leaks
Robert Stevens

The British government threatened to jail Guardian editor Alan Rusbridger and close the newspaper last July, over the newspaper’s reporting of the Edward Snowden revelations.

The Guardian, via its former journalist Glenn Greenwald, began to release the revelations on June 5, 2013, detailing how the US National Security Agency (NSA), in close alliance with Britain’s Government Communications Headquarters (GCHQ), had created a global surveillance operation illegally monitoring the world’s population.

On July 20 last year, the government sent two security service agents to the Guardian’s London office to oversee the destruction of hard drives and memory cards containing encrypted files from Snowden on two computers.

On Friday, the Guardian released footage of the hard drives and computer equipment being destroyed.

Alongside the footage it published several articles by journalist Luke Harding, revealing more details about the extraordinary and unprecedented military-style three-hour operation. One of the articles, “The day GCHQ came to call on the Guardian”, is an edited excerpt from a new book released this week, The Snowden Files: The Inside Story of the World’s Most Wanted Man, authored by Harding.

Harding reveals that the day after the Guardian’s exposé of UK spies bugging foreign leaders at two G20 summits, Rusbridger was contacted by Prime Minister David Cameron’s press officer, Craig Oliver. Oliver informed Rusbridger that intelligence officials were angry over the story being published and that some wanted to see him imprisoned. “But we are not going to do that,” Oliver added.

Sir Jeremy Heywood, the cabinet secretary, was then sent to the Guardian’s offices by Cameron to request that it hand over the material it had received from Snowden.

Harding reports that Heywood claimed that the Guardian being in possession of the Snowden material meant it was now a target of foreign powers, including China.

He asked, “Do you know how many Chinese agents are on your staff?”

In another move to intimidate the newspaper, Harding reports that Heywood intimated that the intelligence agents were monitoring the paper’s office at the time. Heywood “gestured at the flats visible from the Guardian windows and said, ‘I wonder where our guys are?’” he writes.

By July 12 the government had decided to stop the Guardian continuing reporting the Snowden revelations by forcing it to hand over the material. When informed by Rusbridger that the raw Snowden files had been copied and were with journalists in the US, Heywood warned, “We can do this nicely, or we can go to law.”

Harding states that Rusbridger then “suggested an apparent compromise: that GCHQ could send technical experts to the Guardian to advise staff on how the material could be handled securely. And, possibly in due course, destroyed.”

The government wasn’t interested in offering any such “advice”, with Rusbridger being told three days later by Oliver Robbins, Cameron’s deputy national security adviser, “You’ve had your fun. Now it’s time to hand the files back.”

The morning after this warning Robbins called the Guardian and declared it was “all over”. He explicitly warned that if the Snowden material held by the newspaper was not destroyed, it faced being closed down.

Robbins, says Harding, told the Guardian that GCHQ technicians wanted to inspect the files to see if a third party had intercepted them. In response, Rusbridger told Robbins, “This doesn’t make sense. It’s in US hands. We will go on reporting from the US. You are going to lose any sense of control over the conditions. You’re not going to have this chat with US news organisations.”

“Rusbridger then asked: ‘Are you saying explicitly, if we don’t do this, you will close us down?’”

“I’m saying this,” Robbins confirmed.

A few days later, according to Harding, Guardian deputy editor Paul Johnson and Robbins agreed that instead of the government seizing its computers, at Johnson’s suggestion, “the Guardian would bash up its own computers under GCHQ’s tutelage.”

The operation went ahead with GCHQ instructing the Guardian to buy the necessary equipment, including angle-grinders, Dremel revolving drills and masks. Harding reveals that GCHQ also provided its own machine known as a degausser, able to “destroy magnetic fields, thereby erasing hard drives and data.”

Following the destruction of the drives the remnants of the computer hardware were put through the degausser.

Yet more intimidation took place during the operation with “Ian”, one of the GCHQ agents overseeing the operation, giving details as to how he would have been able to break into the Guardian’s HQ and take control of a computer anyway. He stated “I would have given the guard £5k and got him to install a dummy keyboard. Black ops would have got it back. We would have seen everything you did.”

In his previous account of the event Rusbridger stated that as the computer equipment was wrecked one of the GCHQ agents also said, “We can call off the black helicopters.”

That such an extraordinary operation was mounted against a newspaper of national and world renown, with accompanying brazen statements about the military being on standby, testifies to the drastic and perilous erosion of democratic rights in the UK.

The government/GCHQ operation against the Guardian then escalated with the illegal detention of David Miranda, the partner of Greenwald, for nearly nine hours at Heathrow Airport on August 18. Police threatened Miranda with jail and seized his laptop, camera, cell phone and other personal items. Miranda had on his person encrypted files containing documents passed on by Snowden.

This was the first time that journalistic materials were seized by the authorities under the pretext of the Terrorism Act. In papers made public in November at a hearing in which Miranda challenged the legality of his detention, the police, in league with the UK government and its intelligence agencies, described Miranda as being involved in terrorist activity.

The police document, cited by Home Office lawyers at the High Court, stated, “Intelligence indicates that Miranda is likely to be involved in espionage activity which has the potential to act against the interests of UK national security... We assess that Miranda is knowingly carrying material, the release of which would endanger people’s lives.”

“Additionally the disclosure, or threat of disclosure, is designed to influence a government, and is made for the purpose of promoting a political or ideological cause. This therefore falls within the definition of terrorism and as such we request that the subject is examined under schedule 7.”

Harding reveals in his upcoming book that it was the UK domestic spying operation MI5 that initiated the detaining of Miranda. He writes, “MI5 tried to conceal its role in the affair, telling the police at Heathrow in a briefing: ‘Please do not make any reference to espionage activity. It is vital that MIRANDA is not aware of the reason for this ports stop.’”

Senior government and intelligence officials have aired constant calls for the Guardian to be prosecuted. To this end, Rusbridger was subjected to an intensive and hostile interrogation by Parliament’s Home Affairs Select Committee in December.

The Metropolitan Police are currently investigating the material seized from Miranda and are attempting to establish if the newspaper can be prosecuted under section 58(a) of the Terrorism Act—which involves eliciting, publishing or communicating information about members of the armed forces.
http://www.wsws.org/en/articles/2014.../guar-f04.html





News Corp. Paper Made 6,813 Phone-Hacking Calls in 2 Years
Jeremy Hodges

News Corp. (NWSA) journalists and a private investigator employed by the company’s News of the World tabloid made 6,813 calls targeting 282 voice mails during 2005 and 2006, according to billing data analyzed by police.

More than 4,700 of the calls were made from the newspaper’s own phones by journalists, Detective Constable Richard Fitzgerald said at the London phone-hacking trial today. Convicted private eye Glenn Mulcaire attempted to hack 87 voice mails a total of 1,450 times in the same period, he said.

Former News of the World editors Rebekah Brooks and Andy Coulson are two of seven people at the trial charged with a range of offenses related to hacking, bribing public officials and destruction of evidence. The prosecution’s case is nearing its end after more than three months of argument.

Clive Goodman, the tabloid’s ex-royal reporter, who is also on trial for bribery, targeted 14 voice-mail numbers in the 12 months to August 2006 from his home phone number, Fitzgerald said.

Rupert Murdoch, chairman of New York-based News Corp., closed the News of the World in July 2011 after the discovery that the newspaper accessed the voice mails of a murdered schoolgirl years earlier. Mulcaire, who was convicted of phone hacking in 2007, pleaded guilty to more interception charges last year.

Trial Verdict

The trial is expected to last until mid-May when the jury will retire to consider its verdict, Judge John Saunders said today.

“This is an important case for all concerned,” Saunders said to the jury. “We have to get through to the end, you and I.”

Notes from a January 2010 meeting between News Corp. U.K. executives and lawyers revealed the wrangling over a civil settlement with celebrity publicist Max Clifford, Andrew Edis, a prosecution lawyer said today.

Brooks discussed the possibility of Clifford being paid 200,000 pounds ($326,260) a year to represent News Corp.’s best-selling Sun daily tabloid in an effort to get him to drop the civil hacking claim, Edis said reading from the notes.

Clifford was, at the time, attempting to force Mulcaire to reveal who he dealt with at News Corp. through a civil court order.

Clifford Deal

“You have to think about what is worse,” one executive said in the 2010 meeting, according to the notes read out by Edis. “Her doing a deal with Max, which will be perceived as a cover-up, or indemnifying Mulcaire so he doesn’t say anything about” the company.

Clifford won the court order to force Mulcaire to reveal who instructed him to intercept the voice mail of Nicola Phillips, who worked for the publicist, and to say what information he obtained. Mulcaire appealed the decision all the way to the U.K. Supreme Court, losing in July 2012.

Brooks said it would “look terrible” if the company was seen to be “buying off” Clifford, according to the notes of the meeting.

A settlement was reached with Clifford, according to correspondence between News Corp. and the publicist’s lawyers, without an amount being specified.
http://www.bloomberg.com/news/2014-0...n-2-years.html
















Until next week,

- js.



















Current Week In Review





Recent WiRs -

February 1st, January 25th, January 18th, January 11th,

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black