P2P-Zone  


Peer to Peer The 3rd millennium technology!

Old 12-02-14, 08:18 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - February 15th, '14

Since 2002

"FCC chairman Tom Wheeler told me, 'I do not intend to be sitting in the chairman's seat when a major cyber attack occurs, having done nothing,'" – Admiral James Barnett-USN, retired






































February 15th, 2014




De La Soul to Make Entire Catalog Available for Free

Pioneering hip-hop group also releasing flood of new music
Jason Newman

In honor of next month's 25th anniversary of their debut album 3 Feet High and Rising, De La Soul are making their entire catalog available for free download for 25 hours on the group's website. The download bonanza will run from Friday, February 14th at 11 a.m. EST until Saturday at noon.

"It's about allowing our fans who have been looking and trying to get a hold of our music to have access to it," De La Soul member Posdnuos tells Rolling Stone. "It's been too long where our fans haven't had access to everything. This is our way of showing them how much we love them."

The same thing that made 3 Feet High and other De La albums so influential — their creative, if not fully licensed, use of a myriad of samples — has also prevented the group's work from appearing on many digital platforms. "It's been a trying journey," admits Posdnuos. "We've been blessed to be in the Library of Congress, but we can't even have our music on iTunes. We've been working very hard to get that solved." The rapper points to frequent personnel changes at record labels and hazy language in early contracts that have led to long delays in properly clearing the group's catalog.

The release of the group's catalog is the first of numerous upcoming projects. In a few weeks, they'll post new songs to their site, with You're Welcome, their first album since 2004's The Grind Date, expected to be released before summer. Next month will also see the release of Preemium Soul on the Rocks, a six-song EP with three beats each from DJ Premier and Pete Rock. The group is also planning a visit to Detroit to work on an unreleased beat from J Dilla, the prolific producer who passed away in 2006. "Dilla was the Tupac of producers," says Posdnuos. "He has so many unreleased things that no one has heard. His family knows how vital and important an ingredient his music was to our work."

Asked about the status of You're Welcome, an album originally scheduled for release last year, Posdnuos says that it's "coming along amazingly," but points to self-criticism in its delay. "We have tons of music, but we're our own worst critics," admits the rapper. "Certain groups have too many 'yes men.' In our group, we have too many 'no men.' When we look back on some of the stuff we have, we're like, 'Yo, we need to just put this out.' The album is still called You're Welcome, but we also have this whole other album that we're working on that…Wooo, I wish I could talk about it."

Twenty-five years into their career, the group is ready, if somewhat cautiously, to adopt the more-is-more release schedule of its younger peers. "We're just getting in the mode of constantly giving people new music," says Posdnuos. "I'll be the first to say that not everyone can do it. You can put out a new mixtape every week, but it can dilute what you're putting out because you haven't had enough time to see what's going on with your life to write something from a different angle. With us, we've sat a long time without releasing an album. It's high time we start releasing a bunch of stuff because it's there."
http://www.rollingstone.com/music/ne...-free-20140213





BitTorrent Sync: The NSA-Resistant File Sharing Service You Might Have Missed
Alyssa Hertig

BitTorrent Inc. is shifting the emphasis of its business to BitTorrent Sync, a transformative file-sharing service that boasts NSA resistance.

Last year, Belarusian Konstantin Lissounov threw together a crude version of Sync at a BitTorrent hackathon. It allowed him to “quickly and easily send encrypted photos of his three children across dodgy Eastern European network lines to the rest of his family.” Now, the peer-to-peer file synchronization tool boasts two million users a month and is developing into BitTorrent’s primary product. Wired shines some light on the motivation for the shift:

A big part of the commercial opportunity for the tool, BitTorrent executives believe, lies in the reality that large corporations are aggressively reining in data following Snowden’s revelations.

Like Dropbox, BitTorrent Sync enables easy transfer of music, documents, and other files. But Sync’s decentralized structure distinguishes it. Sync replaces data-storage centers, which the NSA can easily tap, with a peer-to-peer network. As with the BitTorrent protocol, users share files directly, from one device to another. This leaves no opportunity for an agency like the NSA to harvest bulk data from a central server, because there is no central server to penetrate. This method of file-sharing is somewhat less convenient because, Wired explains, “in order to synchronize files across multiple systems, all must be online at the same time.” But CEO Eric Klinker believes that the pros outweigh the cons for many consumers.
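To make the contrast with a centralized service concrete, here is a minimal, hypothetical sketch of a direct device-to-device transfer over a plain TCP socket. It illustrates only the general "no central server" idea described above, not BitTorrent Sync's actual protocol (which adds encryption, peer discovery and synchronization); the addresses, port and file names are invented.

# Illustrative sketch only: a direct device-to-device transfer with no
# intermediary server. NOT BitTorrent Sync's protocol; the addresses and
# file names below are hypothetical.
import socket

CHUNK = 64 * 1024  # bytes per read/write

def send_file(path, host, port):
    """Push a file straight to another device over TCP."""
    with socket.create_connection((host, port)) as conn, open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            conn.sendall(chunk)

def receive_file(path, port):
    """Accept one incoming transfer and write it to disk."""
    with socket.create_server(("", port)) as srv:
        conn, _addr = srv.accept()
        with conn, open(path, "wb") as f:
            while chunk := conn.recv(CHUNK):
                f.write(chunk)

# Device A: receive_file("photos.zip", 9000)
# Device B: send_file("photos.zip", "device-a.local", 9000)

Both devices have to be reachable at the same time, which is exactly the convenience trade-off Wired describes.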

Sync has also been used as a platform for other exciting projects. Wired reports:

Two open source programmers, one in Texas and one in South Africa, have launched vole.cc, a distributed social network built on Sync. Last month, an engineer who works for Harvard University unveiled SyncNet, a parallel version of the world wide web that runs on Sync.

Decentralized technologies are stirring a productive excitement. Bitcoin, the cryptocurrency, similarly relies on a peer-to-peer protocol. Projects like BitCloud, which aims to “decentralize the internet,” are popping up. The sharing economy is nurturing disruptive technologies that grant increased privacy, cheaper access, and a decentralized protocol. The “Dropbox killer” is embedded in that trend.
http://reason.com/blog/2014/02/12/a-...ng-service-you





With Porn Filters Going Oh So Well, UK Roars Ahead In Expanding Them To Include 'Extremist' Content
Karl Bode

The UK government's futile and ham-fisted attempts to purge the Internet of all of its rough edges and naughty bits are about to see international escalation. The country is only just kicking off its campaign to impose porn filters that not only often don't work, but have also managed to accidentally block numerous entirely legal and useful websites, including technology news sites like Slashdot, digital rights groups like the EFF, rape counseling websites, and more. David Cameron's government has long stated that it wants this filtering to eventually extend to websites deemed "extremist" by the government, and it appears that new proposals being drafted hope to make that a reality sooner rather than later.

Just as child porn is used to justify broader porn filters, beheading videos appear to be the magic bullet for scaring people into accepting filters that move well beyond porn. According to the BBC, government-funded operations within the counter-terrorism referral unit will soon order UK broadband ISPs like TalkTalk, Virgin Media and BSkyB to expand filters to include websites declared to be promoting terrorism. As most filter opponents have warned, the slope in the UK is moving beyond slippery and getting downright muddy, thanks in part to new UK Immigration Minister James Brokenshire:

"Terrorist propaganda online has a direct impact on the radicalisation of individuals and we work closely with internet service providers (ISPs) to remove terrorist material hosted in the UK or overseas," said Brokenshire. "Through proposals from the Extremism Taskforce announced by the Prime Minister in November, we will look to further restrict access to material that is hosted overseas - but illegal under UK law - and help identify other harmful content to be included in family-friendly filters," he added.

In other words, because of pesky things like the Constitution in the United States, and instead of just using existing, vast international resources to prosecute criminals and terrorists, we're going to be expanding broken ISP filters against the advice of pretty much everybody. Granted, what is deemed "extremist" will likely be entirely arbitrary, and as we've seen with the porn filters, there's probably no limit to the number of entirely legal and legitimate websites UK citizens will find suddenly inaccessible.
http://www.techdirt.com/articles/201...websites.shtml





George Brandis Signals Government Crackdown On Online Piracy
Matthew Knott

Attorney-General Senator George Brandis said the fundamental principles of copyright law - protecting creators' and owners' rights - did not change with the advent of the internet.

The Abbott government is considering a major crackdown on online piracy, including forcing internet service providers to block websites that allow users to illegally stream or download movies, music and television shows.

The federal government is also considering implementing a "graduated response scheme" that could lead to consumers' internet accounts being temporarily suspended if they ignore notifications to stop downloading illegal content.

If implemented, the reforms could see popular file sharing sites such as The Pirate Bay blocked by some internet service providers.

Attorney-General George Brandis flagged the changes in a major speech to the Australian Digital Alliance forum on Friday.

"The government will be considering possible mechanisms to provide a legal incentive for an internet service provider to cooperate with copyright owners in preventing infringement on their systems and networks," Mr Brandis said.

"This may include looking carefully at the merits of a scheme whereby ISPs are required to issue graduated warnings to consumers who are using websites to facilitate piracy. This is a complex reform proposal, and how it is paid for is one of the principal unresolved issues."

He continued: "Another option that some stakeholders have raised with me is to provide the Federal Court with explicit powers to provide for third party injunctions against ISPs, which will ultimately require ISPs to take down websites hosting infringing content."

Such measures would be welcomed by entertainment companies and sections of the artistic community, but are likely to prove controversial among internet users and providers.

Australians are among the most avid users of pirating websites in the world. For example, Australians accounted for 16 per cent of all illegal downloads of television program Breaking Bad.

In his speech Mr Brandis said he stood firmly on the side of content creators in the copyright debate.

"I firmly believe the fundamental principles of copyright law, the protection of rights of creators and owners did not change with the advent of the internet and they will not change with the invention of new technologies."

He described the Copyright Act as "overly long, unnecessarily complex, often comically outdated and all too often, in its administration, pointlessly bureaucratic".
http://www.brisbanetimes.com.au/fede...lixzz2tOkvGUd9





Hyperlinking is Not Copyright Infringement, EU Court Rules
Andy

Does publishing a hyperlink to freely available content amount to an illegal communication to the public and therefore a breach of creator's copyrights under European law? After examining a case referred to it by Sweden's Court of Appeal, the Court of Justice of the European Union has ruled today that no, it does not.

The European Union has been expanding since its creation in the 1950s and is now comprised of 28 member states, each committed to EU law.

One of the key roles of the EU’s Court of Justice is to examine and interpret EU legislation to ensure its uniform application across all of those member states. The Court is also called upon by national courts to clarify finer points of EU law to progress local cases with Europe-wide implications.

One such case, referred to the CJEU by Sweden’s Court of Appeal, is of particular interest to Internet users as it concerns the very mechanism that holds the web together.

The dispute centers on a company called Retriever Sverige AB, an Internet-based subscription service that indexes links to articles that can be found elsewhere online for free.

The problem came when Retriever published links to articles published on a newspaper’s website that were written by Swedish journalists. The company felt that it did not have to compensate the journalists for simply linking to their articles, nor did it believe that embedding them within its site amounted to copyright infringement.

The journalists, on the other hand, felt that by linking to their articles Retriever had “communicated” their works to the public without permission. In the belief they should be paid, the journalists took their case to the Stockholm District Court. They lost their case in 2010 and decided to take the case to appeal. From there the Svea Court of Appeal sought advice from the EU Court.

Today the Court of Justice published its lengthy decision and it’s largely good news for the Internet.

“In the circumstances of this case, it must be observed that making available the works concerned by means of a clickable link, such as that in the main proceedings, does not lead to the works in question being communicated to a new public,” the Court writes.

“The public targeted by the initial communication consisted of all potential visitors to the site concerned, since, given that access to the works on that site was not subject to any restrictive measures, all Internet users could therefore have free access to them,” it adds.

“Therefore, since there is no new public, the authorization of the copyright holders is not required for a communication to the public such as that in the main proceedings.”

However, the ruling also makes it clear that while publishing a link to freely available content does not amount to infringement, there are circumstances where that would not be the case.

“Where a clickable link makes it possible for users of the site on which that link appears to circumvent restrictions put in place by the site on which the protected work appears in order to restrict public access to that work to the latter site’s subscribers only, and the link accordingly constitutes an intervention without which those users would not be able to access the works transmitted, all those users must be deemed to be a new public,” the Court writes.

So, in basic layman’s terms, if content is already freely available after being legally published and isn’t already subject to restrictions such as a subscription or pay wall, linking to or embedding that content does not communicate it to a new audience and is therefore not a breach of EU law.

The decision, which concurs with the opinions of a panel of scholars, appears to be good news for anyone who wants to embed a YouTube video in their blog or Facebook page, but bad news for certain collecting societies who feel that embedding should result in the payment of a licensing fee.
https://torrentfreak.com/hyperlinkin...-rules-140213/





You Can’t Resell Valve Games in Germany – Court
Swapnil Bhartiya

A German court has dismissed a ‘reselling’ case in favour of Valve Software, the maker of Steam. German consumer group Verbraucherzentrale Bundesverband (vzbv) had filed a complaint against Valve because Valve’s EULA (End User Licence Agreement) prohibits users from re-selling their games.

It’s bad news for users, who lose the right to re-sell their digital content. In layman’s terms, if the court’s decision were applied to the physical world, you would not be allowed to re-sell your old car, your couch or anything else at all.

In July 2012 the Court of Justice of the European Union (CJEU) ruled in UsedSoft vs Oracle that users have the right to re-sell downloaded content and that a publisher can’t prevent this via an EULA. US companies are extremely aggressive about ‘ownership’ of content and about taking control away from users. That ruling was one of the reasons vzbv filed a second complaint against Valve: the UsedSoft vs Oracle decision concluded that copyright owners exhaust their exclusive right after the first sale, which allows users to resell digital content.

This ruling was the basis of vzbv’s complaint, but the German regional court apparently believes that the CJEU ruling doesn’t apply to digitally ‘distributed’ games. For now it seems that digitally distributed games in Germany are not covered by ‘exhaustion’.

While companies do try to protect their works from ‘illegal’ downloads or illegal re-selling, they should give customers total ownership of the works they paid for. Given Valve Software’s extremely community-friendly stance, we may expect Valve to give more power to users.

Patha Das contributed to this story
http://www.muktware.com/2014/02/cant...ny-court/20773





What Jobs Will the Robots Take?

Nearly half of American jobs today could be automated in "a decade or two," according to new research. The question is: Which half?
Derek Thompson

It is an invisible force that goes by many names. Computerization. Automation. Artificial intelligence. Technology. Innovation. And, everyone's favorite, ROBOTS.

Whatever name you prefer, some form of it has been stoking progress and killing jobs—from seamstresses to paralegals—for centuries. But this time is different: Nearly half of American jobs today could be automated in "a decade or two," according to a new paper by Carl Benedikt Frey and Michael A. Osborne, discussed recently in The Economist. The question is: Which half?

Another way of posing the same question is: Where do machines work better than people? Tractors are more powerful than farmers. Robotic arms are stronger and more tireless than assembly-line workers. But in the past 30 years, software and robots have thrived at replacing a particular kind of occupation: the average-wage, middle-skill, routine-heavy worker, especially in manufacturing and office admin.

Indeed, Frey and Osborne project that the next wave of computer progress will continue to shred human work where it already has: manufacturing, administrative support, retail, and transportation. Most remaining factory jobs are "likely to diminish over the next decades," they write. Cashiers, counter clerks, and telemarketers are similarly endangered. On the far right side of this graph, you can see the industry breakdown of the 47 percent of jobs they consider at "high risk."

And, for the nitty-gritty breakdown, here's a chart of the ten jobs with a 99-percent likelihood of being replaced by machines and software. They are mostly routine-based jobs (telemarketing, sewing) and work that can be solved by smart algorithms (tax preparation, data entry keyers, and insurance underwriters). At the bottom, I've also listed the dozen jobs they consider least likely to be automated. Health care workers, people entrusted with our safety, and management positions dominate the list.

If you wanted to use this graph as a guide to the future of automation, your upshot would be: Machines are better at rules and routines; people are better at directing and diagnosing. But it doesn't have to stay that way.

The Next Big Thing

Predicting the future typically means extrapolating the past. It often fails to anticipate breakthroughs. But it's precisely those unpredictable breakthroughs in computing that could have the biggest impact on the workforce.

For example, imagine somebody in 2004 forecasting the next ten years in mobile technology. In 2004, three years before the introduction of the iPhone, the best-selling mobile device was the Nokia 2600.

Many extrapolations of phones from the early 2000s were just "the same thing, but smaller." It hasn't turned out that way at all: Smartphones are hardly phones, and they're bigger than the Nokia 2600. If you think wearable technology or the "Internet of Things" seems kind of stupid today, well, fine. But remember that ten years ago, the future of mobile appeared to be a minuscule cordless landline phone with Tetris, and now smartphone sales are about to overtake computer sales. Breakthroughs can be fast.

We might be on the edge of a breakthrough moment in robotics and artificial intelligence. Although the past 30 years have hollowed out the middle, high- and low-skill jobs have actually increased, as if protected from the invading armies of robots by their own moats. Higher-skill workers have been protected by a kind of social-intelligence moat. Computers are historically good at executing routines, but they're bad at finding patterns, communicating with people, and making decisions, which is what managers are paid to do. This is why some people think managers are, for the moment, one of the largest categories immune to the rushing wave of AI.

Meanwhile, lower-skill workers have been protected by the Moravec moat. Hans Moravec was a futurist who pointed out that machine technology mimicked a savant infant: Machines could do long math equations instantly and beat anybody in chess, but they couldn't answer a simple question or walk up a flight of stairs. As a result, menial work done by people without much education (like home health care workers or fast-food attendants) has been spared, too.

But perhaps we've hit an inflection point. As Erik Brynjolfsson and Andrew McAfee pointed out in their book Race Against the Machine (and in their new book The Second Machine Age), robots are finally crossing these moats by moving and thinking like people. Amazon has bought robots to work its warehouses. Narrative Science can write earnings summaries that are indistinguishable from wire reports. We can say to our phones, "I'm lost, help," and our phones can tell us how to get home.

Computers that can drive cars, in particular, were never supposed to happen. Even ten years ago, many engineers said it was impossible. Navigating a crowded street isn't mindlessly routine. It needs a deft combination of spatial awareness, soft focus, and constant anticipation, skills that are quintessentially human. But I don't need to tell you about Google's self-driving cars, because they're one of the most over-covered stories in tech today.

And that's the most remarkable thing: In a decade, the idea of computers driving cars went from impossible to boring.

The Human Half

In the 19th century, new manufacturing technology replaced what was then skilled labor. Somebody writing about the future of innovation then might have said skilled labor is doomed. In the second half of the 20th century, however, software technology took the place of median-salaried office work, which economists like David Autor have called the "hollowing out" of the middle-skilled workforce.

The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing things. Now data analytics and self-driving cars suggest they might be better at pattern recognition and driving. So what are we better at?

If you go back to the two graphs in this piece to locate the safest industries and jobs, they're dominated by managers, health-care workers, and a super-category that encompasses education, media, and community service. One conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for, other humans. In this light, automation doesn't make the world worse. Far from it: It creates new opportunities for human ingenuity.

But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one. As Frey and Osborne write in their conclusion:

While computerization has been historically confined to routine tasks involving explicit rule-based activities, algorithms for big data are now rapidly entering domains reliant upon pattern recognition and can readily substitute for labour in a wide range of non-routine cognitive tasks. In addition, advanced robots are gaining enhanced senses and dexterity, allowing them to perform a broader scope of manual tasks. This is likely to change the nature of work across industries and occupations.

It would be worrying enough if we knew exactly which jobs were next in line for automation. The truth is scarier. We don't really have a clue.
http://www.theatlantic.com/business/...s-take/283239/





Meet Beep, The Chromecast For Your Old Speakers

What do you get when you cross a Chromecast with a Sonos? A less expensive way to stream music across your house.
Taylor Hatmaker

It feels like a new music app launches every week, but there are surprisingly few innovative hardware options that put them to work. Happily, soon a new player will be in the mix—literally.

Meet Beep, a project by two former members of Google's Android team. Beep wants to give your home stereo system the Chromecast treatment, turning old speakers into smart, streaming music-savvy speakers. And while it's not dirt cheap, at $150 it's one of the most affordable options around for anyone who wants to blend hi-fi and Wi-Fi.

Like A Chromecast, But For Speakers

Beep—the name of both the company and its product—looks like a retro volume controller of some kind, but it's a whole lot smarter than that. Much the way a Chromecast plugs into (almost) any TV, the Beep plugs right into any speakers with RCA jacks or aux or optical inputs.

That means it'll work just as well on a cheapo set of bookshelf speakers as on your old-school home hi-fi system. And that's awesome. You control Beep through an app that both indexes your local music collection and lets you "cast" anything playing on Pandora. For now, Pandora is the only streaming partner in its roster, but it's hard to imagine that Spotify, Rdio, Beats and the like won't be next in line come launch time.

Unlike most of the competition, Beep isn't about selling proprietary hardware systems. Sonos, for example, offers an amazing modular smart home speaker system and support for nearly every streaming music app known to man, but its speakers price quite a few folks out of taking the plunge.

Similarly, Apple's AirPlay only plays nice with AirPlay-friendly speakers, which usually aren't cheap. The beauty of Beep is the beauty of the $35 Chromecast—just plug in a gizmo and breathe new life into your existing hardware. The Beep acts as a wireless receiver as well as a volume dial and controller, and you can tap it to instantly resume whatever you were listening to last.

Take My Money

The device takes a lot of cues from Google's Chromecast, and multiple users can even take turns playing living room DJ. Like the Sonos system, Beeps work well in teams. If you have one Beep in the living room and one in the bedroom, you can sync them up to play the same tunes or queue up different sonic experiences in different rooms, all via the Beep app.

The Beep will go on sale for $150 later this fall in two colors: gunmetal and copper. If you're already stoked about the prospect of making your old speakers smart, you can pre-order in advance and save $50.
http://readwrite.com/2014/02/07/beep...ovtn96SURkfKAf





There’s Something Rotten in the State of Online Video Streaming, and the Data is Starting to Emerge
Stacey Higginbotham

Summary: Peering disagreements aren’t fun or consumer-friendly, but they might be the reason consumers’ video streams are suffering. New data purports to show how much of an effect these fights are having on your broadband.

If you’ve been having trouble with your Netflix streams lately, or maybe, like David Raphael, director of engineering for a network security company in Texas, you’re struggling with what appears to be throttled bandwidth on Amazon Web Services, you’re not alone.

It’s an issue I’ve been reporting on for weeks to try to discover the reasons behind what appears to be an extreme drop in broadband throughput for select U.S. internet service providers during prime time. It’s an issue that is complicated and shrouded in secrecy, but as consumer complaints show, it’s becoming increasingly important to the way video is delivered over the internet.

The problem is peering, or how the networks owned and operated by your ISP connect with networks owned and operated by content providers such as Amazon or Netflix, as well as transit providers and content delivery networks. Peering disputes have been occurring for years, but are getting more and more attention as the stakes in delivering online video are raised. The challenge for regulators and consumers is that the world of peering is very insular, and understanding the deals that companies have worked out in the past or are trying to impose on the industry today is next to impossible.

Which is why we need more data. And it’s possible that the Federal Communications Commission has that data — or at least the beginnings of that data. The FCC collects data for a periodic Measuring Broadband America report that was most recently issued in Feb. 2013. In that report the FCC said it would look at data from broadband providers during September 2013 and issue a subsequent report that same year. That hasn’t happened, but the agency is preparing one, likely for late spring. The report measures how fast actual U.S. broadband speeds are relative to the advertised speeds. While the initial report published in 2011 showed that some ISPs were delivering subpar speeds versus their advertised speeds, ISPs have since improved their delivery and FCC rankings. As a result, the report’s goals have shifted to measuring mobile broadband and even caps.

But the FCC’s next report will have what is likely to be a hidden trove of data that paints a damning picture of certain ISPs and their real-world broadband throughput. The data is provided in part by Measurement-Lab, a consortium of organizations including Internet 2, Google, Georgia Tech, Princeton and the Internet Systems Consortium. M-Lab, which gathers broadband performance data and distributes that data to the FCC, has uncovered significant slowdowns in throughput on Comcast, Time Warner Cable and AT&T. Such slowdowns could be indicative of deliberate actions taken at interconnection points by ISPs.

When contacted prior to publishing this story, AT&T didn’t respond to my request for comment, and both Time Warner Cable and Comcast declined to comment. I had originally asked about data from Verizon and CenturyLink, but M-Labs said those companies had data that was more difficult to map.

So what are we looking at in the above chart? It’s showing the median broadband throughput speeds at the listed ISPs. As you can see, certain providers have seen a noticeable decline in throughput. Measurement Lab was created in 2008 in the wake of the discovery that Comcast was blocking BitTorrent packets. Vint Cerf, who is credited as one of the creators of the internet, and Sascha Meinrath of the Open Technology Institute decided to help develop a broadband measurement testing platform that took into account what an end user of an actual web service like Google or Netflix might experience.

The idea was to capture data on traffic management practices by ISPs and test against servers that are not hosted by the ISP. The company gives its data to the FCC as part of the agency’s Measuring Broadband America Report, and provides the data under an open source license to anyone who asks for it.
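As a rough illustration of how a median-throughput view like the one described above might be distilled from such openly licensed measurements, here is a short sketch that groups (month, ISP, throughput) samples and reports the median per ISP per month. The records and field layout are invented for the example and are not M-Lab's actual schema.

# Illustrative only: summarize raw throughput samples into the kind of
# median-per-ISP-per-month view discussed above. Records are invented.
from collections import defaultdict
from statistics import median

records = [
    ("2013-09", "ISP A", 21.4),
    ("2013-09", "ISP A", 18.9),
    ("2013-10", "ISP A", 12.2),
    ("2013-09", "ISP B", 24.8),
    ("2013-10", "ISP B", 25.1),
]

grouped = defaultdict(list)
for month, isp, mbps in records:
    grouped[(isp, month)].append(mbps)

for (isp, month), values in sorted(grouped.items()):
    print(f"{isp} {month}: median {median(values):.1f} Mbps")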

The FCC also uses additional data from SamKnows, a U.K. firm that provides routers to customers around the country and tracks their broadband speed, to produce its report. SamKnows did not respond to requests for comment on this story, and the FCC did not respond to my questions about the M-Lab data. So right now it’s an open question if the upcoming Measuring Broadband Report will have M-Lab’s data incorporated into the overall results, or if, because of the terms under which the FCC gets the M-Lab data, the agency will merely release the data without validating it.

Dueling methodologies

Ben Scott, a senior advisor to the Open Technology Institute at the New America Foundation who is working on the M-Lab data, said he and researchers at M-Lab are exploring new ways to test the data to see if they can “give more clarity about the cause or causes” of the slowdown.

While it does that, it will also have to address why its data is so different from the existing FCC data (a source at the Open Technology Institute explained that the FCC says SamKnows data is not showing the same trends) or even data available from Ookla, which runs the popular Speedtest.net broadband tests. Checks with other companies that monitor broadband networks also don’t show these trends. For a contrast, here’s what Ookla shows for Comcast’s speeds over the same time period as the M-Lab data.

Scott said that the goal behind M-Lab’s tests is to replicate what an average user experiences. That means measuring results not just from a carefully tuned server designed for offering bandwidth tests, but to include some of the many and varied hops that a packet might take in getting from Point A to Point B. Thus, the M-Lab tests include data on throttling, latency and over 100 variables that influence performance.

The servers that act as the end point for the M-Lab tests are in a variety of places such as cloud providers, universities and research institutions, and may connect to the end ISP via any number of different transit or CDN providers. For example, Level 3, Cogent, XO, Voxel, Tata and others own some of the transit networks that M-Lab’s tests traverse. Some of these companies, such as Cogent, have had established peering disputes that have affected traffic on their networks.

It’s at those transit and CDN providers where the packets make those different hops, and that’s where Scott said he and his researchers are focusing.

Ookla, the company behind Speedtest.net, is probably the most popular speed test out there but it also tends to have a few weak points. When you run a speed test using Speedtest.net, the app sends a package of packets to the closest server, which can be hosted at a local ISP or data center where interconnection points are common. There are several areas where the owner of the testing server can “tune” the test so it delivers maximum speeds. From the Ookla wiki:

The Speedtest is a true measurement of HTTP throughput between the web server and the client. Packet size and all other TCP parameters are controlled by the server itself, so there is a certain amount of tuning which can be done to ensure maximum throughput.

Ookla also eliminates the fastest 10 percent and slowest 30 percent of the results it receives before averaging the rest to get a sense of the reported throughput. Critics say the ability to tune servers and use ISP-hosted servers skews the results.
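A minimal sketch of that trimming step, as the Ookla wiki excerpt describes it (discard the fastest 10 percent and the slowest 30 percent of samples, then average what remains), might look like this; the sample values are invented.

# Illustrative only: average throughput samples after dropping the fastest
# 10% and slowest 30%, per the Ookla description quoted above.
def trimmed_average_mbps(samples):
    ordered = sorted(samples, reverse=True)  # fastest first
    n = len(ordered)
    kept = ordered[int(n * 0.10):n - int(n * 0.30)]
    return sum(kept) / len(kept)

# Twenty invented samples (Mbps).
samples = [31, 30, 29, 29, 28, 28, 28, 27, 27, 26,
           26, 25, 24, 22, 19, 15, 9, 6, 4, 2]
print(round(trimmed_average_mbps(samples), 1))

Because the slowest 30 percent of samples are discarded, brief dips of the sort a viewer notices in a video stream contribute little to the reported number.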

The consumer impact is growing, or at least the complaints are

What might look like an esoteric debate over the best way to measure broadband speeds is hiding a real issue for America’s broadband networks. Sources at large content providers believe the M-Lab data shows how ISPs are interfering with the flow of traffic as it reaches their last-mile networks.

So you might get something that looks like this — as I did on Saturday night while watching a show on Amazon (I had a similar experience while watching a Hulu stream the evening before).

While I was seeing my episode of The Good Wife falter at what appeared to be 1.9 Mbps, I was able to measure connection speeds of 28 Mbps to my house using a Speedtest.net test from Ookla. This is exactly the dichotomy that the M-Lab data is showing, and my example is not an isolated one; Comcast users have been complaining for months.
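One rough way to see that dichotomy for yourself is to time a plain HTTP download from the service you are actually using and compare the result with the speed-test figure. A minimal sketch follows; the URL is a placeholder, not a real test file.

# Rough real-world throughput check: time an HTTP download and report Mbps.
# TEST_URL is a placeholder: point it at a large file served by the
# provider whose streaming performance you are questioning.
import time
import urllib.request

TEST_URL = "https://example.com/large-test-file.bin"  # placeholder

def measured_throughput_mbps(url, max_bytes=50 * 1024 * 1024):
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url) as resp:
        while received < max_bytes:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.monotonic() - start
    return (received * 8) / (elapsed * 1_000_000)

print(f"{measured_throughput_mbps(TEST_URL):.1f} Mbps")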

During the summer the CEO of Cogent accused Verizon of throttling traffic Cogent was delivering onto the Verizon network because Cogent wasn’t paying for interconnection. Yesterday, an IT worker in Plano, Texas named David Raphael put up a blog post that accuses Verizon of violating network neutrality because it appears to have admitted to blocking Amazon AWS traffic.

The real fight is over a business model for the internet

While peering disputes are not a network neutrality issue because those disputes are not actually governed by the recent legal decision striking down the Open Internet Order, it is an issue of competition and whether the last-mile ISPs are behaving like a monopoly.

Wednesday’s blog post from Raphael documents a Verizon technician apparently admitting that Verizon is throttling Amazon traffic. That might be a mistaken admission by a tech (as Verizon said in a statement) but the post does a credible job of laying out exactly what many consumers are experiencing and providing traceroute documentation.
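For readers who want to gather the same kind of traceroute documentation, a small sketch like the following will capture the path to a slow-seeming host. It assumes a Unix-like system with the standard traceroute utility installed, and the hostname is a placeholder.

# Illustrative only: log the network path to a host, similar to the
# traceroute documentation mentioned above. Assumes `traceroute` is on PATH;
# the hostname is a placeholder.
import subprocess

def record_path(host, outfile="traceroute_log.txt"):
    result = subprocess.run(
        ["traceroute", "-n", host],   # -n: numeric output, no DNS lookups
        capture_output=True, text=True, check=False,
    )
    with open(outfile, "a") as log:
        log.write(f"--- traceroute to {host} ---\n")
        log.write(result.stdout)
        if result.stderr:
            log.write(result.stderr)

record_path("streaming-host.example.com")  # placeholder hostname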

Verizon’s statement on the post emphasized that it treats all traffic equally before noting that a variety of factors could influence the actual application performance including, “that site’s servers, the way the traffic is routed over the Internet, and other considerations.” For details on the ways the application provider can fail users, Comcast’s head of network engineering provides a much more in-depth post in response to user complaints of poor quality Netflix streams.

ISPs are correct to point out where their control ends and begins. Decisions about server capacity, about whether to buy content delivery services, and about which transit providers to use all have an impact on the ability of content companies to deliver internet content to your TV or computer. Anyone who tries to visit a smaller blog after a post or photo has gone viral has seen those limits in action; those error pages are the result of insufficient server capacity.

But pointing to Amazon, Netflix, Hulu or other internet giants and assuming they aren’t dedicating the resources to serve their customers is a hard sell. In fact, the pressure to build out that infrastructure may actually be behind some of the escalation in user complaints.

Industry watchers who count both ISPs and content companies as customers say that the decision by Netflix to create its own CDN last summer has prompted ISPs to get more aggressive in their peering negotiations, which has led to the consumer complaints. That aggression may come from not wanting to give Netflix — which increasingly competes with many ISPs’ own pay TV services — a “free ride” into the network, or it may be a grab for incremental revenue from a company that ISPs view as making bank off their pipes. Meanwhile, just this week rumors surfaced that Apple is building its own CDN business.

What’s happening is as the traffic on the web has consolidated into a few large players, those players are both a threat to their existing video businesses and a source of revenue for ISPs who control the access to the end consumers. As those players build out their infrastructure, the ISPs are halting them at the edge of their networks with refusals to peer or to peer for pay. The result of that “negotiation” between the two sides can be a slowdown in service as certain CDNs or transit providers are unable to peer directly with an ISP without paying up.

As frustration mounts, intervention seems far away

In conversations with sources at ISPs who are uncomfortable or prohibited from speaking on the record, the feeling is that content providers need to help pay for the upgrades to the last mile network that the rise in overall traffic is causing, as well as frustration that Netflix and others are somehow “getting around paying for transit or CDN services” by building their own systems. ISPs also say they don’t want to have to host half a dozen servers that cache content for the big internet providers with the prospect of more coming as new services grow, citing power and space constraints.

All vehemently deny throttling traffic while pointing out that certain transit providers such as Cogent (every ISP will use Cogent as a scapegoat) are known bad actors and won’t pay to peer directly with ISPs. Unfortunately, ISPs gloss over the real debate, which is whether transit providers, content companies and CDNs should have to pay to peer — that is, pay for the right to deliver all of the traffic that an ISP’s users are demanding — given that the end user has already paid the ISP to deliver the content they asked for.

That is the heart of the debate, and issues such as the lack of broadband competition at the last mile, and the possibility that ISPs with their own pay TV businesses have an interest in blocking competing TV services, just add more complexity. The challenge is proving that such slowdowns are happening, showing where they are happening, and then having a debate about what should be done about them. The data from M-Lab is a start, and if M-Lab can refine that data to deliver proof of ISP wrongdoing, then the FCC should take it into consideration.

So when the Measuring Broadband America report eventually comes out, a lot of people will be looking for the M-Lab data. For right now, other consumers and I are looking for a conversation about broadband quality that so far the agency isn’t having.
http://gigaom.com/2014/02/06/theres-...ing-to-emerge/





ISP Lobby has Already Won Limits on Public Broadband in 20 States

Bills limiting municipal ISPs in Kansas and Utah continue a noble tradition.
Jon Brodkin

It's no secret that private Internet service providers hate when cities and towns decide to enter the telecommunications business themselves. But with private ISPs facing little competition and offering slow speeds for high prices, municipalities occasionally get fed up and decide to build their own broadband networks.

To prevent this assault on their lucrative revenue streams, ISPs have teamed up with friends in state legislatures to pass laws that make it more difficult or impossible for cities and towns to offer broadband service.

Attorney James Baller of the Baller Herbst Law Group has been fighting attempts to restrict municipal broadband projects for years. He's catalogued restrictions placed upon public Internet service in 20 states, and that number could be much higher already if not for the efforts of consumer advocates.

Some state restrictions have been in place for decades. A new wave of state laws was passed in the years after the federal Telecommunications Act of 1996, Baller told Ars. Another wave of proposals came after a US Supreme Court decision in 2004 that said the Telecommunications Act "allows states to prevent municipalities from providing telecommunications services."

Pennsylvania enacted a new law limiting municipal broadband later in 2004, but then the tide began to turn.

"The next year we saw 14 states consider barriers, and we fought most of them off, and then it was two or three a year," Baller told Ars. "We won all the battles for a while until North Carolina in 2011 and South Carolina in 2012, and there were no new ones in 2013." (That North Carolina bill was titled, "An act to protect jobs and investment by regulating local government competition with private business.")

There could be two new states restricting public Internet service in 2014. As we've reported, the cable lobby proposed a bill in Kansas to outlaw municipal broadband service for residents except in "unserved areas," which were defined in such a way that it would be nearly impossible to call any area unserved. After protest, the cable lobby group said it would rewrite the bill to change how it defines unserved areas.

In Utah, a new bill would make it harder for regional fiber networks to expand. That one also drew criticism, and the legislator who proposed it said he'd make some "minor adjustments."

Baller said the Kansas bill is the most extreme one he's seen.

"In its key operative language, it says that municipalities can't provide telecommunications, cable, or broadband services, period, and they can't make their facilities available to private sector entities that would otherwise use them to provide telecommunications, video, or broadband services," he said. "When you think about that combination, what do you have left? You can provide for your own internal needs, there's an exception for that, but that's pretty much it."

Recent laws restricting municipal broadband buildouts, often based on a model devised by the American Legislative Exchange Council, are usually subtler. One tactic is to require public broadband networks to quickly achieve profitability, something that is difficult even for a private entity because of how much initial construction costs. Some laws also force municipalities to impute to themselves costs that private providers would pay, even if the municipality doesn't actually have to pay them.

"Municipalities typically have lower costs than private entities and do not seek the high short-term profits that shareholders and investors expect of private entities," Baller wrote. "As a result, municipalities can sometimes serve areas that private entities shun and can often provide more robust and less expensive services than private entities are willing to offer. Imputed cost requirements—a form of legislatively sanctioned price fixing—have the purpose and effect of driving municipal rates up to the uncompetitive levels that private entities would charge if they were willing to provide the services at issue. Imputing costs is also difficult, time-consuming, inexact, and highly subjective. As a result, imputed cost requirements give opponents of public communications initiatives virtually unlimited opportunities to raise objections that significantly delay and add to the costs of such initiatives."

The Kansas Cable Telecommunications Association argued that "scarce taxpayer dollars should not be used by municipalities to directly compete with private telecom providers." That's an argument Baller has seen time and again.

"These bills are supposedly intended to create fair competition but what they're really intended to do is thwart altogether or significantly delay and make public communications projects as unattractive as possible," he told Ars.

The key to defeating these proposals is to share information with state legislators, Baller said. "They're usually not professional lawmakers, they have jobs," and not much time to study bills before voting on them, he noted. "Often the key is to inform people what the real implications of the laws can be in their state."

Baller and others have worked with community members to help prevent such bills from being passed. Baller helped organize opposition in Kansas from organizations including Google, OnTrac, Alcatel-Lucent, and several utility and telecom consortiums. The proposal in Utah has "just come on our radar screen, so we're evaluating how to respond there," he said.

While the UTOPIA regional fiber consortium that bill targets has been in many ways a financial failure, municipal broadband can succeed. In Chattanooga, Tennessee, a fiber network built by a community-owned electric utility has provided affordable high-speed broadband in a rural area, turned a profit, and forced competitors to upgrade their services. The electric utility in Bristol, Virginia, serves most of its residents and businesses with a fiber to the premises network, and it was praised in the federal government's National Broadband Plan as "a good example of the potential of community broadband in rural America."

Bristol was able to deploy its network before politicians in Virginia placed restrictions on similar projects.

Cox Communications, a cable company that is backing the proposed broadband restrictions in Kansas, told Ars that "approximately 22 other states hav[e] some type of restriction on the use of taxpayer dollars for these kinds of facilities."

Taxpayer dollars can be used to build networks, but Christopher Mitchell, director of the Telecommunications as Commons Initiative at the Institute for Local Self-Reliance, argued that cable company arguments pointing to taxpayer funds are misleading.

"Most networks sell bonds to private investors who are repaid by revenues from the network. No taxpayer dollars," he told Ars. "If anything, taxpayer dollars are better spent by no longer overpaying for service to schools, fire departments, and the like. And the municipal utilities that often operate the network generally pay far more in what is called PILoT—Payments in lieu of taxes—than the private providers do, meaning that even though the prices on muni networks are generally lower than what existed in the market prior to the muni entry, more of the revenue goes to pay for other government services. The result is that municipal networks more often subsidize the general fund rather than the general fund subsidizing a municipal network."

Cox said its "22 other states" statistic is based on internal research and declined to say which states it includes in that figure. As we mentioned, Baller has identified 20 states with such restrictions. Let's take a look at each one (quotes come from Baller's analysis):

A sad state of affairs

• Alabama: Municipal communications services must be self-sustaining, "thus impairing bundling and other common industry marketing practices." Municipalities cannot use "local taxes or other funds to pay for the start-up expenses that any capital-intensive project must pay until the project is constructed and revenues become sufficient to cover ongoing expenses and debt service."

• Arkansas: Only municipalities that operate electric utilities may provide communications services, but they aren't allowed to provide "basic local exchange service," i.e. traditional phone service.

• California: Public entities are generally allowed to provide communications services, but "Community Service Districts" may not if any private entity is willing to do so.

• Colorado: Municipalities must hold a referendum before providing cable, telecommunications, or broadband service, unless the community is unserved.

• Florida: Imposes special tax on municipal telecommunications service and a profitability requirement that makes it difficult to approve capital-intensive communications projects.

• Louisiana: Municipalities must hold referendums before providing service and "impute to themselves various costs that a private provider might pay if it were providing comparable services."

• Michigan: Municipalities must seek bids before providing telecom services and can move forward only if they receive fewer than three qualified bids.

• Minnesota: 65 percent of voters must approve before municipalities can offer local exchange services or operate facilities that support communications services.

• Missouri: Cities and towns can't sell telecom services or lease telecom facilities to private providers "except for services used for internal purposes; services for educational, emergency, and health care uses; and 'Internet-type' services."

• Nebraska: Public broadband services are generally prohibited except when provided by power utilities. However, "public power utilities are permanently prohibited from providing such services on a retail basis, and they can sell or lease dark fiber on a wholesale basis only under severely limited conditions."

• Nevada: Municipalities with at least 25,000 residents and counties with at least 50,000 residents may not provide telecommunications services.

• North Carolina: "Numerous" requirements make it impractical to provide public communications services. "For example, public entities must comply with unspecified legal requirements, impute phantom costs into their rates, conduct a referendum before providing service, forego popular financing mechanisms, refrain from using typical industry pricing mechanisms, and make their commercially sensitive information available to their incumbent competitors."

• Pennsylvania: Municipalities cannot sell broadband services if a "local telephone company" already provides broadband, even if the local telephone company charges outrageously high prices or offers poor quality service.

• South Carolina: The state "requires governmental providers to comply with all legal requirements that would apply to private service providers, to impute phantom costs into their prices, including funds contributed to stimulus projects, taxes that unspecified private entities would incur, and other unspecified costs."

• Tennessee: Municipalities that own electric utilities may provide telecom services "upon complying with various public disclosure, hearing, voting, and other requirements that a private provider would not have to meet. Municipalities that do not operate electric utilities can provide services only in 'historically unserved areas,' and only through joint ventures with the private sector."

• Texas: The state "prohibits municipalities and municipal electric utilities from offering telecommunications services to the public either directly or indirectly through a private telecommunications provider."

• Utah: Various procedural and accounting requirements imposed on municipalities would be "impossible for any provider of retail services to meet, whether public or private." Municipal providers that offer services at wholesale rather than retail are exempt from some of the requirements, "but experience has shown that a forced wholesale-only model is extremely difficult, or in some cases, impossible to make successful."

• Virginia: Municipal electric utilities can offer phone and Internet services "provided that they do not subsidize services, that they impute private-sector costs into their rates, that they do not charge rates lower than the incumbents, and that [they] comply with numerous procedural, financing, reporting and other requirements that do not apply to the private sector." Other requirements make it nearly impossible for municipalities to offer cable service, except in Bristol, which was grandfathered.

• Washington: The state "authorizes some municipalities to provide communications services but prohibits public utility districts from providing communications services directly to customers."

• Wisconsin: Cities and towns must "conduct a feasibility study and hold a public hearing prior to providing telecom, cable, or Internet services." Additionally, the state "prohibits 'subsidization' of most cable and telecom services and prescribes minimum prices for telecommunications services."

http://arstechnica.com/tech-policy/2...-in-20-states/





Industry and Congress Await the F.C.C. Chairman’s Next Moves on Internet Rules
Edward Wyatt

In his first 100 days as the chairman of the Federal Communications Commission, Tom Wheeler persuaded mobile phone companies to agree on rules about unlocking consumers’ phones, cemented an effort to increase the reliability of calls to 911, proposed tests to do away with old-fashioned telephone networks and freed $2 billion to connect schools and libraries to the Internet.

Those were the easy tasks. In the coming days, the telecommunications, media and Internet industries will be watching to see how Mr. Wheeler responds to last month’s federal appeals court decision that invalidated the rules created by the F.C.C. in 2011 to maintain an open Internet.

Mr. Wheeler has said that he views the decision, which many people saw as a setback for the agency, as an opportunity. He contends he can use it to assert the commission’s broad legal authority to enforce equality and access throughout the networks on which Internet traffic travels — a concept known as net neutrality.

Stressing the depth of his conviction, Mr. Wheeler answered a reporter’s question at a recent news conference about how the F.C.C. would react by pounding the lectern, emphasizing each word: “We will preserve and protect the open Internet.”

An open Internet means that the companies controlling the network through which digital traffic travels cannot determine who gets access to the network. An Internet service provider could not charge more for delivering certain kinds of content, say a movie or breaking news, although the appeals court decision makes those kinds of practices possible.

Republican members of Congress have warned Mr. Wheeler that the federal court has twice told the F.C.C. it does not have that authority. In a pointed statement after the appeals court threw out the F.C.C.’s rules, Senator John Thune, a South Dakota Republican, reminded Mr. Wheeler that he had promised during confirmation that he would return to Congress “for more direction before attempting another iteration of network neutrality rules.”

But in an interview Friday, Mr. Wheeler asserted that he was under no such obligation. “What I said was if the Open Internet Order was thrown out by the court, of course I would talk to Congress. But the Open Internet Order was not thrown out by the court,” he said. “In fact, the court affirmed our authority.”

How Mr. Wheeler navigates the issue represents a critical stage in his short tenure. Members of Congress on both sides of the aisle will be watching closely, largely because they are staking out positions for a rewriting of the laws governing the nation’s communications systems, which were last updated in 1996.

“His goal is to determine how the agency as an institution will develop and evolve in a more advanced technological arena,” said Phil Weiser, the dean of the University of Colorado law school.

If he feels pressure, Mr. Wheeler, a former businessman in both the cable and Internet industries, is not showing it. “These are issues I’ve been living with for a lifetime,” he said Friday. “My job is to be here representing American consumers.”

He has assembled a team of experienced telecom hands to guide him. Among them are Philip Verveer, the politically connected lawyer who headed a federal effort to prosecute AT&T that led to the breakup of the Bell phone monopoly; Gigi B. Sohn, former president of the advocacy group Public Knowledge; and Ruth Milkman, who has held senior positions in several F.C.C. departments.

“He has taken people with lots of different real-world experiences and perspective, and he has been clear about his goals,” said Karen Kornbluh, a telecommunications specialist who was a candidate for the F.C.C.’s top post.

Mr. Wheeler’s first 100 days have not proceeded without missteps — most notably, creating a firestorm with the impression that the F.C.C. was about to allow cellphone calls to be made on airplanes.

(The F.C.C. did say there remained no engineering reasons to maintain the prohibition, but whether to keep it in place is a decision for the Federal Aviation Administration.)

“I think, frankly, that I handled it clumsily in that I didn’t use terms that were not regulatory-ese,” Mr. Wheeler said. “I learned that the minute you put something on an agenda you need to be able to explain it in something other than regulator jargon.”

In fact, Mr. Wheeler rarely resorts to jargon; a 67-year-old Midwesterner, he seems to have a wry aphorism for every occasion.

In three months he has reminded listeners that this is “not my first rodeo” and that as chairman he did not intend to “sit around and suck eggs.” When a public interest group delivered a petition with more than one million signatures urging the F.C.C. to protect the open Internet, Mr. Wheeler said — without irony — “That’s boffo!”

But he has yet to speak plainly about his plans to overcome the net neutrality decision. Critics say that in doing so he has hidden just how much power the F.C.C. had gained from the decision.

In the case, Verizon v. F.C.C., the United States Court of Appeals for the District of Columbia Circuit said that the commission was wrong in how it went about imposing rules on how broadband providers treat Internet traffic. But the decision embraced a view the F.C.C. itself had previously rejected — that the agency’s charge to promote the expansion of broadband gives it sway not only over Internet service providers but also over companies that offer Internet content, like Google, Facebook or Netflix.

“It gave the F.C.C. a lot more power to do anything it wants to a lot of Internet companies,” said Berin Szoka, a founder of TechFreedom, which promotes digital rights and privacy. “It means three unchecked bureaucrats at the F.C.C.,” the number required for a majority on the five-member commission, “get to regulate the Internet however they want without any oversight.”

Robert M. McDowell, a former F.C.C. commissioner who is now a visiting fellow at the Hudson Institute, said the decision “was clearly written in a way to give the F.C.C. the authority to do something.”

However, he added, “The court left open what that something is.”
http://www.nytimes.com/2014/02/10/bu...net-rules.html





This is What a Competitive Broadband Market Looks Like
Brian Fung

Google Fiber is a Lamborghini priced like a Camry, according to Blair Levin. Now even local businesses are competing on that basis. (Guzmán Lozano)

When Google said it was going to bring its high-speed fiber optic service to Austin, it probably didn't expect to touch off a race to switch on the cheapest, fastest Internet service around. But within a year of announcing the move, AT&T followed suit. And now a third company has beaten them both.

Grande Communications, a 10-year-old provider based a half hour away in San Marcos, Tex., is rolling out full gigabit fiber to seven neighborhoods in west Austin next week. Gigabit service customers will benefit from speeds up to 100 times the national average. The company's service won't require a contract, doesn't impose data caps and vows to obey net neutrality principles. At $65 a month, it'll be more affordable than either Google or AT&T's offerings — and it'll come with fewer strings attached.

By the time Grande unveils the faster service, the larger businesses will still be working to finish their infrastructure rollout. Even though its gigabit service isn't ready yet, AT&T is encouraging Austinites to commit early for speeds of up to 300 Mbps (still pretty zippy). Those GigaPower subscribers will be automatically upgraded to gigabit speeds once AT&T has completed its construction later this year. It'll cost $99 a month for plain, vanilla service, or $70 a month if you agree to let AT&T monitor your Web behavior so it can send you targeted advertising.

Meanwhile, Google hasn't announced a price for Austin. But judging from its other rollouts in Kansas City, Kan., and Provo, Utah, gigabit service is expected to cost $70 a month as well, on top of a one-time $30 construction fee.

Grande's entry suggests it isn't only large, national businesses that can compete when it comes to offering high-speed broadband. Austin is fast becoming the site of an arms race among broadband providers at a time when many U.S. communities are dominated by one or perhaps two companies. But there's a good reason for that: The city is already known for its forward thinking. Thanks in part to conferences like SXSW, university students and big health-care centers, Austin has become "a mecca for creative and entrepreneurial people," according to Google. That tech-savvy culture makes the high cost of investing in Austin worth it to Internet providers, despite having to lay down all-new fiber themselves.

As the second city to get Google Fiber, Austin has also become a wake-up call for companies fearful of being threatened by the search giant. When I spoke with Blair Levin, a telecom lawyer who was instrumental in drafting the government's national broadband plan, in September, he said that some initially thought Google Fiber was going to be a premium product. They were wrong.

"What Google did instead was say, 'We're going to build you a Lamborghini, but price it at the same price as a Camry,'" said Levin. "And that's what's so disruptive about it."
http://www.washingtonpost.com/blogs/...et-looks-like/





Google Working On 10 Gigabit Internet Speeds

Project to develop 'next generation' of the Internet is part of Google's broader obsession with speed, CFO says
Alistair Barr

Google is working on technology that will provide data transfer speeds over the Internet that are many times faster than its current Google Fiber service in Kansas City, an executive at the online search giant said on Wednesday.

Google Fiber offers data transfer speeds of 1 gigabit per second currently. But the company is already working on speeds of 10 gigabits per second, Chief Financial Officer Patrick Pichette said during the Goldman Sachs Technology and Internet conference.

Pichette called this the next generation of the Internet and said it was part of Google's broader, long-term obsession with speed.

Faster speeds will increase the use of software as a service because users will be able to trust that critical applications that are data intensive will run smoothly over the Internet, he explained.

"That's where the world is going. It's going to happen," Pichette said. It may happen over a decade, but "why wouldn't we make it available in three years? That's what we're working on. There's no need to wait," he added.

Google is not the only one working on this. Last year, researchers in the U.K. announced that they achieved data transmission speeds of 10 gigabits per second using "li-fi," a wireless Internet connectivity technology that uses light.

Pichette has experience in this area. From early 2001 until July 2008, he was an executive at Bell Canada, which offers a fast, fiber optic Internet service to homes in that country.

Google Fiber is currently available in Kansas City, but Google has said it is bringing the service to Austin, Texas and Pichette told analysts last year that the project is not a hobby for the company.

On Wednesday he was asked whether Google Fiber will be coming to more cities. "Stay tuned," Pichette answered.
http://www.usatoday.com/story/tech/2...peeds/5421709/





Comcast Agrees to Buy Time Warner Cable for $45.2 Billion
Alex Sherman and Jeffrey McCracken

Comcast Corp. agreed to acquire Time Warner Cable Inc. for $45.2 billion, combining the two largest U.S. cable companies in an all-stock transaction.

Investors of New York-based Time Warner Cable will receive 2.875 Comcast shares for each of their shares, the companies said in a joint statement today. The deal values each Time Warner Cable share at $158.82, or 17 percent more than its close yesterday. The transaction, subject to approval by stockholders and regulators, is expected to be completed by the end of 2014.

Time Warner Cable shares jumped 11 percent to $150.65 in early U.S. trading. Comcast, based in Philadelphia, slipped 0.4 percent to $55.

Comcast Chief Executive Officer Brian Roberts will extend his lead in the U.S. cable-TV market after trouncing John Malone-backed Charter Communications Inc., which had courted Time Warner Cable since June. Holding out for a better offer than Charter’s $132.50-a-share bid allowed Time Warner Cable to deliver an almost 70 percent gain for shareholders since the end of May.

“This leaves Comcast as the sole king of the cable hill, with John Malone and Charter hitting a brick wall in their hopes of becoming a close number-two,” Richard Greenfield, an analyst with BTIG LLC, said by e-mail. “This is a game changer for Comcast.”
Share Buyback

Time Warner Cable shareholders will own about 23 percent of Comcast’s common stock. The deal will generate savings of about $1.5 billion and increase Comcast’s free cash flow per share, according to the statement. Comcast plans to buy back an additional $10 billion of its shares.

John Demming, a Comcast spokesman, said there is no breakup fee on the transaction.

Charter is unlikely to match Comcast’s bid and is willing to study any assets Comcast would sell, said a person familiar with the matter, who asked not to be identified because the negotiations were private. Comcast will volunteer to divest about 3 million subscribers of the combined company to keep its market share below 30 percent and is willing to sell them to Stamford, Connecticut-based Charter, another person said.

Bargaining Power

“Charter has always maintained that our greatest opportunity to create value for our shareholders is by executing our current business plan, and that we will continue to be disciplined in this and any other M&A activity we pursue,” the company said in a statement.

Buying the second-largest U.S. cable-TV company brings Comcast more than 11 million residential subscribers. It also gives Comcast access to the New York City cable market and brings it more bargaining power with content providers, Bill Smead, chief investment officer at Smead Capital Management, said in an e-mailed reply to questions.

“This is definitely a bet on a positive future for high-speed access, cable and other services in an economic recovery,” said Smead, whose fund owns Comcast shares.

The Comcast-Time Warner agreement caught Charter by surprise, people familiar with the matter said. Comcast and Charter had been negotiating an asset sale after a potential Charter acquisition of Time Warner Cable, according to the people.

Comcast’s Demands

Those talks broke down last week, culminating in a meeting where Comcast Chief Financial Officer Michael Angelakis stormed out and threatened to do a deal for Time Warner Cable without Charter’s help, the people said.

Comcast pressed Charter to divest more assets, including Time Warner Cable’s Los Angeles regional sports networks, beyond the New England, North Carolina and New York systems initially offered, one of the people said. It also wanted a say in how Charter handled its proxy fight with Time Warner Cable, the person said.

Comcast also didn’t want to commit a lot of cash to a deal, preferring to do an all-stock transaction, which Charter disagreed with, another person said.

The Comcast acquisition values Time Warner Cable at about $69 billion including net debt, or 8.3 times its estimated 2014 earnings before interest, taxes, depreciation and amortization, according to data compiled by Bloomberg. North American cable and satellite companies trade at an average multiple of 9 times on that basis, the data show.
‘Ridiculous Lowball’

In its counterproposal to Charter, Time Warner Cable had asked for $160 a share. Time Warner Cable Chief Executive Officer Robert Marcus would prefer to work with Comcast CEO Roberts rather than with Charter’s Malone, a person with direct knowledge of the matter said in November.

“The Comcast bid makes the Time Warner board look smart for telling Charter its offer was a ridiculous lowball,” said Erik Gordon, a business professor at the University of Michigan.

Comcast has made $65.6 billion of acquisitions over the past 10 years, according to data compiled by Bloomberg. It acquired the remainder of NBCUniversal from General Electric Co. for $16.7 billion in March, following through on the cable company’s purchase of a controlling stake in 2011.

A tie-up between Comcast and Time Warner Cable would face tough scrutiny from the Federal Communications Commission, Craig Moffett, an analyst at MoffettNathanson LLC, said in an interview in January. The merged company would account for almost three-quarters of the cable industry, according to the National Cable Television Association.

Last month, Time Warner Cable announced fourth-quarter profit that beat estimates and said it will add 1 million residential customers in the next three years. It lost 217,000 residential video subscribers in the fourth quarter, hurt by competition from AT&T Inc., Verizon Communications Inc. and streaming services such as Netflix Inc. The larger Comcast added 43,000 television customers in the same period.

Comcast was advised by JPMorgan Chase & Co., Paul J. Taubman and Barclays Plc. Time Warner Cable’s advisers are Morgan Stanley, Allen & Co., Citigroup Inc. and Centerview Partners LLC.
http://www.bloomberg.com/news/2014-0...ner-cable.html





Change Your Passwords: Comcast Hushes, Minimizes Serious Hack

Summary: Opinion: Comcast took a page from Snapchat's playbook to hush and downplay NullCrew FTS' successful hack on dozens of Comcast's servers — from an unpatched, easy-to-fix vulnerability dated December 2013 — which most likely exposed customer data.
Violet Blue

Are you a Comcast customer? Please change your password.

On February 6, NullCrew FTS hacked into at least 34 of Comcast's servers and published a list of the company's mail servers and a link to the root file with the vulnerability it used to penetrate the system on Pastebin.

Comcast, the largest internet service provider in the United States, ignored news of the serious breach in press and media for over 24 hours — only when the Pastebin page was removed did the company issue a statement, and even then, it only spoke to a sympathetic B2B outlet.

During that 24 hours, Comcast stayed silent, and the veritable "keys to the kingdom" sat out in the open internet, ripe for the taking by any malicious entity with a little know-how around mail servers and selling or exploiting customer data.

Comcast customers have not been told to reset their passwords. But they should.

Once NullCrew FTS had openly hacked at least 34 Comcast mail servers and the recipe was publicly posted, the servers began to take a beating. Customers in Comcast's janky, hard-to-find, 1996-style forums knew something was wrong, and forum posts reflected the slowness, the up and down servers, and the eventual crashing.

The telecom giant ignored press requests for comment and released a limited statement on February 7 — to the Comcast-friendly broadband and B2B website Multichannel News.

The day-late statement failed to impress the few who saw it, and was criticized for its minimizing language and weak attempt to suggest that the breach had been unsuccessful.

From Comcast's statement on Multichannel's post "No Evidence That Personal Sub Info Obtained By Mail Server Hack":
Comcast said it is investigating a claim by a hacker group that claims to have broken into a batch of the MSO email servers, but believes that no personal subscriber data was obtained as a result.

"We're aware of the situation and are aggressively investigating it," a Comcast spokesman said. "We take our customers' privacy and security very seriously, and we currently have no evidence to suggest any personal customer information was obtained in this incident."

Not only is there a high probability that customer information was exposed — because direct access was provided to the public for 24 hours — but the vulnerability exploited by the attackers was disclosed and fixed in December 2013.

Just not by Comcast, apparently.

Vulnerability reported December 2013, not patched by Comcast

NullCrew FTS used the unpatched security vulnerability CVE-2013-7091 to open what was essentially an unlocked door, giving anyone access to usernames, passwords, and other sensitive details from Comcast's servers.

NullCrew FTS used a Local File Inclusion (LFI) exploit to gain access to the Zimbra LDAP and MySQL database — which houses the usernames and passwords of Comcast ISP users.
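
For readers unfamiliar with the bug class: a Local File Inclusion flaw generally means a web application builds a filesystem path from a request parameter without confining it, so a crafted parameter can pull in files the server never intended to expose. The Python sketch below is a generic, hypothetical illustration of the defensive check that closes that hole; it is not Zimbra's code and does not reproduce the specifics of CVE-2013-7091.

import os

# Placeholder directory for illustration only, not any real deployment.
TEMPLATE_ROOT = os.path.realpath("/var/www/app/templates")

def load_template(requested: str) -> str:
    # Resolve the client-supplied name against the intended directory, then refuse
    # anything that escapes it, e.g. "../../etc/passwd" or a smuggled absolute path.
    candidate = os.path.realpath(os.path.join(TEMPLATE_ROOT, requested))
    if not candidate.startswith(TEMPLATE_ROOT + os.sep):
        raise ValueError("requested path is outside the template root")
    with open(candidate, encoding="utf-8") as handle:
        return handle.read()

The details of the Zimbra patch differ, but the underlying failure mode is the same: a client-supplied path that the server trusts.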

"Fun Fact: 34 Comcast mail servers are victims to one exploit," tweeted NullCrew FTS.

If you are a Comcast customer, you are at risk: All Comcast internet service includes a master email address.

Even if a customer doesn't use Comcast's Xfinity mail service, every Comcast ISP user has a master email account with which to manage their services, and it is accessible through a "Zimbra" webmail site.

This account is used to access payment information, email settings, user account creation and settings, and any purchases from Comcast's store or among its services.

With access to this master email address, someone can give up to six "household members" access to the Comcast account.

NullCrew taunted Comcast on Twitter, then posted the data on Pastebin and taunted the company a little bit more.

Because there were "no passwords" on the Pastebin, some observers believed — incorrectly — that there was no serious risk for exploitation of sensitive customer information.
NullCrew FTS: 2 — big telecoms: 0

On the first weekend of February 2014, NullCrew FTS took credit for a valid hack against telecom provider Bell Canada.

In the first strike of what looks like it'll be a very successful campaign to cause pain and humiliation to big telecoms, NullCrew FTS accessed and exposed more than 22,000 usernames and passwords, and some credit card numbers belonging to the phone company's small business customers.

Establishing a signature game of cat and mouse with clueless support staff, NullCrew FTS contacted Bell customer support two weeks before its disclosure.

Like Comcast's robotic customer service responses to NullCrew FTS on Twitter, Bell's support staff either didn't know how to report the security incident upstream, had no idea what a hacking event was, or didn't take the threat seriously.

Bell also tried to play fast and loose with its accountability in the security smash and grab; it acknowledged the breach soon after, but blamed it on an Ottawa-based third-party supplier.

However, NullCrew FTS had flagged the company's insecurities in mid-January, publicly posting the warning the hackers had issued to a company support representative about the vulnerabilities.

NullCrew FTS followed up with Bell by posting a Pastebin link on Twitter with unredacted data.

A page from Snapchat's playbook

Just over a month ago, popular social media sharing app Snapchat was the subject of headlines and the target of public scorn when hackers (Gibson Security) posted multiple known exploits after warning the company about its security holes, and having the problems ignored.

Snapchat further attempted — badly — to ignore press and public when the hackers later published details about Snapchat's security holes (some of which still call into question the validity of Snapchat's userbase) and released to the world a few very active Snapchat database exploits.

On Christmas Day 2013, headlines reported: Researchers publish Snapchat code allowing phone number matching after exploit disclosures ignored.

Less than a week later, the database exploits and recipes for access were used maliciously against Snapchat customers when the world read: Predictably, Snapchat user database maliciously exposed.

Snapchat hung its userbase out to dry.

It looks like Comcast has, too.

It's a reprehensible playbook, void of accountability and rife with risk for the only people involved who can't do a damn thing to protect themselves.

I think the situation demands we ask the question: What else isn't Comcast doing?

Perhaps Comcast should change its tagline.
http://www.zdnet.com/change-your-pas...ck-7000026118/





Inside the FCC’s Plan to Take on Hackers
Brian Fung

Until now, fighting hackers has mainly been the domain of law enforcement, intelligence services and the military. But the Federal Communications Commission, an agency better known for approving wireless mergers and regulating phone companies, intends to create a vastly expanded role for itself on cybersecurity, current and former agency officials say.

The exact nature of that role has yet to be decided, the officials said. Options range from helping telecommunications companies implement a major cybersecurity framework released Wednesday by the Obama administration to writing new rules on network reliability. In addition, the commission may lean on a recent federal court decision on net neutrality to pursue other forms of national security-related oversight.

Recent attacks on retailers, banks and other institutions have drawn regulators' attention to the security of servers, networks and industrial systems. In the case of the FCC, cybersecurity means maintaining the integrity of the telecom links between those systems, as well as making sure first-responder communications remain functional in a crisis. At the same time, other technological advances have made regulating some communications companies more difficult. By staking a claim on cybersecurity, the FCC stands to reclaim some of its bureaucratic authority.

The FCC's interest in network security dates back to 2009, when then-chairman Julius Genachowski appointed James Barnett, a retired Navy rear admiral, to head the commission's public safety bureau. Barnett created the FCC's Cybersecurity and Communications Reliability Division and launched three programs in coordination with broadband providers to, among other things, fight botnets and prevent e-mails from being hijacked and rerouted to foreign countries.

But what's coming next under FCC chairman Tom Wheeler will be an even broader effort to secure the nation's communications from online attack, said Barnett in an interview.

"Chairman Wheeler told me, 'I do not intend to be sitting in the chairman's seat when a major cyber attack occurs, having done nothing,'" said Barnett.

Wheeler has been hinting as much for several months. In July, the commission appointed David Bray, a top official from the Office of the Director of National Intelligence, as its chief information officer.

"For those of us who use the Internet to engage in public and personal transactions," Bray wrote in an FCC blog post, "it is a quality assurance concern that our digital communications on the public infrastructure be kept both secure and private."

The staffing surge continued with Adm. David Simpson, whom the FCC named as chief of the public safety bureau in November. Simpson is a former vice director of the Defense Information Systems Agency, a Pentagon unit that builds and maintains communications systems for the White House and armed services.

Then last month, Wheeler created an entirely new position known as the chief counsel for cybersecurity. Filling that post is Clete Johnson, a top Senate Intelligence Committee staffer who played a key role drafting cybersecurity legislation under Sen. Jay Rockefeller (D-W. Va.).

Much of the coming cybersecurity push will be led by Simpson, who will be in charge of expanding the cybersecurity agenda "across all bureaus" of the agency, according to Barnett. As the head of the public safety and homeland security bureau, Simpson has already played a key role in developing trials for advanced 911 services.

But Simpson is also expected to consult heavily with the private sector. Last month, wireless industry representatives met with Simpson to discuss the presidential cybersecurity framework. There's also CSRIC (the Communications Security, Reliability and Interoperability Council), a public-private working group that makes security recommendations to the FCC. CSRIC would be the organization most likely to handle the White House framework, according to an FCC official.

The FCC's plan risks creating tensions with other federal agencies, such as the Department of Homeland Security. But according to Barnett, the FCC's considerable technical expertise on communications will smooth over any conflict.

"Whether it's cable, wireless or wireline, there is a tremendous body of knowledge at the FCC that just does not exist at DHS," said Barnett. "They collaborate with us. We collaborate with them. The FCC is consulted on networks."

The growing focus on cybersecurity at the FCC comes at a time when advances in technology risk eroding some of its power. The country is in the early stages of abandoning its old, copper telephone networks. Phone companies are increasingly moving to new infrastructure based on Internet Protocol. Few regulations have been established in that area, and experts say telecom companies are partly motivated by the prospect of escaping the old rules that governed the copper system. The D.C. Circuit court also recently dealt a blow to the commission when it ruled against the FCC's net neutrality regulations (although some experts believe the agency can recover a great deal of authority through another loophole).

By seizing the mantle of cybersecurity, the FCC has an opportunity to become one of a number of key governmental players in national security. The FCC has not traditionally had a role in IT security, though it has tried on occasion to find one.

"When I was at the White House 10 years ago, you'd be at their event and they'd be trying to pick over these issues to stick a toe in," said Jason Healey, the former director for cyber infrastructure protection in the George W. Bush administration. "It hasn't been easy for them."

Others say there has never been any question of the commission's authority, especially when it comes to protecting critical telecommunications infrastructure.

"If there's anything we have jurisdiction over, it's the security of our networks," said the FCC official.
http://www.washingtonpost.com/blogs/...ke-on-hackers/





Hackers Circulate Thousands of FTP Credentials; New York Times Among Those Hit

A list of compromised FTP credentials is circulating in underground forums
Jeremy Kirk

Hackers are circulating credentials for thousands of FTP sites and appear to have compromised file transfer servers at The New York Times and other organizations, according to a security expert.

The hackers obtained credentials for more than 7,000 FTP sites and have been circulating the list in underground forums, said Alex Holden, chief information security officer for Hold Security, a Wisconsin-based company that monitors cyberattacks.

In some cases, hackers used the credentials to access FTP servers and upload malicious files, including scripts in the PHP programming language. In other instances, they placed files on FTP servers that incorporate malicious links directing people to websites advertising work-at-home schemes and other scams.

An FTP server run by The New York Times was among those affected, and hackers uploaded several files to the server, Holden said.

Eileen Murphy, head of communications for the Times, said via email the company was "taking steps to secure" its network and could not comment further due to an investigation.

UNICEF, another organization whose credentials appear on the list, did not confirm it had been compromised but said it had disabled the FTP application in question, which it said was part of a system no longer in use.

UNICEF has been moving to a "more robust" content management platform and the organization uses third parties to check its infrastructure for vulnerabilities, spokeswoman Sarah Crowe said via email.

"It is therefore very rare for us to witness such a breach," she said.

Not all the credentials on the list are valid but a sampling showed that many of them work, said Holden, whose research credits include discovering large data breaches affecting the retailer Target and software vendor Adobe Systems.

Holden said he did not know the name of the group responsible for the FTP attacks.

The attackers may have obtained the credentials through malware installed on other computers at the affected organizations, he said. The passwords in many cases are complex, suggesting the hackers weren't merely guessing default credentials that had not been changed.

FTP servers are online repositories where people can upload and download files, and they're designed to be accessible remotely via login and password.

The default application for accessing FTP servers is usually a Web browser, which can log into an FTP site automatically if supplied with a link containing the proper credentials. Hackers could therefore embed links in spam emails, for example, and the name of a familiar company might give victims the confidence to trust a link and click on it.
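
For concreteness, such a link simply embeds the login in the standard FTP URL syntax; the host, credentials, and path below are placeholders, not entries from the circulated list:

ftp://username:password@ftp.example.com/uploads/readme.html

A browser or mail client that follows the link authenticates silently, which is why a familiar company name in the hostname makes the lure effective.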

In the case of The New York Times, one of the files uploaded to its FTP server was a .html file, Holden said. That file could be incorporated into a malicious link that could be used in a spam message, he said. If opened, the link would take a person to The New York Times' FTP server but then redirect them to another website advertising a work-at-home scheme.

Users need to be careful about opening links in emails even if they appear to be for legitimate domains, Holden said.

FTP applications can also be used to update files on a Web server, meaning hackers could potentially use the credentials to make changes to a company's website. It's hard to say how many of the FTP sites on the list are connected to Web servers, he said.

Several other companies whose FTP domains appear on the list could not be reached for comment.
http://www.computerworld.com/s/artic...mong_those_hit





Rate-Limiting State

The edge of the Internet is an unruly place
Paul Vixie

By design, the Internet core is stupid, and the edge is smart. This design decision has enabled the Internet's wildcat growth, since without complexity the core can grow at the speed of demand. On the downside, the decision to put all smartness at the edge means we're at the mercy of scale when it comes to the quality of the Internet's aggregate traffic load. Not all device and software builders have the skills—and the quality assurance budgets—that something the size of the Internet deserves. Furthermore, the resiliency of the Internet means that a device or program that gets something importantly wrong about Internet communication stands a pretty good chance of working "well enough" in spite of its failings.

Witness the hundreds of millions of CPE (customer-premises equipment) boxes with literally too much memory for buffering packets. As Jim Gettys and Dave Taht have been demonstrating in recent years, more is not better when it comes to packet memory. Wireless networks in homes and coffee shops and businesses all degrade shockingly when the traffic load increases. Rather than the "fair-share" scheduling we expect, where N network flows will each get roughly 1/Nth of the available bandwidth, network flows end up in quicksand where they each get 1/(N²) of the available bandwidth. This isn't because CPE designers are incompetent; rather, it's because the Internet is a big place with a lot of subtle interactions that depend on every device and software designer having the same—largely undocumented—assumptions.

Witness the endless stream of patches and vulnerability announcements from the vendors of literally every smartphone, laptop, or desktop operating system and application. Bad guys have the time, skills, and motivation to study edge devices for weaknesses, and they are finding as many weaknesses as they need to inject malicious code into our precious devices where they can then copy our data, modify our installed software, spy on us, and steal our identities—113 years of science fiction has not begun to prepare us for how vulnerable we and our livelihoods are, now that everyone is online. Since the adversaries of freedom and privacy now include nation-states, the extreme vulnerability of edge devices and their software is a fresh new universal human-rights problem for the whole world.

Source Address Validation

Nowhere in the basic architecture of the Internet is there a more hideous flaw than in the lack of enforcement of simple SAV (source-address validation) by most gateways. Because the Internet works well enough even without SAV, and because the Internet's roots are in academia where there were no untrusted users or devices, it's safe to say that most gateway makers (for example, wireless routers, DSL modems, and other forms of CPE) will allow most edge devices to emit Internet packets claiming to be from just about anywhere. Worse still, providers of business-grade Internet connections, and operators of Internet hosting data centers and "clouds," are mostly not bothering to turn on SAV toward their customers. Reasons include higher cost of operation (since SAV burns some energy and requires extra training and monitoring), but the big reason why SAV isn't the default is: SAV benefits only other people's customers, not an operator's own customers.

There is no way to audit a network from outside to determine if it practices SAV. Any kind of compliance testing for SAV has to be done by a device that's inside the network whose compliance is in question. That means the same network operator who has no incentive in the first place to deploy SAV at all is the only party who can tell whether SAV is deployed. This does not bode well for a general improvement in SAV conditions, even if bolstered by law or treaty. It could become an insurance and audit requirement in countries where insurance and auditing are common, but as long as most of the world has no reason to care about SAV, it's safe to assume that enough of the Internet's edge will always permit packet-level source-address forgery, so that we had better start learning how to live with it—for all eternity.

While there are some interesting problems in data poisoning made possible by the lack of SAV, by far the most dangerous thing about packet forgery is the way it facilitates DDoS (distributed denial of service). If anybody can emit a packet claiming to be from anybody else, then a modest stream of requests by an attacker, forged to appear to have come from the victim, directed at publicly reachable and massively powerful Internet servers, will cause that victim to drown in responses to requests they never made. Worse, the victim can't trace the attack back to where it entered the network and has no recourse other than to wait for the attack to end, or hire a powerful network-security vendor to absorb the attack so that the victim's other services remain reachable during the attack.
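
As a concrete illustration of what SAV asks of a gateway, here is a minimal sketch in Python with a placeholder customer prefix; real deployments implement the same check in router access lists or CPE firmware (BCP 38-style ingress filtering), not in application code.

import ipaddress

# Placeholder: the prefix(es) delegated to a particular customer port.
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def sav_permits(src_ip: str) -> bool:
    # Forward a packet only if its claimed source address belongs to the customer
    # it arrived from; any other source is presumed forged and dropped.
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

assert sav_permits("203.0.113.57")        # legitimate customer source
assert not sav_permits("198.51.100.10")   # spoofed source, never forwarded

The check itself is trivial; the problem, as noted above, is that the only operator positioned to run it gains nothing for its own customers by doing so.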

Domain Name System Response Rate Limiting

During a wave of attacks a few years ago where massively powerful public DNS (Domain Name System) servers were being used to reflect and amplify some very potent DDoS attacks, Internet researchers Paul Vixie and Vernon Schryver developed a system called DNS RRL (Response Rate Limiting) that allowed the operators of the DNS servers being used for these reflected amplified attacks to deliberately drop the subset of their input request flow that was statistically likely to be attack-related. DNS RRL is not a perfect solution, since it can cause slight delays in a minority of normal (non-attack) transactions during attack conditions. The DNS RRL tradeoff, however, is obviously considered a positive since all modern DNS servers and even a few IPS/IDS (intrusion protection system/intrusion detection system) products now have some form of DNS RRL, and many TLD (top-level domain) DNS servers are running DNS RRL. Operators of powerful Internet servers must all learn and follow Stan Lee's law (as voiced by Spider-Man): "With great power comes great responsibility."

DNS RRL was a domain-specific solution, relying on detailed knowledge of DNS itself. For example, the reason DNS RRL is response rate limiting is that the mere fact of a question's arrival does not tell the rate limiter enough to make a decision as to whether that request is or is not likely to be part of an attack. Given also a prospective response, though, it is possible with high confidence to detect spoofed-source questions and thereby reduce the utility of the DNS server as a reflecting DDoS amplifier, while still providing "good enough" service to non-attack traffic occurring at the same time—even if that non-attack traffic is very similar to the attack.

The economics of information warfare is no different from any other kind of warfare—one seeks to defend at a lower cost than the attacker, and to attack at a lower cost than the defender. DNS RRL did not have to be perfect; it merely had to tip the balance: to make a DNS server less attractive to an attacker than the attacker's alternatives. One important principle of DNS RRL's design is that it makes a DNS server into a DDoS attenuator—it causes not just lack of amplification, but also an actual reduction in traffic volume compared with what an attacker could achieve by sending the packets directly. Just as importantly, this attenuation is not only in the number of bits per second, but also in the number of packets per second. That's important in a world full of complex stateful firewalls where the bottleneck is often in the number of packets, not bits, and processing a small packet costs just as much in terms of firewall capacity as processing a larger packet.

Another important design criterion for DNS RRL is that its running costs are so low as to not be worth measuring. The amount of CPU capacity, memory bandwidth, and memory storage used by DNS RRL is such a small percentage of the overall load on a DNS server that there is no way an attacker can somehow "overflow" a DNS server's RRL capacity in order to make DNS RRL unattractive to that server's operator. Again, war is a form of applied economics, and the design of DNS RRL specifically limits the cost of defense to a fraction of a fraction of the attacker's costs. Whereas DNS achieves its magnificent performance and scalability by being stateless, DNS RRL adds the minimum amount of state to DNS required for preventing reflected amplified attacks, without diminishing DNS's performance.
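
To make the mechanism concrete, here is a minimal sketch of rate-limiting state in the spirit of DNS RRL. It is illustrative Python, not the Vixie/Schryver implementation; the limit, the IPv4-only /24 grouping, and the choice of key are all simplifications.

import time
from collections import defaultdict

RESPONSES_PER_SECOND = 5   # illustrative limit, not a recommended setting

# One token bucket per (client /24 prefix, answer) pair: [tokens, time of last refill].
buckets = defaultdict(lambda: [float(RESPONSES_PER_SECOND), time.monotonic()])

def allow_response(client_ip: str, qname: str, qtype: str) -> bool:
    prefix = ".".join(client_ip.split(".")[:3])   # group requesters by /24, as RRL does
    key = (prefix, qname.lower(), qtype)
    tokens, last = buckets[key]
    now = time.monotonic()
    tokens = min(RESPONSES_PER_SECOND, tokens + (now - last) * RESPONSES_PER_SECOND)
    if tokens < 1.0:
        buckets[key] = [tokens, now]
        return False    # excess: drop, or answer truncated so a real client can retry over TCP
    buckets[key] = [tokens - 1.0, now]
    return True

Production implementations add a refinement the sketch glosses over: a fraction of the excess is answered with the DNS truncation bit set, so a legitimate client whose address is being spoofed can still get an answer by retrying over TCP.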

Current State

To be stateless in the context of network protocols means simply that the responder does not have to remember anything about a requester in between requests. Every request is complete unto itself. For DNS this means a request comes in and a response goes out in one single round-trip from the requester to the responder and back. Optional responder state isn't prohibited—for example, DNS RRL adds some modest state to help differentiate attack from non-attack packets. Requesters can also hold optional state such as RTT (round-trip time) of each candidate server, thus guiding future transactions toward the server that can respond most quickly. In DNS all such state is optional, however, and the protocol itself will work just fine even if nobody on either end retains any state at all.

DNS is an example of a protocol that runs over UDP (User Datagram Protocol), and there are other such protocols. For example, NTP (Network Time Protocol) uses UDP, and each response is of equal or greater size than the request. A true NTP client holds some state, in order to keep track of what time the Internet thinks it is. An attacker, however, need not show an NTP responder any evidence of such state in order to solicit a response. Since NTP is often built into CPE gateways and other edge devices, there are many millions of responders available for DDoS attackers to use as reflectors or as amplifying reflectors.

TCP (Transmission Control Protocol), on the other hand, is stateful. In current designs both the initiator and the responder must remember something about the other side; otherwise, communication is not possible. This statefulness is a mixed blessing. It is burdensome in that it takes several round-trips to establish enough connection state on both sides to make it possible to send a request and receive a response, and then another one-and-a-half round-trips to close down the connection and release all state on both sides. TCP has an initiation period when it is trying to create shared state between the endpoints, during which several SYN-ACK messages can be sent by the responder to the purported initiator of a single SYN message. This means TCP itself can be used as an amplifier of bits and packets, even though the SYN-ACK messages are not sent back to back. With hundreds of millions of TCP responders available, DDoS attackers can easily find all the reflecting amplifying TCP devices needed for any attack on any victim—no matter how capacious or well-defended.
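
The arithmetic behind "amplifying reflector" is worth spelling out; the byte counts below are ballpark placeholders for illustration, not measurements of any particular protocol or attack.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    # Leverage = traffic the reflector sends toward the victim per byte the
    # attacker must send with a forged source address.
    return response_bytes / request_bytes

print(amplification_factor(64, 3000))   # a short UDP request that elicits a multi-kilobyte reply
print(amplification_factor(64, 40))     # an attenuating responder: reflection is a losing trade

This is why the article treats attenuation, in packets as well as bytes, as a first-order goal: once the ratio drops below one, bouncing traffic off a server is strictly worse for the attacker than sending it to the victim directly.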

ICMP (Internet Control Message Protocol) is stateless, in that gateways and responders transmit messages back to initiators in asynchronous response to network conditions and initiator behavior. The popular "ping" and "traceroute" commands rely on the wide availability of ICMP; thus, it's uncommon for firewalls to block ICMP. Every Internet gateway and host supports ICMP in some form, so ICMP-based reflective DDoS attackers can find as many ICMP reflectors as they look for.

The running theme of these observations is that in the absence of SAV, statelessness is bad. Many other UDP-based protocols, including SMB (Server Message Block) and NFS (Network File System), are stateful when used correctly, but, like TCP, are stateless during initial connection startup and can thus be used as DDoS reflectors or amplifying DDoS reflectors depending on the skill level of a DDoS attacker. While the ultimate cause of all this trouble is the permanent lack of universal SAV, the proximate cause is stateless protocols. Clearly, in order to live in a world without SAV, the Internet and every protocol and every system is going to need more state. That state will not come to the Internet core, which will be forever dumb. Rather, the state that must be added to the Internet system in order to cope without SAV has to be added at the edge.

Conclusion

Every reflection-friendly protocol mentioned in this article is going to have to learn rate limiting. This includes the initial TCP three-way handshake, ICMP, and every UDP-based protocol. In rare instances it's possible to limit one's participation in DDoS reflection and/or amplification with a firewall, but most firewalls are either stateless themselves, or their statefulness is so weak that it can be attacked separately. The more common case will be like DNS RRL, where deep knowledge of the protocol is necessary for a correctly engineered rate-limiting solution applicable to the protocol. Engineering economics requires that the cost in CPU, memory bandwidth, and memory storage of any new state added for rate limiting be insignificant compared with an attacker's effort. Attenuation also has to be a first-order goal—we must make it more attractive for attackers to send their packets directly to their victims than to bounce them off a DDoS attenuator.

This effort will require massive investment and many years. It is far more expensive than SAV would be, yet SAV is completely impractical because of its asymmetric incentives. Universal protocol-aware rate limiting (in the style of DNS RRL, but meant for every other presently stateless interaction on the Internet) has the singular advantage of an incentive model where the people who would have to do the work are actually motivated to do the work. This effort is the inevitable cost of the Internet's "dumb core, smart edge" model and Postel's law ("be conservative in what you do, be liberal in what you accept from others").

Reflective and amplified DDoS attacks have steadily risen as the size of the Internet population has grown. The incentives for DDoS improve every time more victims depend on the Internet in new ways, whereas the cost of launching a DDoS attack goes down every time more innovators add more smart devices to the edge of the Internet. There is no way to make SAV common enough to matter, nor is there any way to measure or audit compliance centrally if SAV somehow were miraculously to become an enforceable requirement.

DDoS will continue to increase until the Internet is so congested that the benefit to an attacker of adding one more DDoS reaches the noise level, which means, until all of us including the attackers are drowning in noise. Alternatively, rate-limiting state can be added to every currently stateless protocol, service, and device on the Internet.
http://queue.acm.org/detail.cfm?id=2578510





Internet Trolls Really Are Horrible People

Narcissistic, Machiavellian, psychopathic, and sadistic.
Chris Mooney

In the past few years, the science of Internet trollology has made some strides. Last year, for instance, we learned that by hurling insults and inciting discord in online comment sections, so-called Internet trolls (who are frequently anonymous) have a polarizing effect on audiences, leading to politicization, rather than deeper understanding of scientific topics.

That’s bad, but it’s nothing compared with what a new psychology paper has to say about the personalities of trolls themselves. The research, conducted by Erin Buckels of the University of Manitoba and two colleagues, sought to directly investigate whether people who engage in trolling are characterized by personality traits that fall in the so-called Dark Tetrad: Machiavellianism (willingness to manipulate and deceive others), narcissism (egotism and self-obsession), psychopathy (the lack of remorse and empathy), and sadism (pleasure in the suffering of others).

It is hard to overstate the results: The study found correlations, sometimes quite significant, between these traits and trolling behavior. What’s more, it also found a relationship between all Dark Tetrad traits (except for narcissism) and the overall time that an individual spent, per day, commenting on the Internet.

In the study, trolls were identified in a variety of ways. One was by simply asking survey participants what they “enjoyed doing most” when on online comment sites, offering five options: “debating issues that are important to you,” “chatting with others,” “making new friends,” “trolling others,” and “other.” Here’s how different responses about these Internet commenting preferences matched up with responses to questions designed to identify Dark Tetrad traits:

To be sure, only 5.6 percent of survey respondents actually specified that they enjoyed “trolling.” By contrast, 41.3 percent of Internet users were “non-commenters,” meaning they didn’t like engaging online at all. So trolls are, as has often been suspected, a minority of online commenters, and an even smaller minority of overall Internet users.

The researchers conducted multiple studies, using samples from Amazon’s Mechanical Turk but also of college students, to try to understand why the act of trolling seems to attract this type of personality. They even constructed their own survey instrument, which they dubbed the Global Assessment of Internet Trolling, or GAIT, containing the following items:

I have sent people to shock websites for the lulz.

I like to troll people in forums or the comments section of websites.

I enjoy griefing other players in multiplayer games.

The more beautiful and pure a thing is, the more satisfying it is to corrupt.


Yes, some people actually say they agree with such statements. And again, doing so was correlated with sadism in its various forms, with psychopathy, and with Machiavellianism. Overall, the authors found that the relationship between sadism and trolling was the strongest, and that indeed, sadists appear to troll because they find it pleasurable. “Both trolls and sadists feel sadistic glee at the distress of others,” they wrote. “Sadists just want to have fun ... and the Internet is their playground!”

The study comes as websites, particularly at major media outlets, are increasingly weighing steps to rein in trollish behavior. Last year Popular Science did away with its comments sections completely, citing research on the deleterious effects of trolling, and YouTube also took measures to rein in trolling.

But study author Buckels actually isn’t sure that fix is a realistic one. “Because the behaviors are intrinsically motivating for sadists, comment moderators will likely have a difficult time curbing trolling with punishments (e.g., banning users),” she said by email. “Ultimately, the allure of trolling may be too strong for sadists, who presumably have limited opportunities to express their sadistic interests in a socially-desirable manner.”
http://www.slate.com/articles/health...ychopathy.html





Spying by N.S.A. Ally Entangled U.S. Law Firm
James Risen and Laura Poitras

The list of those caught up in the global surveillance net cast by the National Security Agency and its overseas partners, from social media users to foreign heads of state, now includes another entry: American lawyers.

A top-secret document, obtained by the former N.S.A. contractor Edward J. Snowden, shows that an American law firm was monitored while representing a foreign government in trade disputes with the United States. The disclosure offers a rare glimpse of a specific instance in which Americans were ensnared by the eavesdroppers, and is of particular interest because lawyers in the United States with clients overseas have expressed growing concern that their confidential communications could be compromised by such surveillance.

The government of Indonesia had retained the law firm for help in trade talks, according to the February 2013 document. It reports that the N.S.A.’s Australian counterpart, the Australian Signals Directorate, notified the agency that it was conducting surveillance of the talks, including communications between Indonesian officials and the American law firm, and offered to share the information.

The Australians told officials at an N.S.A. liaison office in Canberra, Australia, that “information covered by attorney-client privilege may be included” in the intelligence gathering, according to the document, a monthly bulletin from the Canberra office. The law firm was not identified, but Mayer Brown, a Chicago-based firm with a global practice, was then advising the Indonesian government on trade issues.

On behalf of the Australians, the liaison officials asked the N.S.A. general counsel’s office for guidance about the spying. The bulletin notes only that the counsel’s office “provided clear guidance” and that the Australian agency “has been able to continue to cover the talks, providing highly useful intelligence for interested US customers.”

The N.S.A. declined to answer questions about the reported surveillance, including whether information involving the American law firm was shared with United States trade officials or negotiators.

Duane Layton, a Mayer Brown lawyer involved in the trade talks, said he did not have any evidence that he or his firm had been under scrutiny by Australian or American intelligence agencies. “I always wonder if someone is listening, because you would have to be an idiot not to wonder in this day and age,” he said in an interview. “But I’ve never really thought I was being spied on.”

A Rising Concern for Lawyers

Most attorney-client conversations do not get special protections under American law from N.S.A. eavesdropping. Amid growing concerns about surveillance and hacking, the American Bar Association in 2012 revised its ethics rules to explicitly require lawyers to “make reasonable efforts” to protect confidential information from unauthorized disclosure to outsiders.

Last year, the Supreme Court, in a 5-to-4 decision, rebuffed a legal challenge to a 2008 law allowing warrantless wiretapping that was brought in part by lawyers with foreign clients they believed were likely targets of N.S.A. monitoring. The lawyers contended that the law raised risks that required them to take costly measures, like traveling overseas to meet clients, to protect sensitive communications. But the Supreme Court dismissed their fears as “speculative.”

The N.S.A. is prohibited from targeting Americans, including businesses, law firms and other organizations based in the United States, for surveillance without warrants, and intelligence officials have repeatedly said the N.S.A. does not use the spy services of its partners in the so-called Five Eyes alliance — Australia, Britain, Canada and New Zealand — to skirt the law.

Still, the N.S.A. can intercept the communications of Americans if they are in contact with a foreign intelligence target abroad, such as Indonesian officials. The N.S.A. is then required to follow so-called minimization rules to protect their privacy, such as deleting the identity of Americans or information that is not deemed necessary to understand or assess the foreign intelligence, before sharing it with other agencies.

An N.S.A. spokeswoman said the agency’s Office of the General Counsel was consulted when issues of potential attorney-client privilege arose and could recommend steps to protect such information.

“Such steps could include requesting that collection or reporting by a foreign partner be limited, that intelligence reports be written so as to limit the inclusion of privileged material and to exclude U.S. identities, and that dissemination of such reports be limited and subject to appropriate warnings or restrictions on their use,” said Vanee M. Vines, the spokeswoman.

The Australian government declined to comment about the surveillance. In a statement, the Australian Defense Force public affairs office said that in gathering information to support Australia’s national interests, its intelligence agencies adhered strictly to their legal obligations, including when they engaged with foreign counterparts.

Several newly disclosed documents provide details of the cooperation between the United States and Australia, which share facilities and highly sensitive intelligence, including efforts to break encryption and collect phone call data in Indonesia. Both nations have trade and security interests in Indonesia, where Islamic terrorist groups that threaten the West have bases.

The 2013 N.S.A. bulletin did not identify which trade case was being monitored by Australian intelligence, but Indonesia has been embroiled in several disputes with the United States in recent years. One involves clove cigarettes, an Indonesian export. The Indonesian government has protested to the World Trade Organization a United States ban on their sale, arguing that similar menthol cigarettes have not been subject to the same restrictions under American antismoking laws. The trade organization, ruling that the United States prohibition violated international trade laws, referred the case to arbitration to determine potential remedies for Indonesia.

Another dispute involved Indonesia’s exports of shrimp, which the United States claimed were being sold at below-market prices.

The Indonesian government retained Mayer Brown to help in the cases concerning cigarettes and shrimp, said Ni Made Ayu Marthini, attaché for trade and industry at the Indonesian Embassy in Washington. She said no American law firm had been formally retained yet to help in a third case, involving horticultural and animal products.

Mr. Layton, a lawyer in the Washington office of Mayer Brown, said that since 2010 he had led a team from the firm in the clove cigarette dispute. He said Matthew McConkey, another lawyer in the firm’s Washington office, had taken the lead on the shrimp issue until the United States dropped its claims in August. Both cases were underway a year ago when the Australians reported that their surveillance included an American law firm.

Mr. Layton said that if his emails and calls with Indonesian officials had been monitored, the spies would have been bored. “None of this stuff is very sexy,” he said. “It’s just run of the mill.”

He and the other Mayer Brown lawyers do most of their work on the trade issues from Washington, he said. They also make occasional trips to Jakarta, Indonesia’s capital, and Geneva, where the World Trade Organization is based. Mr. Layton said most of his communications with officials in Jakarta had been done through email, while he also talked by phone with officials at the Indonesian Embassy in Washington.

The N.S.A.’s protections for attorney-client conversations are narrowly crafted, said Stephen Gillers, an expert on legal ethics at New York University’s School of Law. The agency is barred from sharing with prosecutors intercepted attorney-client communications involving someone under indictment in the United States, according to previously disclosed N.S.A. rules. But the agency may still use or share the information for intelligence purposes.

Andrew M. Perlman, a Suffolk University law professor who specializes in legal ethics and technology issues, said the growth of surveillance was troubling for lawyers. He helped create the bar association’s ethics code revisions that require lawyers to try to avoid being overheard by eavesdroppers.

“You run out of options very quickly to communicate with someone overseas,” he said. “Given the difficulty of finding anything that is 100 percent secure, lawyers are in a difficult spot to ensure that all of the information remains in confidence.”

In addition to its work on trade issues with the United States, Mr. Layton said, Mayer Brown was representing Indonesia in a dispute with Australia. He said Indonesia had been arguing that Australia’s requirements for plain packaging for tobacco products under its antismoking rules were excessive.

Economic Espionage

Even though the Indonesian issues were relatively modest for the United States — about $40 million in annual trade is related to the clove cigarette dispute and $1 billion annually to shrimp — the Australian surveillance of talks underscores the extent to which the N.S.A. and its close partners engage in economic espionage.

In justifying the agency’s sweeping powers, the Obama administration often emphasizes the N.S.A.’s role in fighting terrorism and cyberattacks, but disclosures in recent months from the documents leaked by Mr. Snowden show the agency routinely spies on trade negotiations, communications of economic officials in other countries and even foreign corporations.

American intelligence officials do not deny that they collect economic information from overseas, but argue that they do not engage in industrial espionage by sharing that information with American businesses. China, for example, is often accused of stealing business secrets from Western corporations and passing them to Chinese corporations.

The N.S.A. trade document — headlined “SUSLOC (Special US Liaison Office Canberra) Facilitates Sensitive DSD Reporting on Trade Talks” — does not say which “interested US customers” besides the N.S.A. might have received intelligence on the trade dispute.

Other documents obtained from Mr. Snowden reveal that the N.S.A. shares reports from its surveillance widely among civilian agencies. A 2004 N.S.A. document, for example, describes how the agency’s intelligence gathering was critical to the Agriculture Department in international trade negotiations.

“The U.S.D.A. is involved in trade operations to protect and secure a large segment of the U.S. economy,” that document states. Top agency officials “often rely on SIGINT” — short for the signals intelligence that the N.S.A. eavesdropping collects — “to support their negotiations.”

The Australians reported another instance to the N.S.A. — in addition to the one with the American law firm — in which their spying involved an American, according to the February 2013 document. They were conducting surveillance on a target who turned out to be an American working for the United States government in Afghanistan, the document said. It offered no details about what happened after the N.S.A. learned of the incident, and the agency declined to respond to questions about it.

In a statement, Ms. Vines, the agency spokeswoman, said: “N.S.A. works with a number of partners in meeting its foreign-intelligence mission goals, and those operations comply with U.S. law and with the applicable laws under which those partners operate. A key part of the protections that apply to both U.S. persons and citizens of other countries is the mandate that information be in support of a valid foreign-intelligence requirement, and comply with U.S. attorney general-approved procedures to protect privacy rights.”

The documents show that the N.S.A. and the Australians jointly run a large signals intelligence facility in Alice Springs, Australia, with half the personnel from the American agency. The N.S.A. and its Australian counterpart have also cooperated on efforts to defeat encryption. A 2003 memo describes how N.S.A. personnel sought to “mentor” the Australians while they tried to break the encryption used by the armed forces of nearby Papua New Guinea.

Most of the collaboration between the N.S.A. and the Australian eavesdropping service is focused on Asia, with China and Indonesia receiving special attention.

Australian intelligence has focused heavily on Indonesia since the Bali bombing of 2002. The attack, which killed 202 people, including 88 Australians, in a resort area popular with Australians, was blamed on the Southeast Asian Islamist group Jemaah Islamiyah.

The Americans and the Australians secretly share broad access to the Indonesian telecommunications system, the documents show. The N.S.A. has given the Australians access to bulk call data from Indosat, an Indonesian telecommunications provider, according to a 2012 agency document. That includes data on Indonesian government officials in various ministries, the document states.

The Australians have obtained nearly 1.8 million encrypted master keys, which are used to protect private communications, from the Telkomsel mobile telephone network in Indonesia, and developed a way to decrypt almost all of them, according to a 2013 N.S.A. document.
http://www.nytimes.com/2014/02/16/us...-law-firm.html





Snowden Used Low-Cost Tool to Best N.S.A.
David E. Sanger and Eric Schmitt

Intelligence officials investigating how Edward J. Snowden gained access to a huge trove of the country’s most highly classified documents say they have determined that he used inexpensive and widely available software to “scrape” the National Security Agency’s networks, and kept at it even after he was briefly challenged by agency officials.

Using “web crawler” software designed to search, index and back up a website, Mr. Snowden “scraped data out of our systems” while he went about his day job, according to a senior intelligence official. “We do not believe this was an individual sitting at a machine and downloading this much material in sequence,” the official said. The process, he added, was “quite automated.”

The findings are striking because the N.S.A.’s mission includes protecting the nation’s most sensitive military and intelligence computer systems from cyberattacks, especially the sophisticated attacks that emanate from Russia and China. Mr. Snowden’s “insider attack,” by contrast, was hardly sophisticated and should have been easily detected, investigators found.

Moreover, Mr. Snowden succeeded nearly three years after the WikiLeaks disclosures, in which military and State Department files, of far less sensitivity, were taken using similar techniques.

Mr. Snowden had broad access to the N.S.A.’s complete files because he was working as a technology contractor for the agency in Hawaii, helping to manage the agency’s computer systems in an outpost that focuses on China and North Korea. A web crawler, also called a spider, automatically moves from website to website, following links embedded in each document, and can be programmed to copy everything in its path.
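
Officials have not said which crawler was used, so the details remain unknown; the sketch below, written with Python's standard library, is only meant to illustrate the generic technique the article describes — follow every link, copy every page. The start address, depth limit and page cap are hypothetical placeholders, not details from the investigation.

    # Minimal illustrative web crawler ("spider"): fetch a page, save a copy,
    # then follow the links it contains. All parameters here are hypothetical.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen


    class LinkExtractor(HTMLParser):
        """Collects the href target of every <a> tag on a page."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)


    def crawl(start_url, max_depth=2, max_pages=100):
        """Breadth-first crawl: save each page and queue every link found on it."""
        seen, queue, saved = {start_url}, deque([(start_url, 0)]), {}
        while queue and len(saved) < max_pages:
            url, depth = queue.popleft()
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except (OSError, ValueError):
                continue  # unreachable page or unsupported link type; skip it
            saved[url] = html  # "copy everything in its path"
            if depth >= max_depth:
                continue
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)
                if absolute not in seen:
                    seen.add(absolute)
                    queue.append((absolute, depth + 1))
        return saved


    if __name__ == "__main__":
        pages = crawl("http://example.com/", max_depth=1)
        print(f"copied {len(pages)} page(s)")

Run against an internal wiki of linked documents, the same loop works unattended for as long as it is left alone, which is what officials mean when they call the process "quite automated."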

Mr. Snowden appears to have set the parameters for the searches, including which subjects to look for and how deeply to follow links to documents and other data on the N.S.A.’s internal networks. Intelligence officials told a House hearing last week that he accessed roughly 1.7 million files.

Among the materials prominent in the Snowden files are the agency’s shared “wikis,” databases to which intelligence analysts, operatives and others contributed their knowledge. Some of that material indicates that Mr. Snowden “accessed” the documents. But experts say they may well have been downloaded not by him but by the program acting on his behalf.

Agency officials insist that if Mr. Snowden had been working from N.S.A. headquarters at Fort Meade, Md., which was equipped with monitors designed to detect when a huge volume of data was being accessed and downloaded, he almost certainly would have been caught. But because he worked at an agency outpost that had not yet been upgraded with modern security measures, his copying of what the agency’s newly appointed No. 2 officer, Rick Ledgett, recently called “the keys to the kingdom” raised few alarms.

“Some place had to be last” in getting the security upgrade, said one official familiar with Mr. Snowden’s activities. But he added that Mr. Snowden’s actions had been “challenged a few times.”

In at least one instance when he was questioned, Mr. Snowden provided what were later described to investigators as legitimate-sounding explanations for his activities: As a systems administrator he was responsible for conducting routine network maintenance. That could include backing up the computer systems and moving information to local servers, investigators were told.

But from his first days working as a contractor inside the N.S.A.’s aging underground Oahu facility for Dell, the computer maker, and then at a modern office building on the island for Booz Allen Hamilton, the technology consulting firm that sells and operates computer security services used by the government, Mr. Snowden learned something critical about the N.S.A.’s culture: While the organization built enormously high electronic barriers to keep out foreign invaders, it had rudimentary protections against insiders.

“Once you are inside the assumption is that you are supposed to be there, like in most organizations,” said Richard Bejtlich, the chief security strategist for FireEye, a Silicon Valley computer security firm, and a senior fellow at the Brookings Institution. “But that doesn’t explain why they weren’t more vigilant about excessive activity in the system.”

Investigators have yet to answer the question of whether Mr. Snowden happened into an ill-defended outpost of the N.S.A. or sought a job there because he knew it had yet to install the security upgrades that might have stopped him.

“He was either very lucky or very strategic,” one intelligence official said. A new book, “The Snowden Files,” by Luke Harding, a correspondent for The Guardian in London, reports that Mr. Snowden sought his job at Booz Allen because “to get access to a final tranche of documents” he needed “greater security privileges than he enjoyed in his position at Dell.”

Through his lawyer at the American Civil Liberties Union, Mr. Snowden did not specifically address the government’s theory of how he obtained the files, saying in a statement: “It’s ironic that officials are giving classified information to journalists in an effort to discredit me for giving classified information to journalists. The difference is that I did so to inform the public about the government’s actions, and they’re doing so to misinform the public about mine.”

The N.S.A. declined to comment on its investigation or the security changes it has made since the Snowden disclosures. Other intelligence officials familiar with the findings of the investigations underway — there are at least four — were granted anonymity to discuss the investigations.

In interviews, officials declined to say which web crawler Mr. Snowden had used, or whether he had written some of the software himself. Officials said it functioned like Googlebot, a widely used web crawler that Google developed to find and index new pages on the web. What officials cannot explain is why the presence of such software in a highly classified system was not an obvious tip-off to unauthorized activity.

When supplied with Mr. Snowden’s passwords, the web crawler became especially powerful. Investigators determined he probably had also made use of the passwords of some colleagues or supervisors.

But he was also aided by a culture within the N.S.A., officials say, that “compartmented” relatively little information. As a result, a 29-year-old computer engineer, working from a World War II-era tunnel in Oahu and then from downtown Honolulu, had access to unencrypted files that dealt with information as varied as the bulk collection of domestic phone numbers and the intercepted communications of Chancellor Angela Merkel of Germany and dozens of other leaders.

Officials say web crawlers are almost never used on the N.S.A.’s internal systems, making it all the more inexplicable that the one used by Mr. Snowden did not set off alarms as it copied intelligence and military documents stored in the N.S.A.’s systems and linked through the agency’s internal equivalent of Wikipedia.

The answer, officials and outside experts say, is that no one was looking inside the system in Hawaii for hard-to-explain activity. “The N.S.A. had the solution to this problem in hand, but they simply didn’t push it out fast enough,” said James Lewis, a computer expert at the Center for Strategic and International Studies who has talked extensively with intelligence officials about how the Snowden experience could have been avoided.

Nonetheless, the government had warning that it was vulnerable to such attacks. Similar techniques were used by Chelsea Manning, then known as Pfc. Bradley Manning, who was convicted in 2013 of turning documents and videos over to WikiLeaks in 2010.

Evidence presented during Private Manning’s court-martial for his role as the source for large archives of military and diplomatic files given to WikiLeaks revealed that he had used a program called “wget” to download the batches of files. That program automates the retrieval of large numbers of files, but it is considered less powerful than the tool Mr. Snowden used.

The program’s use prompted changes in how secret information is handled at the State Department, the Pentagon and the intelligence agencies, but recent assessments suggest that those changes may not have gone far enough. For example, arguments have broken out about whether the N.S.A.’s data should all be encrypted “at rest” — when it is stored in servers — to make it harder to search and steal. But that would also make it harder to retrieve for legitimate purposes.
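
For readers unfamiliar with the trade-off, the sketch below illustrates encryption "at rest" using the widely available Python cryptography package and its Fernet recipe. The file contents and key handling are hypothetical; the point is simply that stored ciphertext cannot be searched, whether by an intruder or by a legitimate analyst, without first fetching the key and decrypting.

    # Illustrative sketch of storing data encrypted "at rest"
    # (third-party package: pip install cryptography). Contents are hypothetical.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice the key lives in a separate key store
    cipher = Fernet(key)

    plaintext = b"example report: nothing here is real"
    stored_blob = cipher.encrypt(plaintext)   # what actually sits on the server

    # Searching the stored form finds nothing, because the bytes are ciphertext...
    assert b"report" not in stored_blob

    # ...so every legitimate read must round-trip through decryption first.
    recovered = cipher.decrypt(stored_blob)
    assert recovered == plaintext

That extra round trip, multiplied across every query against the agency's holdings, is the retrieval cost that has kept the argument unresolved.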

Investigators have found no evidence that Mr. Snowden’s searches were directed by a foreign power, despite suggestions to that effect by the chairman of the House Intelligence Committee, Representative Mike Rogers, Republican of Michigan, in recent television appearances and at a hearing last week.

But that leaves open the question of how Mr. Snowden chose the search terms to obtain his trove of documents, and why, according to James R. Clapper Jr., the director of national intelligence, they yielded a disproportionately large number of documents detailing American military movements, preparations and abilities around the world.

In his statement, Mr. Snowden denied any deliberate effort to gain access to any military information. “They rely on a baseless premise, which is that I was after military information,” Mr. Snowden said.

The head of the Defense Intelligence Agency, Lt. Gen. Michael T. Flynn, told lawmakers last week that Mr. Snowden’s disclosures could tip off adversaries to American military tactics and operations, and force the Pentagon to spend vast sums to safeguard against that. But he admitted a great deal of uncertainty about what Mr. Snowden possessed.

“Everything that he touched, we assume that he took,” said General Flynn, including details of how the military tracks terrorists, of enemies’ vulnerabilities and of American defenses against improvised explosive devices. He added, “We assume the worst case.”
http://www.nytimes.com/2014/02/09/us...-best-nsa.html





The Day the Internet Didn’t Fight Back
Nicole Perlroth

A consortium of Internet and privacy activists had long promoted Feb. 11 as the day the Internet would collectively stand up and shout down surveillance by the National Security Agency. The group called Tuesday “The Day We Fight Back” and encouraged websites to join an online campaign modeled after the protests against the Stop Online Piracy Act and Protect I.P. Act two years ago, when sites like Reddit and Wikipedia and companies like Google and Facebook helped successfully topple antipiracy legislation.

Instead, the protest on Tuesday barely registered. Wikipedia did not participate. Reddit — which went offline for 12 hours during the protests two years ago — added an inconspicuous banner to its homepage. Sites like Tumblr, Mozilla and DuckDuckGo, which were listed as organizers, did nothing to their homepages. The most vocal protesters were the usual suspects: activist groups like the Electronic Frontier Foundation, the American Civil Liberties Union, Amnesty International and Greenpeace.

The eight major technology companies — Google, Microsoft, Facebook, AOL, Apple, Twitter, Yahoo and LinkedIn — that joined forces in December in a public campaign to “reform government surveillance” participated on Tuesday only to the extent of flashing the protest banner on their joint campaign website.

The difference may be explained by the fact that two years ago, the Internet powerhouses were trying to halt new legislation. On Tuesday, people were being asked to reverse a secret, multi-billion dollar surveillance effort by five countries that has been in place for nearly a decade.

And unlike 2012, when the goal was simply to block the passage of new bills, the goals of the protests on Tuesday were more muddled. This time around, participants were asked to flash a banner on their sites urging visitors to call their congressional representatives in support of the U.S.A. Freedom Act — a bill sponsored by Representative Jim Sensenbrenner, Republican of Wisconsin, and Senator Patrick Leahy, Democrat of Vermont, that seeks to rein in the N.S.A.’s bulk collection of phone metadata. Visitors were also asked to oppose the FISA Improvements Act, a bill proposed by Senator Dianne Feinstein that would help legalize the N.S.A.’s metadata collection program.

All was not lost. By late Tuesday, some 70,000 calls had been placed to legislators and roughly 150,000 people had sent their representatives an email. But on privacy forums and Reddit, significant discussions failed to materialize.

“Online petitions,” one Reddit user wrote of the protest. “The very least you can do, without doing nothing.”
http://bits.blogs.nytimes.com/2014/0...nt-fight-back/





Music Industry Sucks Life from Subscription Services

The music subscription sector is intrinsically unprofitable, report states
Lucas Mearian

Subscriptions to music services are expected to more than double by 2017, but because those services pay 60% to 70% of their revenue to record labels and artists, the entire sector is intrinsically unprofitable, according to a new report.
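
To see why the report calls the sector intrinsically unprofitable, a back-of-the-envelope calculation helps. In the Python sketch below, only the 60 to 70 percent royalty share comes from the report; the remaining cost line is a hypothetical assumption added for illustration.

    # Back-of-the-envelope sketch of the "intrinsically unprofitable" argument.
    # Only the 60%-70% royalty share comes from the report; the other cost
    # figure is a hypothetical, illustrative assumption.
    revenue = 100.0                  # index a service's annual revenue to 100

    royalties = 0.65 * revenue       # midpoint of the 60%-70% paid to labels and artists
    other_costs = 0.40 * revenue     # assumed: delivery, staff, marketing and so on

    operating_margin = revenue - royalties - other_costs
    print(f"operating margin: {operating_margin:+.0f} per 100 of revenue")   # prints -5

Under assumptions like these, the margin stays negative no matter how large the service grows, because the royalty share scales with revenue rather than being a fixed cost that can be outgrown.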

The report, from industry analyst firm Generator Research, offered an analysis of the top services, including Pandora, Spotify and Rhapsody.

The report stated that unless the services can monetize their user base by entering new product and service categories, or they can sell themselves to a larger company that can sustain them, they're doomed to fail.

"We cannot see any conceivable market scenario where the music industry would buy any of these players, let alone the whole sector," said Andrew Sheehey, co-founder and chief analyst of Generator Research, in an email reply to Computerworld.

Generator Research's analysis shows that no current music subscription service can ever be profitable, even if it executes perfectly, because the music industry will never agree to significantly reduced royalties.

"For a listed company like Pandora this would mean that the company would need to be taken off the public market (as has happened with Dell)," the analysis stated.

Roads to profitability

One way subscription services might achieve profitability is by upselling mobile deals or bundles to subscribers. For example, a select package of mobile services could be sold through the music service provider, the report suggested.

Bundled services will likely begin to surface when the music subscription market reaches a certain critical mass. Then, users will see the arrival of a kind of "super bundle" where, for an additional monthly fee, paid-for downloads, for example, would be bundled with the base subscription offer.

"Services like iTunes Match and Google and Amazon are already heading in this direction," Generator research stated.

Music subscription services could also sell anonymous user behavioral data to advertisers and ad platforms that could use that information to better target their advertising, the report said.

Growth industry

The number of subscribers to both paid and unpaid music services is expected to more than double over the next three years, Generator Research stated.

The company estimates that last year, there were 767 million individuals worldwide using a music subscription service. Of those, 36 million paid for subscription-based access.

By 2017, the number of subscribers overall is expected to leap to 1.7 billion, with 125 million of those users paying for the service to access premium features while avoiding ads, the researchers said.
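
Those figures imply that only a small minority of listeners pay, even after the forecast growth. A quick calculation from the report's own numbers:

    # Paid-conversion rates implied by the Generator Research figures.
    paid_2013, total_2013 = 36e6, 767e6
    paid_2017, total_2017 = 125e6, 1.7e9

    print(f"2013 paying share: {paid_2013 / total_2013:.1%}")   # about 4.7%
    print(f"2017 paying share: {paid_2017 / total_2017:.1%}")   # about 7.4%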

The increase in paid subscriptions means revenue earned from music subscription services will reach $2.9 billion, and "will be by far and away the most important source of growth that will, at last, allow the music industry to return to a period of growth, albeit modest."

From 2013 to 2017, the total revenue earned by all record companies worldwide will increase from $16.7 billion to $17.2 billion, or 3.2%. Yet, revenues from physical formats will fall by $2.09 billion and revenues from digital download sales will fall by $663 million. Revenues generated from performance and synchronization rights will increase, but only by $322 million, the report states.

Seeing marked growth in subscriber numbers, private investors have injected more than $1 billion over the past decade to create a music subscription market. To date, however, the only beneficiaries of those investments have been the music industry and users.

"Music subscription services providers are all losing money, and that is going to remain the case until they find a way to monetize a worldwide user base," the report states.

"Putting to one side the quality of the actual service, which most users would rate very highly, the facts show that Pandora -- when viewed objectively as a business -- is in dire straits," the report stated. "We are at a loss to know why the company's stock has performed so well, especially over the last 12 months."

Over the past year, Pandora's stock price has jumped from $11.48 to $37.95.
http://www.computerworld.com/s/artic...1&pageNumber=1





Pandora Suit May Upend Century-Old Royalty Plan
Ben Sisario

As the music industry races toward a future of digital streams and smartphone apps, its latest crisis centers on a regulatory plan that has been in place since “Chattanooga Choo Choo” was a hit.

Since 1941, Ascap and BMI, the two giant licensing organizations that dominate music publishing, have been governed by consent decrees with the Justice Department. These agreements were made to guarantee fair royalty rates for songwriters and for the radio stations, television networks and even restaurants and retail shops that play their music.

But with the industry struggling to make money from digital music, this system has come under attack. The streaming service Pandora is squaring off against Ascap in a closely watched trial over royalty payments. Big music publishers like Sony/ATV and Universal are calling on the government to overhaul the system, and technology companies are accusing the publishers of trying to skirt federal rules meant to protect them.

The outcome could reshape the finances of a large part of the industry.

“What’s happening with these court cases will determine the future of the music publishing and songwriting industries,” said David Israelite, the president of the National Music Publishers’ Association. “It is simply unfair to ask songwriters and publishers to be paid something less than a fair market rate for their intellectual property.”

For nearly a century, Ascap and BMI, known as performing rights organizations, have served an essential middleman function. They grant the licenses that let various outlets use songs, and then funnel royalties from these billions of “performances” back to publishers and songwriters.

Together, the groups process more than $2 billion in licensing fees each year, and represent more than 90 percent of the commercially available songs in the United States. Performance royalties have become critical for songwriters as sales of compact discs and downloads — which pay a different kind of royalty not administered by Ascap and BMI — have fallen.

Ascap, which stands for the American Society of Composers, Authors and Publishers, was founded by a group of composer luminaries including Irving Berlin and Victor Herbert. Its 100th birthday is this week. BMI, or Broadcast Music Inc., was created by broadcasters in 1939 as a competitor.

After federal antitrust investigations, both groups agreed to government supervision in 1941.

This system has hummed along for decades. But with the rise of Internet radio, publishers have complained that the rules are antiquated and unfair. They point to the disparity in the way Pandora compensates the two sides of the music business: Last year, Pandora paid 49 percent of its revenue, or about $313 million, to record companies, but only 4 percent, or about $26 million, to publishers.

“It’s a godawful system that just doesn’t work,” said Martin N. Bandier, the chairman of Sony/ATV, the world’s largest music publisher.

The wider music world has been galvanized by the issue of low royalties from fast-growing streaming companies.

In 2012, for example, when Pandora’s former chief executive testified at a congressional hearing on music licensing, songwriters protested on Capitol Hill. Five writers of hits for stars like Beyoncé and Christina Aguilera showed that 33 million plays of their songs on Pandora had yielded just $587.39 in royalties for them.
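
Worked out per play, the figure is vanishingly small. The short calculation below uses only the numbers cited at the hearing, and assumes the $587.39 was the writers' combined total.

    # Per-play songwriter royalty implied by the figures cited at the hearing
    # (assumption: $587.39 is the combined total for the five writers).
    total_royalties = 587.39
    total_plays = 33_000_000

    per_play = total_royalties / total_plays
    print(f"about ${per_play:.7f} per play")   # roughly $0.0000178, i.e. about 1/560 of a cent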

Music executives argue that the problem is rooted in the Justice Department’s oversight of Ascap and BMI. Under the consent decrees, the performing rights groups are not permitted to refuse licenses to any outlet that applies for them, and rate negotiations can drag on for years. To get around this, some big publishers have tried to change their ties with Ascap and BMI, forcing digital outlets like Pandora to negotiate directly.

Pandora cried foul in its federal lawsuit, saying that the move led to higher rates and violated the Justice Department regulations. That issue is also at play in a separate pending suit filed last year by BMI against Pandora. The two suits, filed in Federal District Court in Manhattan, ask the court to set a royalty rate for Pandora. A third, much smaller performing rights group, Sesac, is not subject to a consent decree.

Pandora argued in court that it had been put in “absolute gun-to-the-head circumstances” in negotiations with publishers, and had fought to keep its rates low. In a statement, the company said that the consent decrees offered important protections for the music world, including “a mechanism to establish a reasonable royalty rate when songwriters and music users cannot agree on a rate.”

Ascap argued that Pandora should pay it more than its current rate of 1.85 percent of revenue as part of the 4 percent that it pays for all publishing rights. When Apple made direct deals with publishers last year for its new Pandora-like service, iTunes Radio, its rates were said to be about 10 percent of revenue.

Closing arguments in the case were heard on Monday, and the judge is expected to rule soon.

In frustration, leading music figures have begun to ask the government to update the consent decrees to allow more flexible licensing. In an opinion article in The Wall Street Journal last month, the songwriter Burt Bacharach said these agreements “are supposed to guarantee us ‘reasonable fees,’ but these aren’t remotely reasonable.”

Ascap and BMI have met with the Justice Department, although they would not say specifically what they had requested. A Justice Department spokeswoman declined to comment.

What will happen if publishers do not get the relief they want from the government is unclear. Some have suggested that they might abandon the performing rights organizations.

“Everything is geared now to what happens with the Department of Justice,” said Zach Horowitz, the chairman of the Universal Music Publishing Group. “If we can’t secure adjustments to the consent decrees, which were last modified before the introduction of the iPod, we’ll have no choice but to consider some radical steps in order to ensure our writers are fairly compensated in the rapidly changing marketplace.”

Two court rulings late last year threw the industry into confusion over how the performing rights organizations would continue to represent publishers. Universal and Sony/ATV have since made short-term arrangements with BMI to stabilize the market, but their involvement in the long run is still an open question.

“If we were forced into the corner and had to do that,” Mr. Bandier said when asked whether his company would withdraw from Ascap and BMI, “it would be catastrophic for those two societies. And we don’t want that to happen.”

A future without Ascap and BMI, or one in which they no longer represent a majority of songs, has the music industry worried. Major publishers could handle responsibilities like issuing licenses to radio networks, but even the biggest of them say that Ascap and BMI have an irreplaceable infrastructure.

“No other organization, and certainly no single publisher, can negotiate, track, collect, distribute and advocate for music creators on the scale that we do, with the same level of accuracy, efficiency and transparency across so many different media platforms,” Paul Williams, the songwriter and president and chairman of Ascap, said in a statement.

On Thursday — the 100th anniversary of its founding meeting — Ascap reported that it had $944 million in revenue in 2013. It also paid $851 million in royalties to its members, up 3 percent from 2012.

For songwriters, Ascap and BMI have also been among the most reliable institutions in the music industry, and few want to see them go. But Rick Carnes, a Nashville songwriter and president of the Songwriters Guild of America, said that while these organizations had served him and his colleagues well, the Justice Department agreements that govern them were outdated and must be changed.

“This is a horse-and-buggy consent decree in a digital environment,” Mr. Carnes said. “There’s no way that works now.”
http://www.nytimes.com/2014/02/14/bu...alty-plan.html
















Until next week,

- js.



















Current Week In Review





Recent WiRs -

February 8th, February 1st, January 25th, January 18th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black