P2P-Zone  


Peer to Peer The 3rd millennium technology!

Old 19-08-15, 06:32 AM   #1
JackSpratts
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - August 22nd, '15

Since 2002

"In 1999 there were nearly 53,000 Americans who considered their primary occupation to be that of a musician, a music director or a composer; in 2014, more than 60,000 people were employed writing, singing or playing music. That’s a rise of 15 percent, compared with overall job-market growth during that period of about 6 percent." – Steven Johnson


"NSA metadata collection is not a violation of anyone's freedoms." – Jeb Bush

August 22nd, 2015




After Internet Companies Protest, MPAA Declares Victory And Walks Away From Attempt To Backdoor SOPA
Mike Masnick

The MPAA has spent the last few months working on a number of tricks to sneak SOPA in through the backdoor -- more on some of those attempts coming soon -- but in one case, it's suddenly walking away. A few weeks ago, all of the major movie studios filed a lawsuit over the website MovieTube (actually a series of websites). While it may well be that MovieTube was involved in copyright infringement (and thus a lawsuit may be perfectly appropriate), the concerning part was that, as part of the lawsuit, the studios demanded a remedy that the law does not provide: a court injunction forcing anyone who provides any kind of service to MovieTube to stop. This was the kind of tool that was a part of SOPA, which (you may recall) never became law. Among the requests in the lawsuit:

That the Registries and/or Registrars be required to transfer the domain names associated with Defendants’ MovieTube Websites, or any subset of these domain names specified by Plaintiffs, to a registrar to be appointed by Plaintiffs to re-register the domain names in respective Plaintiffs’ names and under Plaintiffs’ respective ownership.

That content delivery networks and domain name server systems be required to cease providing services to the MovieTube Websites and/or domains identified with the MovieTube Websites and disable any access to caches they maintain for the MovieTube Websites and destroy any caches they maintain for the MovieTube Websites.

That third parties providing services used in connection with any of the MovieTube Websites and/or domain names for MovieTube Websites, including without limitation, web hosting providers, cloud services providers, digital advertising service providers, search-based online advertising services (such as through paid inclusion, paid search results, sponsored search results, sponsored links, and Internet keyword advertising), domain name registration privacy protection services, providers of social media services (e.g., Facebook and Twitter), and user generated and online content services (e.g., YouTube, Flickr and Tumblr) be required to cease or disable providing such services to (i) Defendants in relation to Infringing Copies or infringement of Plaintiffs’ Marks; and/or (ii) any and all of the MovieTube Websites.


A few days later, the good folks at EFF reminded everyone that SOPA did not pass, and this attempt to require a SOPA-level block is not actually what the law allows. Of course, as we noted soon after the SOPA fight, it appeared that some courts were pretending SOPA did pass, mainly in a variety of lawsuits involving counterfeit goods (rather than copyright infringement). And the movie studios rely on that in their more detailed argument in favor of this broad censorship order on third parties who aren't even a part of this case:

Courts have granted similar interim relief directed to third-party service providers in cases with similar facts. The first such case, The North Face Apparel Corp. v. Fujian Sharing Import & Export Ltd. (“Fujian ”), 10-Civ-1630 (AKH) (S.D.N.Y.), was brought against defendants in China selling counterfeit goods through the Internet directly to consumers in the United States. In Fujian, the district court granted an ex parte temporary restraining order, seizure order, asset restraining order, and domain-name transfer order, later continued by a preliminary injunction order.

Of course, last week, a bunch of internet companies -- Google, Facebook, Tumblr, Twitter and Yahoo -- filed an amicus brief highlighting how ridiculous the widespread demand is:

Plaintiffs are asking the Court to grant a preliminary injunction not just against the named Defendants, but also against a wide array of online service providers—from search engines, to web hosts, to social networking services—and require them to “cease providing services to the MovieTube Websites and Defendants[.]” None of those providers is a party to this case, and Plaintiffs make no claim that any of them have violated the law or play any direct role in the Defendants’ allegedly infringing activities.

Plaintiffs’ effort to bind the entire Internet to a sweeping preliminary injunction is impermissible. It violates basic principles of due process and oversteps the bounds of Federal Rule of Civil Procedure 65, which restricts injunctions to parties, their agents, and those who actively participate in a party’s violations. The proposed order also ignores the Digital Millennium Copyright Act (“DMCA”), which specifically limits the injunctive relief that can be imposed on online service providers in copyright cases. Even if Plaintiffs had named those providers as defendants and obtained a final judgment against them, the DMCA would not permit the relief that Plaintiffs are asking for at the outset of their case, where they have not even tried to claim that these nonparties have acted unlawfully.


And... just days later, the movie studios tell the judge that they need not rule on this issue at all, and they're happy to drop the request for the preliminary injunction entirely, because the MovieTube websites have already been shut down (h/t to Eriq Gardner, who first reported on the studios' letter).

We represent Plaintiffs in the above-titled action. We write to inform the Court that after Plaintiffs filed their Complaint (and presumably in response thereto), Defendants shut down their infringing websites, and as of today, such websites remain offline. Plaintiffs are no longer seeking preliminary injunctive relief at this time but will seek permanent relief as soon as possible. Defendants’ time to answer or otherwise respond is August 19, 2015.

Moreover, because Plaintiffs have withdrawn their motion for preliminary injunctive relief, the arguments offered by Amici Curiae... in opposition to that motion are not ripe for consideration and are otherwise inapplicable. Accordingly, Plaintiffs have not addressed them here. To the extent Amici are requesting what amounts to an advisory opinion, such a request is improper and should not be entertained.


In short: we had hoped to quietly get a court to pretend SOPA existed so we could point to it as proof that this is perfectly reasonable... but the internet folks spotted it, so we'll just walk away quietly, and hope that next time, those darn internet companies, and those eagle-eyed lawyers at the EFF aren't so quick to spot our plan.
https://www.techdirt.com/articles/20...oor-sopa.shtml





TPP's Copyright Term Extension Isn't Made for Artists—It's Made By and For Big Content Companies

The following comment was written by Canadian filmmaker Andrew Hunter and sent to party leaders, asking them to come out against the 20-year copyright term extension in the Trans-Pacific Partnership (TPP) and stand for fair and balanced innovation policy. He emailed the comment as part of our TPP's Copyright Trap campaign.

I am writing to express my serious concern that the Trans-Pacific Partnership agreement's intellectual property chapter may extend Canada's current length of copyright.

I'm a filmmaker, cinematographer and camera assistant by trade. Copyright is the foundation of how I earn a living. However, I see the policy of copyright maximalism espoused by dominant players in both Hollywood and Canada as detrimental to the health of our industry.

Copyright law should uphold a carefully crafted balance of public and private rights that encourages creation, while providing incentives for innovation and access for education, libraries, and other socially beneficial purposes. Excessive copyright term lengths undermine this objective. The foundation of culture is our ability to share, re-tell and rework stories.

Copyright maximalism is the belief that:

1. All of one's work is original.
2. Copyright is an innate right similar to human rights, which should be protected and expanded at any opportunity.

This is ironic as we in the film industry utilize references, be they visual, audio or written word, to communicate ideas and intent before we are "on the day" and actually have to execute the plan. Culture is the sum of the modulation of different mediums to convey human expression. Copyright maximalists do not acknowledge this contradiction, as it serves their interests.

In Canada, as in most countries of the world, the term lasts for the life of the creator plus 50 years after their death, or 70 years from the date of publication. The TPP, however, threatens to override this and extend our terms by at least another 20 years, even for works that have already entered the public domain.

Extending copyright does not help artists, creatives and those people who rely on *creating*, rather than exploiting, copyrightable work for a living. Like fashion, much of the work that puts food on my table in fact encourages sharing and copying, as it is work meant to be disseminated as far and wide as possible.

Those who do stand to gain from a copyright extension are those who profit from the importation of foreign works for distribution in Canada, or those who possess the rights to already-profitable properties.

Do not for a moment believe that it will help emerging filmmakers like myself, or my colleagues, become successful. For someone like me, there are much greater hurdles to simply creating a work than the question of whether an extra 20 years of protection after my death will help my descendants.

Do not trot out the Canadian old boys club of "artists" to promote what benefits the people they sold their rights to.

Our country has long resisted previous efforts to lengthen its terms beyond what is required by existing international law, namely, the Berne Convention; most recently during the comprehensive consultations that led to the passage of the Copyright Modernization Act in 2012.

That's why I urge you to stand up for Canadians' right to fair and balanced innovation policy, and speak out against this unwarranted copyright term extension in the TPP. It is not in the interests of film technicians, producers or the general public.

Thank you for your attention.

Sincerely,
Andrew Hunter

~

If you are a Canadian, we urge you to email party leaders and call on them to speak out against the copyright term extension in the TPP.
https://www.eff.org/deeplinks/2015/0...tent-companies





Periscope Complies with 71 Percent of Copyright Takedown Requests
Jordan Valinsky

Periscope’s live-streaming capability is becoming an ever-bigger magnet for copyright takedown requests.

In a newly released Transparency Report, its owner Twitter says it has received 1,391 notices under the Digital Millennium Copyright Act for illegal streams on Periscope.

Since its launch in late March, the number of requests has increased dramatically from fewer than 20 in April to nearly 1,000 in June. Periscope has complied with 71 percent of requests, affecting 864 accounts and removing 1,029 streams.
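
As a quick sanity check, the reported figures hang together. A minimal sketch in Python, using only the numbers quoted above (the derived counts are approximate):

```python
# Sanity-checking the reported Periscope takedown figures.
notices = 1391           # DMCA notices received since launch
compliance = 0.71        # share of notices Periscope complied with
streams_removed = 1029   # streams removed as a result

complied = round(notices * compliance)
print(f"Notices acted on: ~{complied}")                        # ~988
print(f"Streams removed per acted-on notice: {streams_removed / complied:.2f}")
```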

Twitter released a month-by-month breakdown of the data.

Periscope’s live-streaming abilities have companies worried that users can illegally watch events without paying for them, as was the case with the boxing match between Floyd Mayweather and Manny Pacquiao in May. Users discovered that streams of the fight were a way to bypass the pricey pay-per-view fee that cable operators were charging.

The popularity even prompted former Twitter CEO Dick Costolo to post this eyebrow-raising tweet:

And the winner is… @periscopeco

— dick costolo (@dickc) May 3, 2015

When it first launched, HBO slammed Periscope as an app that potentially promotes “mass copyright infringement” because people were using it to stream the premiere of ‘Game of Thrones.’

Compared to Twitter and Vine, Periscope has the highest compliance rate, writes VentureBeat, although that data is measured from January to June. Vine has received 2,405 notices with a 68 percent compliance rate and Twitter has garnered 14,694 takedown requests with a 67 percent compliance rate.

We’ve reached out to see how Periscope’s numbers compare to Meerkat’s, but have not yet heard back.
http://digiday.com/platforms/perisco...down-requests/





Germany Says Taking Photos Of Food Infringes The Chef's Copyright
Glyn Moody

Over the years, Techdirt has had a couple of stories about misguided chefs who think that people taking photos of their food are "stealing" something -- their culinary soul, perhaps. According to an article in the newspaper Die Welt, it seems that this is not just a matter of opinion in Germany, but established law (original in German):

In individual cases, shared pictures may be illegal. At worst, a copyright warning notice might come fluttering to the social media user. For carefully-arranged food in a famous restaurant, the cook is regarded as the creator of a work. Before it can be made public on Facebook & Co., permission must first be asked of the master chef.

Apparently, this situation goes back to a German court judgment from 2013, which widened copyright law to include the applied arts too. As a result, the threshold for copyrightability was lowered considerably, with the practical consequence that it was easier for chefs to sue those who posted photographs of their creations without permission. The Die Welt article notes that this ban can apply even to manifestly unartistic piles of food dumped unceremoniously on a plate if a restaurant owner puts up a notice refusing permission for photos to be taken of its food.

It's sad to see this kind of ownership mentality has been accepted by the German courts. As a Techdirt article from 2010 explained, there's plenty of evidence that it is precisely the lack of copyright in food that has led to continuing innovation -- just as it has in other fields that manage to survive without this particular intellectual monopoly, notably in fashion.
https://www.techdirt.com/articles/20...opyright.shtml





Kim Dotcom's Music Streaming Service is Finally Here
Kia Kokalitcheva

A new music streaming service envisioned by Kim Dotcom, the notorious Internet entrepreneur best known as the founder of Megaupload, finally launched on Monday.

Designed as an alternative to popular streaming services like Spotify and Apple Music, Baboom, as it’s called, lets independent artists keep 90% of the proceeds through its “Fair Trade Streaming” agreement. Dotcom originally envisioned the service as a way for artists to bypass the music industry and distribute their music directly to fans, but he left the company last fall.

Baboom offers two tiers for customers, with streaming on the Web and on iOS and Android. The free version comes with ads, lets users save up to 100 songs into collections they create, and requires them to purchase any songs they wish to download. For $10 per month, customers get to skip the ads and save an unlimited number of songs to their collections. They can also access exclusive content.

Dotcom originally announced the service back in 2011, saying at the time that it would launch within a year; it was delayed more than once.

It’s not clear yet how large Baboom’s music catalog is, or how well it will do, though it’s sure to resonate with artists who feel that the models used by Spotify and others unfairly strip them of much of their earnings.
http://fortune.com/2015/08/17/kim-dotcoms-baboom-music/





FilePizza Does Peer-To-Peer File Sharing In Your Browser
Thorin Klosowski

Peer-to-peer file sharing services like BitTorrent Sync are great ways to share large files without paying for third-party cloud storage, but they still require you to download software. FilePizza does peer-to-peer sharing right in your browser.

When you drag a file into FilePizza, you’re given a link. Send that link to someone and they can start downloading the file right in their browser, directly from you. Your file never touches a third-party server; it’s just a direct connection from your computer to the one receiving it. If you close the FilePizza window, that cuts off the transfer. The software’s all open source too, so if you’re worried about what — if anything — it’s storing, you can dig through the code.
http://www.lifehacker.com.au/2015/08...-your-browser/





BitTorrent Clients Can be Made to Participate in High-Volume DoS Attacks
Zeljka Zorz

A group of researchers has discovered a new type of DoS attack that can be pulled off by a single attacker exploiting weaknesses in the BitTorrent protocol family.

The weaknesses in the Micro Transport Protocol (uTP), Distributed Hash Table (DHT), Message Stream Encryption (MSE), and BitTorrent Sync (BTSync) protocols allow the attacker to insert the target's IP address instead of his own in the malicious request.

To mount a Distributed Reflective DoS (DRDoS) attack, an attacker simply sends these malformed requests to other BitTorrent users, who then act as reflectors and amplifiers, flooding the intended victim with responses.

"Our experiments reveal that an attacker is able to exploit BitTorrent peers to amplify the traffic up to a factor of 50 times and in case of BTSync [app] up to 120 times," the researchers noted.

"With peer-discovery techniques like trackers, DHT or PEX, an attacker can collect millions of amplifiers. An attacker only needs a valid info-hash or secret to exploit the vulnerabilities."

The researchers have found that uTorrent, Mainline and Vuze, the most popular BitTorrent clients, are vulnerable since they use the aforementioned protocols.

While the flaws pose no direct security risk to users of the vulnerable clients, they should be fixed in order to prevent DRDoS attacks in the future.

In the meantime, stopping these attacks requires the deployment of firewalls with Deep Packet Inspection (DPI).
http://www.net-security.org/secworld.php?id=18769





The Flash Storage Revolution Is Here
Brian Barrett

You’ve likely heard about Samsung’s 16TB drive, by far the world’s largest. That is an eye-popping number, a large enough leap forward that it’s difficult to fully process. And the most exciting thing about that 16TB drive? It’s just a hint of what’s coming next.

The pace of flash storage development has been slow and steady for decades. Finally, though, we’re starting to see the breakthroughs of the last few years result in actual products, starting with one mammoth SSD.

Sasquatch Storage

The Samsung drive, called PM1633a, was first reported by Golem.de and announced at last week’s Flash Memory Summit in California. While its size is impressive, it’s all the more astonishing for being a solid state drive—comprising flash memory chips—as opposed to more conventional (and affordable) hard drives that rely on magnetically coated spinning discs.

While SSDs have been faster and more rugged than their HDD counterparts, they have until recently been far more limited in capacity. To this point, the largest 2.5-inch (the size of Samsung’s latest) SSD you could buy was 4TB, at a cost of around $6,000. Even high-capacity spinning disc drives top out at around 10TB. While the PM1633a probably hasn’t remedied the cost situation, a four-fold leap in size is incredible.

What it’s not, though, is unexpected. In fact, Samsung laid the groundwork for this very device years ago.

In 2013, Samsung announced a new way of approaching flash storage manufacturing. Rather than place the cells along a single layer, as had been standard practice since NAND flash was invented in the 1980s, it would stack them vertically. That allows for much greater density, which gives you much more storage space.

Samsung’s solution, called V-NAND, has seen remarkable gains since its introduction. In the first year, the company stacked 24 layers on a single die, while in 2014 it managed 36. The 16TB SSD kicks that up to 48.

By applying an innovative manufacturing technique to existing flash technology, Samsung has created a drive that could store well over 3,000 high-definition copies of Mad Max: Fury Road on your MacBook Pro. Of course, it’s unlikely you’ll ever need 16TB of space or be willing to pay for it. For now these drives are far more likely to end up in servers; the closest you’ll likely come to using one is if it happens to wind up powering a cloud you tap into.
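
The back-of-the-envelope math on that claim works out, assuming a typical 1080p file size (the per-copy figure below is our assumption, not Samsung’s):

```python
# What "well over 3,000 HD copies of Mad Max: Fury Road" implies per copy.
capacity_gb = 16 * 1000                 # 16TB in decimal gigabytes
copies = 3000
print(f"{capacity_gb / copies:.1f} GB per copy")   # ~5.3 GB
# ~5 GB is a plausible size for a 1080p feature film, so the claim holds.
```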

It won’t be long at all, though, before they find their way into personal computers, even laptops. “I would expect in three to five years, for a 2.5-inch 16TB SSD to be in a workstation-class notebook,” says Patrick Moorhead, president and principal analyst of Moor Insights & Strategy. In the interim, bigger, cheaper storage solutions at the top end help drive prices at the lower end—the stuff you actually use right now—down.

That amount of storage in the home has plenty of obvious applications, but also presents a few surprising use cases. Moorhead notes that despite our recent migration to the cloud, hard drives of that magnitude would obviate much of the need to borrow some massive, faceless tech company’s digital locker to stash our stuff. That doesn’t just mean home movies, either; that amount of room could enable localized smart home solutions that offer more privacy and security than leaning on the cloud currently does.

Even more exciting is that when that level of tech does trickle down to consumers, it won’t necessarily even come from Samsung; V-NAND isn’t the only vertical NAND technology out there. Intel and Micron recently announced that they’re working on something quite similar, though they don’t expect to produce consumer devices based on the technology until early next year. Toshiba has dabbled in 3D NAND, with products expected by the end of next year. All of them have the systems in place to produce equally, if not more, impressive drives. Samsung left the starting block first, but that may not matter much in a race that will be measured in years.

The implications of storage breakthroughs like this go beyond data centers and laptops, though. “Memory and storage are the two things that are holding up huge innovations in biotech, in design, and for that matter even artificial intelligence,” Moorhead says. “They’ve become a fundamental building block for moving the industry forward. These big innovations at the top trickle their way down into cars, into phones, over a five to seven year period.”

The innovations he refers to include the manufacturing smarts flexed by Samsung, Intel and Micron, and Toshiba. They also include another recent breakthrough, one that hasn’t yet manifested itself as a product but could do far more to shape the storage and memory industry.

Cross Fire

As exciting as a 16TB SSD may be, it still represents an iterative step, a manufacturing trick that found new ways to stuff the same basic pieces into increasingly smaller spaces. The potentially much bigger breakthrough? Intel and Micron’s 3D XPoint (pronounced “crosspoint”) technology, which completely rethinks the way we’ve been making memory for years.

“I think the design change is more exciting,” says Moorhead. “It’s a radical, different design that nobody has, versus taking your memory to the next node, which is essentially Moore’s Law.”

You can read a more in-depth take on how 3D XPoint works here, but the short version goes something like this: Rather than rely on transistors to store information, as traditional flash memory does, 3D XPoint deploys microscopic meshes of wires, coordinated by something called a “selector,” that can be stacked on top of one another.

The result is “non-volatile” storage (meaning it holds onto its data even when the power’s off) that’s 1,000 times faster than NAND flash and 10 times denser than the volatile DRAM (dynamic random access memory) that PCs use to keep track of temporary data. In other words, it’s a single solution that can handle both memory and storage, and do both better, in most ways, than anything currently available.
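
To put those multipliers in rough perspective, here’s an order-of-magnitude sketch; the NAND and DRAM baselines are assumed typical figures, not numbers from the article:

```python
# Implications of the article's "1,000 times faster than NAND" figure.
nand_read_latency_ns = 100_000                   # assumed ~100 us NAND read
xpoint_latency_ns = nand_read_latency_ns / 1000  # per the 1,000x claim
dram_latency_ns = 100                            # assumed typical DRAM access
print(f"Implied 3D XPoint latency: ~{xpoint_latency_ns:.0f} ns")
# That lands in the same neighborhood as DRAM, which is why one technology
# could plausibly serve as both memory and storage.
```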

Intel has said not to expect any 3D XPoint products until next year, but when they appear they’ll be in a position to transform multiple industries, from the esoteric to the squarely consumer-focused.

“Any artificial intelligence or object recognition you want to have on a device works a lot better with XPoint … The more you can put into that really fast memory space, the better your artificial intelligence is going to be,” says Moorhead. “The simple application is gaming, where you’re waiting two to three minutes on some PCs to get to the next level. You can actually have multiple entire levels in 3D XPoint instead of having to wait for all that data.”

Better still, Moorhead projects the same five-year timeframe until we hit consumer-friendly XPoint affordability. That may feel like a long time now, but given that it’s been nearly that many decades since we’ve had a memory and storage innovation of this magnitude, we can afford a little patience. Besides, that gives everyone else some time to catch up.

“I can guarantee you that both Samsung and Toshiba have their plays as well,” Moorhead says. As well they should. Who wouldn’t want to enlist in a revolution like this?
http://www.wired.com/2015/08/flash-storage/





AT&T Helped N.S.A. Spy on an Array of Internet Traffic
Julia Angwin, Charlie Savage, Jeff Larson, Henrik Moltke, Laura Poitras and James Risen

The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T.

While it has been long known that American telecommunications companies worked closely with the spy agency, newly disclosed N.S.A. documents show that the relationship with AT&T has been considered unique and especially productive. One document described it as “highly collaborative,” while another lauded the company’s “extreme willingness to help.”

AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the N.S.A. access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T.

The N.S.A.’s top-secret budget in 2013 for the AT&T partnership was more than twice that of the next-largest such program, according to the documents. The company installed surveillance equipment in at least 17 of its Internet hubs on American soil, far more than its similarly sized competitor, Verizon. And its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency.

One document reminds N.S.A. officials to be polite when visiting AT&T facilities, noting, “This is a partnership, not a contractual relationship.”

The documents, provided by the former agency contractor Edward J. Snowden, were jointly reviewed by The New York Times and ProPublica. The N.S.A., AT&T and Verizon declined to discuss the findings from the files. “We don’t comment on matters of national security,” an AT&T spokesman said.

It is not clear if the programs still operate in the same way today. Since the Snowden revelations set off a global debate over surveillance two years ago, some Silicon Valley technology companies have expressed anger at what they characterize as N.S.A. intrusions and have rolled out new encryption to thwart them. The telecommunications companies have been quieter, though Verizon unsuccessfully challenged a court order for bulk phone records in 2014.

At the same time, the government has been fighting in court to keep the identities of its telecom partners hidden. In a recent case, a group of AT&T customers claimed that the N.S.A.’s tapping of the Internet violated the Fourth Amendment protection against unreasonable searches. This year, a federal judge dismissed key portions of the lawsuit after the Obama administration argued that public discussion of its telecom surveillance efforts would reveal state secrets, damaging national security.

The N.S.A. documents do not identify AT&T or other companies by name. Instead, they refer to corporate partnerships run by the agency’s Special Source Operations division using code names. The division is responsible for more than 80 percent of the information the N.S.A. collects, one document states.

Fairview is one of its oldest programs. It began in 1985, the year after antitrust regulators broke up the Ma Bell telephone monopoly and its long-distance division became AT&T Communications. An analysis of the Fairview documents by The Times and ProPublica reveals a constellation of evidence that points to AT&T as that program’s partner. Several former intelligence officials confirmed that finding.

A Fairview fiber-optic cable, damaged in the 2011 earthquake in Japan, was repaired on the same date as a Japanese-American cable operated by AT&T. Fairview documents use technical jargon specific to AT&T. And in 2012, the Fairview program carried out the court order for surveillance on the Internet line, which AT&T provides, serving the United Nations headquarters. (N.S.A. spying on United Nations diplomats has previously been reported, but not the court order or AT&T’s involvement. In October 2013, the United States told the United Nations that it would not monitor its communications.)

The documents also show that another program, code-named Stormbrew, has included Verizon and the former MCI, which Verizon purchased in 2006. One describes a Stormbrew cable landing that is identifiable as one that Verizon operates. Another names a contact person whose LinkedIn profile says he is a longtime Verizon employee with a top-secret clearance.

After the terrorist attacks of Sept. 11, 2001, AT&T and MCI were instrumental in the Bush administration’s warrantless wiretapping programs, according to a draft report by the N.S.A.’s inspector general. The report, disclosed by Mr. Snowden and previously published by The Guardian, does not identify the companies by name but describes their market share in numbers that correspond to those two businesses, according to Federal Communications Commission reports.

AT&T began turning over emails and phone calls “within days” after the warrantless surveillance began in October 2001, the report indicated. By contrast, the other company did not start until February 2002, the draft report said.

In September 2003, according to the previously undisclosed N.S.A. documents, AT&T was the first partner to turn on a new collection capability that the N.S.A. said amounted to a “ ‘live’ presence on the global net.” In one of its first months of operation, the Fairview program forwarded to the agency 400 billion Internet metadata records — which include who contacted whom and other details, but not what they said — and was “forwarding more than one million emails a day to the keyword selection system” at the agency’s headquarters in Fort Meade, Md. Stormbrew was still gearing up to use the new technology, which appeared to process foreign-to-foreign traffic separate from the post-9/11 program.

In 2011, AT&T began handing over 1.1 billion domestic cellphone calling records a day to the N.S.A. after “a push to get this flow operational prior to the 10th anniversary of 9/11,” according to an internal agency newsletter. This revelation is striking because after Mr. Snowden disclosed the program of collecting the records of Americans’ phone calls, intelligence officials told reporters that, for technical reasons, it consisted mostly of landline phone records.

That year, one slide presentation shows, the N.S.A. spent $188.9 million on the Fairview program, twice the amount spent on Stormbrew, its second-largest corporate program.

After The Times disclosed the Bush administration’s warrantless wiretapping program in December 2005, plaintiffs began trying to sue AT&T and the N.S.A. In a 2006 lawsuit, a retired AT&T technician named Mark Klein claimed that three years earlier, he had seen a secret room in a company building in San Francisco where the N.S.A. had installed equipment.

Mr. Klein claimed that AT&T was providing the N.S.A. with access to Internet traffic that AT&T transmits for other telecom companies. Such cooperative arrangements, known in the industry as “peering,” mean that communications from customers of other companies could end up on AT&T’s network.

After Congress passed a 2008 law legalizing the Bush program and immunizing the telecom companies for their cooperation with it, that lawsuit was thrown out. But the newly disclosed documents show that AT&T has provided access to peering traffic from other companies’ networks.

AT&T’s “corporate relationships provide unique accesses to other telecoms and I.S.P.s,” or Internet service providers, one 2013 N.S.A. document states.

Because of the way the Internet works, intercepting a targeted person’s email requires copying pieces of many other people’s emails, too, and sifting through those pieces. Plaintiffs have been trying without success to get courts to address whether copying and sifting pieces of all those emails violates the Fourth Amendment.

Many privacy advocates have suspected that AT&T was giving the N.S.A. a copy of all Internet data to sift for itself. But one 2012 presentation says the spy agency does not “typically” have “direct access” to telecoms’ hubs. Instead, the telecoms have done the sifting and forwarded messages the government believes it may legally collect.

“Corporate sites are often controlled by the partner, who filters the communications before sending to N.S.A.,” according to the presentation. This system sometimes leads to “delays” when the government sends new instructions, it added.

The companies’ sorting of data has allowed the N.S.A. to bring different surveillance powers to bear. Targeting someone on American soil requires a court order under the Foreign Intelligence Surveillance Act. When a foreigner abroad is communicating with an American, that law permits the government to target that foreigner without a warrant. When foreigners are messaging other foreigners, that law does not apply and the government can collect such emails in bulk without targeting anyone.

AT&T’s provision of foreign-to-foreign traffic has been particularly important to the N.S.A. because large amounts of the world’s Internet communications travel across American cables. AT&T provided access to the contents of transiting email traffic for years before Verizon began doing so in March 2013, the documents show. They say AT&T gave the N.S.A. access to “massive amounts of data,” and by 2013 the program was processing 60 million foreign-to-foreign emails a day.

Because domestic wiretapping laws do not cover foreign-to-foreign emails, the companies have provided them voluntarily, not in response to court orders, intelligence officials said. But it is not clear whether that remains the case after the post-Snowden upheavals.

“We do not voluntarily provide information to any investigating authorities other than if a person’s life is in danger and time is of the essence,” Brad Burns, an AT&T spokesman, said. He declined to elaborate.
http://www.nytimes.com/2015/08/16/us...t-traffic.html





A Few Thoughts on Cryptographic Engineering
Matthew Green

Yesterday the New York Times and ProPublica posted a lengthy investigation based on leaked NSA documents, outlining the extensive surveillance collaboration between AT&T and the U.S. government. This surveillance includes gems such as AT&T's assistance in tapping the main fiber connection supporting the United Nations, and that's only the start.

The usual Internet suspects are arguing about whether this is actually news. The answer is both yes and no, though I assume the world at large will mostly shrug. After all, we've learned so much about the NSA's operations at this point that we're all suffering from revelation-fatigue. It would take a lot to shock us now.

But this isn't what I want to talk about. Instead, the effect of this story was to inspire me to look back on the NSA leaks overall, to think about what they've taught us. And more importantly -- what they mean for the design of the Internet and our priorities as security engineers. That's what I'm going to ruminate about below.

The network is hostile

Anyone who has taken a network security class knows that the first rule of Internet security is that there is no Internet security. Indeed, this assumption is baked into the design of the Internet and most packet-switched networks -- systems where unknown third parties are responsible for handling and routing your data. There is no way to ensure that your packets will be routed as you want them, and there's absolutely no way to ensure that they won't be looked at.

Indeed, the implications of this were obvious as far back as ARPANET. If you connect from point A to point B, it was well known that your packets would traverse untrusted machines C, D and E in between. In the 1970s the only thing preserving the privacy of your data was a gentleman's agreement not to peek. If that wasn't good enough, the network engineers argued, you had to provide your own security between the endpoints themselves.

My take from the NSA revelations is that even though this point was 'obvious' and well-known, we've always felt it more intellectually than in our hearts. Even knowing the worst was possible, we still chose to believe that direct peering connections and leased lines from reputable providers like AT&T would make us safe. If nothing else, the NSA leaks have convincingly refuted this assumption.

We don't encrypt nearly enough

The most surprising lesson of the NSA stories is that 20 years after the development of SSL encryption, we're still sending vast amounts of valuable data in the clear.

Even as late as 2014, highly vulnerable client-to-server connections for services like Yahoo Mail were routinely transmitted in cleartext -- meaning that they weren't just vulnerable to the NSA, but also to everyone on your local wireless network. And web-based connections were the good news. Even if you carefully checked your browser connections for HTTPS usage, proprietary extensions and mobile services would happily transmit data such as your contact list in the clear. If you noticed and shut down all of these weaknesses, it still wasn't enough -- tech companies would naively transmit the same data through vulnerable, unencrypted inter-datacenter connections where the NSA could scoop them up yet again.

There is a view in our community that we're doing much better now, and to some extent we may be. But I'm less optimistic. From an attacker's point of view, the question is not how much we're encrypting, but rather, which valuable scraps we're not protecting. As long as we tolerate the existence of unencrypted protocols and services, the answer is still: way too much.

It's the metadata, stupid

Even if we, by some miracle, manage to achieve 100% encryption of communications content, we still haven't solved the whole problem. Unfortunately, today's protocols still leak a vast amount of useful information via session metadata. And we have no good strategy on the table to defend against it.

Examples of metadata leaked by today's protocols include protocol type, port number, and routing information such as source and destination addresses. It also includes traffic characteristics, session duration, and total communications bandwidth. Traffic analysis remains a particular problem: even knowing the size of the files requested by a TLS-protected browser connection can leak a vast amount of information about the user's browsing habits.
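
To make the traffic-analysis point concrete, the sketch below opens a TLS connection and tallies the bytes received -- roughly the signal a passive on-path observer gets even though the content itself is encrypted. (A minimal sketch: the hostname is a placeholder, and the decrypted byte count is used as a close proxy for the ciphertext volume visible on the wire.)

```python
# Even with TLS, an on-path observer sees the destination address and port,
# the SNI hostname sent in the clear, and per-connection byte counts --
# often enough to fingerprint which page was fetched.
import socket
import ssl

def observable_volume(host: str, path: str = "/") -> int:
    """Fetch a page over HTTPS and return total bytes received (a close
    proxy for the ciphertext volume a passive observer could tally)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        # server_hostname travels unencrypted as SNI -- metadata leak #1
        with ctx.wrap_socket(raw, server_hostname=host) as conn:
            req = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
            conn.sendall(req.encode())
            total = 0
            while chunk := conn.recv(4096):
                total += len(chunk)
            return total

# Two different pages on the same site typically yield measurably different
# totals -- a simple traffic-analysis fingerprint.
print(observable_volume("example.com", "/"))
```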

Absolutely none of this is news to security engineers. The problem is that there's so little we can do about it. Anonymity networks like Tor protect the identity of endpoints in a connection, but they do so at a huge cost in additional bandwidth and latency -- and they offer only limited protection in the face of a motivated global adversary. IPSec tunnels only kick the can to a different set of trusted components that themselves can be subverted.

'Full take' culture

Probably the most eye-opening fact of the intelligence leaks is the sheer volume of data that intelligence agencies are willing to collect. This is most famously exemplified by the U.S. bulk data collection and international call recording programs -- but for network engineers the more worrying incarnation is "full take" Internet collection devices like TEMPORA.

If we restrict our attention purely to the collection of such data -- rather than how it's accessed -- it appears that the limiting factors are almost exclusively technical in nature. In other words, the amount of data collected is simply a function of processing power, bandwidth and storage. And this is bad news for our future.

That's because while meaningful human communication bandwidth (emails, texts, Facebook posts, Snapchats) continues to increase substantially, storage and processing power increase faster. With some filtration, and no ubiquitous encryption, 'full take' is increasingly going to be the rule rather than the exception.

We've seen the future, and it's not American

Even if you're not inclined to view the NSA as an adversary -- and contrary to public perception, that view is not uniform even inside Silicon Valley -- the NSA is hardly the only intelligence agency capable of subverting the global communications network. Nations like China are increasingly gaining market share in telecommunications equipment and services, especially in developing parts of the world such as Africa and the Middle East.

While it's cheap to hold China out as some sort of boogeyman, it's significant that someday a large portion of the world's traffic will flow through networks controlled by governments that are, at least to some extent, hostile to the core values of Western democracies.

If you believe that this is the future, then the answer certainly won't involve legislation or politics. The NSA won't protect us through cyber-retaliation or whatever plan is on the table today. If you're concerned about the future, then the answer is to finally, truly believe our propaganda about network trust. We need to learn to build systems today that can survive such an environment. Failing that, we need to adjust to a very different world.
http://blog.cryptographyengineering....s-hostile.html





Jeb Bush Wants “a New Arrangement with Silicon Valley” to Ease Crypto

Y'know, because only "evildoers" want to protect their communications.
Cyrus Farivar

The former Florida governor's statement puts him not only at odds with rival Republican candidates like Rand Paul, but also at odds with a number of government committees and federal judges.

“If you create encryption, it makes it harder for the American government to do its job—while protecting civil liberties—to make sure that evildoers aren’t in our midst,” Bush said in South Carolina at an event sponsored by Americans for Peace, Prosperity, and Security, according to The Intercept.

Bush claimed that there was “no evidence” that the bulk collection by the National Security Agency violated civil liberties, despite the fact that the Privacy and Civil Liberties Oversight Board and others have found otherwise.

.@JebBush: NSA metadata collection is not a violation of anyone's freedoms #APPSForum

— APPS (@APPSUSA) August 18, 2015

He concluded by saying that there needs to be “a new arrangement with Silicon Valley in this regard,” without providing any salient details.

So far, Bush has not proved himself to be particularly technologically adept during the presidential campaign. Earlier this year, in an attempt to be more transparent, the brother of President George W. Bush published six Outlook files full of all of his unredacted correspondence. In the process, Jeb Bush created a trove of full names connected with personal e-mail addresses, home addresses, phone numbers, and even Social Security numbers.
http://arstechnica.com/tech-policy/2...to-do-its-job/





Sen. Dianne Feinstein is Worried Net Neutrality Might Help the Terrorists
Russell Brandom

In a remarkable feat, internet providers have apparently succeeded in making the net neutrality fight about terrorism. In a newly published letter delivered to the Federal Communications Commission in May, Sen. Dianne Feinstein (D-CA) raised concerns that the new net neutrality rules might be used to shield terrorists. In particular, Feinstein was concerned that Dzhokhar Tsarnaev had studied bomb-making materials on the internet — specifically, online copies of AQAP's Inspire magazine — and that many broadband providers had complained to her that net neutrality rules would prevent them from honoring any orders to block that content.

It's quite a bind, and in the letter, Feinstein entreats FCC chair Tom Wheeler to assure providers that it isn't true. The senator acknowledges that there are laws against material support for terrorism, and Title II only applies to legal web traffic, but "nonetheless, there is apparently confusion among at least some broadband providers on whether they may take such actions in order to promote national security and law enforcement purposes."

"Fast lane or no, you can still pull someone over"

This argument is nonsense for at least three different reasons. For one, there's no current effort to wipe Inspire off the internet entirely, nor is it clear what those grounds would be. If law enforcement agencies do want to take down a network of sites as a result of criminal activity, there's a clear process for them to do so. In fact, this happens all the time! Here's one example; here's another. This is not a real problem facing law enforcement agencies, and even if it were, it has nothing to do with Title II. The same Title II regulations have applied to landline telephones for years, and that hasn't stopped cops from singling out specific phone numbers for wiretaps or more drastic measures. Fast lane or no, you can still pull someone over if you've got the evidence to justify it.

In other words, this isn't about terrorism; it's about broadband providers doing whatever they can to throw a wrench in the FCC's net neutrality proposals. After countless ill-fated lawsuits, providers seem to have decided that making a counter-terrorism case is their best bet, and Senator Feinstein, never one to back down from a counter-terrorism fight, seems to have taken the bait. Of course, it's alarming to see the specter of recent terrorist killings being used to cynically further an unrelated domestic policy agenda, but hopefully this is just a one-off kind of thing.
https://www.theverge.com/2015/8/14/9...net-neutrality





How New 'White Space' Rules Could Lead to an Urban Super-Wi-Fi
Erika Morphy

The underutilized UHF band is perfect for wireless data and can carry for miles, not blocked by walls or trees, researchers at Rice University say. And they have the test to prove it.

Earlier this month, the Federal Communications Commission adopted rules for unlicensed services in TV and 600 MHz bands — a.k.a. television's "white space." The new rules, as described by the FCC, "will permit unlicensed fixed and personal/portable white space devices and unlicensed wireless microphones to use channels in the 600 MHz and television broadcast bands."

Television and other licensed services — patient-monitoring devices, for example — will be protected from harmful interference.

It must be noted, though, that these licensed users, a category that includes hospital neonatal care units, have voiced fears that they won't be adequately protected from interference.

These very valid concerns aside — and we will revisit them — the rules do open up a new aspect of broadband and mobile connectivity.

A second life for buffer channels

First, though, a primer for those who still associate white space with 1970s-era television.

White space, or buffer channels, refers to the unused channels between active broadcast channels in the VHF and UHF bands. In the pre-cable era, when over-the-air broadcasts ruled the day, these buffers prevented broadcasters from interfering with one another. We all know how prevalent over-the-air broadcasting is now; today this spectrum is largely unused.

Or is it?

As Commissioner Jessica Rosenworcel noted in her own comments after the rules were passed, the universe of products that use this spectrum is surprisingly large. The odds are, she said, that any given individual uses, has used, or will use an unlicensed device that became authorized under the new rules.

It could be the shiny new tablet or laptop you used to go online with coffee and Wi-Fi this morning. Or maybe it was the old cordless phone you dusted off to make a quick call. It could have been the baby monitor you used overnight or the remote control you pressed in the morning to get out of the garage.

The use of unlicensed spectrum is a part of everyday modern life, she essentially said, so better for the FCC to establish the parameters for those devices that are already operating in these bands.

The makers of the aforementioned unlicensed devices are surely pleased with the FCC move. But its action also opens another door for greater broadband connectivity.

It's all about broadband

Even two years ago, it was recognized that the white space spectrum could offer cost-effective wireless broadband connectivity in rural areas and for machine-to-machine communications, according to a Strategy Analytics report at that time.

Indeed, there has been interest in this spot on the spectrum for a while, from both companies and the FCC, which first approved its use in 2008.

Super Wi-Fi in the city

A recent test at Rice University shows that the white space spectrum can be used in urban areas as well. Carriers are all over city markets, of course, so broadband there is not a problem. What the researchers are proposing, however, is something that may not be much to carriers' liking: a super Wi-Fi network knitted together with next-generation TVs or smart remotes.

In June engineers from the school demonstrated that wireless data could be transmitted over UHF channels during active TV broadcasts without interference.

The UHF spectrum, which ranges from 400 to 700 MHz, is superior to the higher-frequency signals used for existing Wi-Fi hotspots, the researchers said, as these signals carry for miles and are not blocked by walls or trees.
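
That propagation advantage is easy to quantify with the standard free-space path loss formula. A minimal sketch (the 5.8 GHz comparison point is our choice, standing in for a typical existing Wi-Fi hotspot band):

```python
# Free-space path loss (FSPL) grows with frequency, which is why lower
# UHF frequencies "carry for miles" compared with Wi-Fi bands.
#   FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
from math import log10

def fspl_db(d_km: float, f_mhz: float) -> float:
    return 20 * log10(d_km) + 20 * log10(f_mhz) + 32.44

print(f"600 MHz at 1 km: {fspl_db(1, 600):.1f} dB")    # ~88 dB
print(f"5.8 GHz at 1 km: {fspl_db(1, 5800):.1f} dB")   # ~108 dB
# ~20 dB less path loss means a 600 MHz signal can reach roughly 10x
# farther for the same link budget -- and UHF also penetrates walls and
# foliage far better.
```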

The technology that lead researcher Edward Knightly and Rice graduate student Xu Zhang developed is called "Wi-Fi in Active TV Channels," or WATCH. They received FCC approval to test it at the Rice campus in 2014, basing it on WARP, or "wireless open-access research platform."

The bottom line about WATCH: it requires no coordination with, or changes to, legacy TV transmitters, according to the researchers. It also solves a very practical problem: most of the UHF band is already allocated in U.S. cities, yet it remains largely underutilized.

Instead, according to their paper:

TV signals are broadcast as normal and the WATCH system actively monitors whenever a nearby TV is tuned to a channel to avoid interfering with reception. The technology to allow this comes in two parts. One aspect of WATCH monitors TV broadcasts on a channel and uses sophisticated signal-canceling techniques to insert wireless data transmissions into the same channel; that eliminates TV broadcasts from interfering with the super Wi-Fi data signals being sent to computer users.

"Perfectly suited for wireless data"

It should be noted that carriers such as AT&T and related associations such as the National Association of Broadcasters objected to the FCC rules in the run up to the commission's August meeting, citing concerns that new unlicensed uses in the 600 MHz band would create interference.

Another commissioner, Ajit Pai, acknowledged these concerns in his own published comments, even though he voted to approve the rules. He pointed to tests showing that even a 5% loss of spectrum capacity due to interference would lower spectrum values by 9%.

Little was said, however, about the competitive threat these rules might pose to mobile broadband providers.

As Knightly said, "The UHF band is perfectly suited for wireless data."
http://www.computerworld.com/article...per-wi-fi.html





Comcast VP: 300GB Data Cap is “Business Policy,” Not Technical Necessity

Exec who manages data cap measurements doesn't know why Comcast caps are so low.
Jon Brodkin

Why does Comcast Internet service have a 300GB monthly data cap?

When asked that question today, Comcast's vice president of Internet services, Jason Livingood, said that he doesn't know, because setting the monthly data limit is a business decision, not one driven by technical necessity.

"Cable Cares," a parody account on Twitter, asked Livingood, "Serious question, why are Comcast's caps set so low compared to the speeds they're being sold at? 100mbps can hit 300GB in 6hr~."

"No idea—I'm involved on the engineering side to manage the measurement systems but don't weigh in on the business policies," Livingood responded.

We've asked Comcast officials if there are any technology benefits from imposing the caps or technology reasons for the specific limits chosen but haven't heard back yet.

Livingood's statement probably won't come as any surprise to critics of data caps who argue that the limits raise prices and prevent people from making full use of the Internet without actually preventing congestion.

Comcast, the nation's largest cable and broadband provider, has claimed that its data caps are not actually "data caps," because customers can use as much data as they want. They just get charged extra when they exceed the monthly limit. This is the same setup used by wireless carriers; pretty much everyone calls them data caps, and that's what we call them, too.

For now, Comcast only imposes the caps in parts of its territory and doesn't charge the overage penalty until the fourth time in each 12-month period that customers exceed the cap. This is to help customers "get accustomed to the new data usage plan," Comcast says. The overage penalties are $10 for each additional 50GB.

A year ago, Comcast Executive VP David Cohen said the data caps could be rolled out across Comcast's entire territory within five years.

The caps aren't applied identically to every service plan. Some higher-speed services have caps of up to 600GB, while Comcast's new 2Gbps fiber-to-the-home service reportedly doesn't have a cap. On the other hand, Comcast offers a "flexible-data option" with extremely low limits. Instead of 300GB per month, the flexible-data option provides just 5GB of data each month and charges customers an extra $1 for each gigabyte they use beyond the 5GB. Customers on this plan get a $5 credit if they do not exceed the 5GB limit.

Comcast set a data cap of 250GB per month in 2008, saying that its policy was to "contact the top users of our high-speed Internet service and ask them to curb their usage." Comcast suspended the 250GB cap in 2012, replacing it with the current system with overage fees.

The data caps of Comcast and other providers may soon come under additional scrutiny at the Federal Communications Commission, which is required by Congress to take action if broadband isn't being deployed to all Americans in a reasonable and timely fashion. In the past, the FCC's annual broadband deployment analyses have focused on speeds and availability. But the commission's next analysis may evaluate whether pricing and data caps are also preventing adoption of broadband.

Late last year, the US Government Accountability Office urged the FCC to examine data caps closely, saying that providers who face little competition may abuse caps to impose higher prices on consumers.

When the GAO surveyed Internet providers, wireline companies told government officials that "congestion is not currently a problem," but that usage-based pricing would generate more revenue to help fund network capacity upgrades.
http://arstechnica.com/business/2015...cal-necessity/





Barcelona: The Most Wired City in the World
Vivienne Walt

It’s a showcase for the “smart” metropolis of the future—in which tech giants like Cisco, Microsoft, and IBM see big profits in helping governments save by tracking data on everything from garbage to traffic to selfies. But not everyone is happy about this new urban reality.

The sun is still high in the sky on a June evening in Barcelona when Juan Blanco, Cisco’s business development director for southern Europe, takes me on a walk through the city’s medieval quarter, with its twisting alleyways and sidewalk cafés spilling over with people. As we enter a cobblestone square where the centuries-old El Born market hall stands, Blanco asks, “Notice anything unusual here?” At first I don’t. It looks like a typical Mediterranean city in full summer splendor, with nothing out of place—nothing, that is, until I look up and spot the curved plastic shields affixed to the lampposts at a height of about 30 feet, each with a few metal boxes inside. “Those?” I ask.

Those, indeed. The boxes are no regular electricity meters. They are fine-tuned computer systems, capable of measuring noise, traffic, pollution, crowds, even the number of selfies posted from the street. They are the future of Barcelona, and in some sense they are the future for all of us too. The hard drives are just one piece of what is “unusual” on this street, in fact. Cast your eyes down, and you might spot the digital chips plugged into garbage containers, or the soda-can-size sensors rammed into the asphalt under the parking spaces.

Then again, you might not notice anything. Discreet and largely unannounced, the changes in Barcelona have slipped by even observant residents and the millions of tourists who pour into Spain’s second-biggest city every summer to soak up its tapas, music, and beaches. Yet the stealthy transformation is profound and potentially so sweeping that no one is sure where it will lead. “Our lives have changed totally in the past 10 years thanks to smartphones,” says Josép Ramon Ferrer, a telecom engineer who until late June was Barcelona’s smart-city director, charged with shepherding its digital overhaul. “The management of cities has not changed that much until now. But in the next 10 years, cities will change totally.”

If you want a glimpse of the very near future, one good place to start is this graceful, breezy seaside city of about 2 million people. In times past, Barcelona was famous for its revolutionary artists, like the painter Joan Miró and the architect Antoni Gaudí. But in just the last four years it has carved out a role in a revolution of a different kind: creating a blueprint for the city of the 21st century at a moment when urban dwelling is ever more predominant. According to the UN, about 84% of people will live in cities by the end of the century.

Whether in cities or villages, our modern lives are already saturated in vast amounts of data. The dimensions are almost impossible to grasp: 104,000 YouTube videos are streamed every second, and 2.4 million emails are sent per second. Or rather, those were the figures at the time that sentence was written; they are accelerating at warp speed. The market intelligence organization International Data Corp. (IDC) estimates that by 2020, about 30 billion embedded devices—the Internet of Everything—will monitor and manage countless activities in our lives, from the moment we awake to the moment we fall asleep, from catching the bus to filling the refrigerator, walking the dog, and watering the garden.

For cities the possibilities seem endless. Officials around the world who find themselves grappling with tight budgets and rocketing bills have seized on this tsunami of data as a way to cut costs and overhaul systems that have barely changed in decades. Juniper Research, which this year ranked Barcelona its No. 1 smart city, estimates cities will save about $17 billion a year in energy bills by 2019 by installing smart streetlights and devices like parking and garbage sensors. “The smart-city concept is barely off the ground,” says Juniper senior analyst Steffen Sorrell. “The endgame figure will be much larger.” Indeed, McKinsey Global Institute says in a June report that by 2025 cities will save up to $1.7 trillion a year in delivering services if they deploy new digital systems on a large scale.

For all the eye-popping estimates of future savings, the financial promise remains largely abstract, however. Cities sign on, many at little cost, believing the projects will pay off big. In Europe the EU has committed funds for some cities to upgrade their systems, masking the true expense. “Many of these are pilot projects funded by suppliers or R&D funds,” says Eric Woods, research director in London for the U.S. analytics company Navigant Research. “It will become so cheap to include a sensor in a waste collection point, and to collect the data, that service operators will do it as a default.”

Clearly city officials are hoping the outsize estimates of savings prove true. Boston, for example, has inserted sensors to monitor transportation, parking, and energy use, and installed solar-powered street benches that measure pollution and noise. London is developing 3-D maps of its underground wires and pipes to try to stop different utilities from repeatedly digging up the same roads. Hamburg’s port, which handles about 10,000 ships a year, recently computerized its loading systems to synchronize offloading and reduce diesel-choking traffic jams.

The smart-city rush extends beyond the affluent West. Indian Prime Minister Narendra Modi promised during his election campaign last year to build or retrofit 100 wired metropolises by 2022 at a cost of $1 trillion. In June the Indian government published a call for companies to compete for a major rollout of smart-city systems. Its “request for interest” report makes enticing reading for tech companies: The government estimates that about 500 million people—40% of the country’s population—will live in Indian cities by 2030. And all of those, it says, have a “crying need” for high-tech infrastructure. The benefits for cities seem clear: more ordered, clean, coordinated services, at lower cost.

The tech companies that build this new infrastructure stand to gain even more. IBM, Cisco, and Microsoft, all of which have invested heavily in developing and manufacturing pieces of the infrastructure, see cities as a key to growth. Navigant estimates that by 2023 technology companies will do about $27.5 billion a year in smart-city business.

To hear Blanco of Cisco tell it, the world is moving to a business equivalent of the clash between ancient city-states. “In the 19th century empires competed. In the 20th century countries competed,” he says. “In the 21st century cities compete.” He tears a piece of paper from my notebook and draws a graph with a straight upward slope. “Cisco’s revenue over the last 20 years has very closely tracked the growth of Internet use,” Blanco says. “This is very simple math. The more people there are on the Internet, the more we grow.”

The term “smart cities” was barely in use when Cisco began testing its ideas in Barcelona in 2011. The company had invested heavily in Songdo, a South Korean business district built from scratch almost as a high-tech experiment, with a network of sensors controlling everything from escalators (they move only when someone steps on them) to classrooms (remote connections with schools abroad). But there were limited numbers of brand-new “green field” places in which to invest. Cisco knew it would need to sell systems to creaking old cities if it was to grow its smart-city business.

Barcelona, which dates back to the ancient Romans, proved an ideal candidate. Its mayor, Xavier Trias (one of Fortune’s 50 World’s Greatest Leaders last year), had argued during his election campaign in 2011 that Barcelona’s economic future would increasingly depend on digitizing its public services. Spain was then languishing through its worst recession in decades, deep in debt, with one in four young Spaniards out of work. Hundreds of thousands of protesters calling themselves indignados were regularly storming the streets of Barcelona and Madrid, burning barricades and raging against austerity budgets.

To tech companies, none of that mattered. Barcelona already had a thriving startup scene: As part of hosting the Summer Olympics in 1992, it had converted its abandoned textile-factory district into a tech hub called @22, which now houses dozens of startups, and laid a network of fiber-optic cables that today covers 310 square miles. The existing fiber-optic cables alone cut the upfront cost of the smart-city programs from what might have been 300 million euros to about 30 million euros, according to Ferrer. Another factor made Barcelona gold for tech giants: Barcelona Football Club, one of the richest, best soccer teams on the planet, and the Mobile World Congress, which about 90,000 tech executives and journalists attend each March. “This city has international branding,” Blanco says. “So anything we develop we can expose to the rest of the world.”

Like most cities, Barcelona had added services haphazardly over the decades. Vicente Guillart, who runs the Institution for Advanced Architecture of Catalonia, says he was initially skeptical, believing smart-city tech was “just about companies trying to sell you something.” He was won over after studying Barcelona’s tangled services and signed on to become chief architect under Trias. “Before, the city was organized like silos,” he says. “Lighting didn’t talk to traffic didn’t talk to water. Each had its own budget with its own data and its own cameras.”

Underground, the wastefulness looked worse. “Five years ago you could go into a tunnel under Barcelona, and there would be four or five different telephone cables,” says Cisco’s Blanco. “Each was fiber optic. And each was using about 5% of its capacity.” The solution was to knit the services into one system under a single company—the Spanish tower operator Cellnex Telecom won the bid—to run the network and sell spare capacity, generating revenues for the city.

Only parts of Barcelona have been rewired so far. But the results are already visible. Sensors measure how full trash containers are, allowing garbage trucks to empty them only once they’re filled. Parking-space sensors tell drivers, via a phone app, which are vacant, so they avoid circling around. Barcelona reworked its bus routes into an efficient grid rather than the confusing tangle that existed before, increasing ridership 30% in four years. Electronic bus stops now show schedules and local sights, and could soon have ads tailored to the neighborhood.

Those sleek new lampposts along the grand avenues? They are not for aesthetic beauty. Hollow inside, they have fiber-optic cables running up them. Each has its own IP address, turning it into a telecommunications tower, with the capability to monitor crowds, noise, weather, and traffic from a Wi-Fi router on top. Now if a crowd of drunken tourists wakes up El Born neighborhood at 2 a.m. (a frequent gripe), there is no need to call the cops: They already know the precise decibel level.
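The article doesn't document the lampposts' actual software or protocols, but the architecture it describes (an addressable node periodically reporting noise and crowd measurements to a central operations center) might look roughly like the hypothetical Python sketch below. Every name, field, and endpoint here is invented for illustration.

import json
import random
import time
import urllib.request

# Hypothetical sketch of a lamppost node reporting sensor readings to a
# central collector. Barcelona's real payload format and endpoints are
# not described in this article; everything below is a stand-in.

ENDPOINT = "http://ops-center.example/api/readings"  # invented URL

def take_reading(node_id):
    return {
        "node": node_id,                            # each post is addressable
        "ts": int(time.time()),
        "noise_db": round(random.gauss(55, 8), 1),  # stand-in for a microphone
        "wifi_devices": random.randint(0, 120),     # crude crowd-size proxy
    }

payload = json.dumps(take_reading("el-born-017")).encode()
req = urllib.request.Request(ENDPOINT, data=payload,
                             headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)  # commented out; the collector is imaginary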

Once in place, all this technology is astonishingly simple to manage. One afternoon Jordi Alvinyà, commercial-strategy director for Cellnex, takes me into Barcelona’s high-security control center, which adjoins the tower Norman Foster designed for the Olympics, with jaw-dropping views. Inside, some 10 engineers in shorts and sneakers sit at screens, monitoring beeps and flickers that tell them in real time whether a streetlight is dead or a pipe is leaking. As we walk out, Alvinyà glances at a refrigerator-size box. “All the policing and security of the city is in that,” he says.

Barcelona officials believed another step was crucial: creating an operating system to run the entire city within one interface. In a café on the Passeig de Grácia, the city’s main boulevard, the outgoing smart-city director, Ferrer, whips out his iPhone 6 to show how he hit on the concept. “For me the cities of the future will be like a smartphone,” he says. “We have a lot of hardware, but it isn’t anything unless it interfaces with the OS. If there are 200 platforms and 200 providers, it is a mess and not sustainable.”

In 2012, Barcelona put out a tender for tech companies to create its OS. The bidding was fierce: 18 companies competed in a process that dragged on for months. In the end, Ferrer signed a contract in May with a consortium comprising Accenture, GDF Suez, and Cellnex to build the system, for the pittance of about $1.6 million. “It was nothing,” Ferrer says. “But for the companies, it was a chance for them to deploy solutions for a lot of cities in the world.”

It all seems to make a lot of sense. Certainly it’s logical that digital systems would drive down costs. But the smart-city world is replete with fuzzy projections, and the technology is so new that there are no concrete results to point to. Cisco estimates that Barcelona will see “cumulative economic benefits of 832 million euros by 2025,” including 86.4 million euros in extra tourist spending, but offered little explanation of how it arrived at those figures.

That makes Barcelona a testing lab for a transformation that has only just begun. Four years after Cisco invested in Barcelona, Blanco now shuttles between Barcelona and his home down the coast in Valencia, meeting a parade of city officials who have flown in from New York, Los Angeles, Buenos Aires, Dubai, Qatar, China, Kazakhstan, and elsewhere to examine how they can replicate the smart-city ideas; about 200 delegations have visited during the past year alone. “We don’t necessarily make money in Barcelona,” Blanco says, “but we will make it elsewhere, in other cities.”

Companies are betting there will be billions to gain once Barcelona’s OS is operating and can make sense of the mountains of data the new technology sucks up. Consider for a moment the possibilities of Wi-Fi-fitted lampposts, each with its own IP address, monitoring the numbers of Facebook posts, tweets, or credit card swipes as you stroll by with your smartphone (which identifies where you are from) and withdraw cash, buy shoes, drink a soda, and visit a museum. “We have millions of people coming off cruise ships, many of them Americans with higher incomes,” Cellnex’s Alvinyà tells me.

When I tell Alvinyà I don’t want my movements tracked, he says, “Then use cash and leave your phone at home.” Of course, most people do neither. “It is almost impossible for tourists not to send a photograph to friends back home,” says Alvinyà, describing the gold mine of data the city will finally tap. Then, he says, “we can know where you are from, where you are shopping, and at what hours.”

The prospect of a new incarnation of Big Brother leaves many people uneasy—as Rio de Janeiro discovered. As part of Rio’s World Cup last year, IBM built a command center to knit together 30 service agencies and monitor floods, fires, and other potential disasters. It sounded positive until Mayor Eduardo Paes boasted that it “allows us to have people looking at every corner of the city, 24 hours a day, seven days a week.” His words sparked outrage among some residents.

So far Barcelona has avoided the resentment. And by 2021 it will have its biggest data hub yet: Barcelona Football Club’s new state-of-the-art complex. The team, which has star players Neymar Jr. and Lionel Messi, packs every game in its current 90,000-seat stadium, giving the club an extraordinary 600 million euros in yearly revenues, according to Josep Maria Bartomeu, the club president. But its new 105,000-seat stadium will offer new revenue streams by linking fans directly to merchandising with free Wi-Fi. It will also remove the fences around its 54-acre compound, which draws about 1.7 million tourists a year, creating a smart-city neighborhood in the middle of Barcelona. “We will have a permanent connection with technology,” Bartomeu says. “Companies will get information about people who are there from all over the world.”

That data collection is already underway. In September, Microsoft assigned Bismart, a local big-data startup, to analyze the spending of some of the 2 million people partying at Barcelona’s annual four-day festival. Bismart monitored the credit card swipes of 448,000 tourists. The results were revealing. “We found that French people camp [rather than stay in hotels], and British people don’t spend anything,” Bismart CEO Albert Isern says. In years past, marketing companies would have tried to find those results by questioning tourists. “Manual surveys generally analyze 1% of data. Here we analyze 50%.”

For four years now, Barcelona’s smart-city program has seemed unstoppable. Then came May, when residents ousted Mayor Trias and voted in Ada Colau. She could scarcely be a more jolting contrast. At 41, she is 28 years younger than the business-friendly Trias and rose to fame as a feisty indignado, whom police arrested more than once during protests over apartment evictions. Colau vowed to rein in gentrification and tourism, which many feel are threatening to engulf Barcelona. She triumphed largely by casting Trias’s government as too closely tied to business, too focused on branding the city as a magnet for tech companies. “We have a real commitment to new technologies that go beyond just TV ads titled ‘Smart City,’ ” she told a reporter in April.

When I arrived in mid-June, one day after Colau’s inauguration, it was clear the smart-city brand had taken a knock. In her first month in office, Colau canceled Barcelona’s bid for the 2026 Winter Olympics and announced that the city would not do business with banks implicated in evicting delinquent mortgage holders. “We are going to change our focus to social issues, like analyzing which apartments are unoccupied, for example,” Bismart CEO Isern says. “Data is like a knife,” he says. “We can use it to cut food or kill people.”

So far, Barcelona’s new leaders are not sure what kind of knife smart-city technology is. When I catch Colau outside a meeting one morning, she says she “still needs to study the issue.” But the previous evening Barcelona’s new deputy mayor, Gerardo Pisarello, told me the new regime had a very different view from the smart-city cheerleaders they ousted. “We’ve spoken about smart citizens. That is what we need—not just a smart city,” he tells me. “We want technology to reach the poorest neighborhoods. That is what a ‘smart city’ is to us.”

Barcelona’s political earthquake has shaken smart-city devotees into realizing that not all politicians may share their vision about digitizing their cities, especially given the perpetual lack of cash. Those concerns are not unique to Barcelona. Indians have questioned whether Modi’s new smart cities will exclude poor people. Londoners have pooh-poohed some digital overhauls as a waste of their tax money. The technology might need to prove its true value (which, of course, it can’t do until a sizable locale rolls it out and gives it time to succeed or fail). “The need is clear in the cities,” says Woods of Navigant. “What is less clear is how cities roll out these solutions at scale, and how they will find the financial means to do so.”

In truth, it might be too late for Barcelona’s new mayor to stop the clock, given that many smart-city programs are already underway. The real wrangle will be over whether profits and social good can co-exist in harmony. “The Internet of Everything is going to happen,” says Guillart, the architect. “The only question now is, Who is going to rule it?”
http://fortune.com/2015/07/29/barcelona-wired-city/





The Farmer Who was so Sick of Poor Internet Signal he Built a DIY Mast - and Now it's Giving Him Superfast Broadband

• Farmer Richard Guy, 60, battled for years with slow internet signal at home
• He noticed his mobile's 4G was faster than broadband provided by BT
• So he built his own wooden telephone mast, on which he set up a 4G adaptor
• Father-of-two is now enjoying 'perfect' internet access at super-fast speeds

Alisha Rouse

Living on a farm nestled in a remote area of Salisbury Plain, Richard Guy had battled for years with an unbearably slow internet signal.

But the 60-year-old farmer decided enough was enough and resolved to take on telecoms giant BT and find an alternative source.

Mr Guy noticed that his mobile phone's 4G signal – a wireless internet connection – was significantly faster than the broadband link provided by BT to his home, but he needed to find a way to route the signal to his farmhouse.

So the savvy father-of-two built his own makeshift wooden telephone mast, on which he set up a 4G adaptor inside a toolbox.

He then connected this to his home via a system of wires – and he was soon enjoying 'perfect' internet access at super-fast speeds.

The father of two said: 'It's a big problem for people in rural areas. The Government told us that the Olympics would bring fast broadband to everyone in Britain.

'Well, the Olympics were some time ago now. The world assumes that everyone is online, but the 5 per cent who can't connect are just dismissed.

'So I decided to take matters into my own hands. We only had a 1 Mbps [megabits per second] speed, which means everything is far too slow. Now I run at 69 Mbps, it runs everything perfectly.'

The average speed households across the country enjoy is around 25 Mbps.
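A quick sketch puts those speeds in concrete terms. The Python below computes download times for a 1.5GB file (roughly a standard-definition film; the file size and plan labels are our illustration) at the speeds quoted in this story.

# Download times for a 1.5GB file at the connection speeds quoted here.
# Decimal units (1 GB = 8,000 megabits); protocol overhead would add more.

speeds_mbps = {
    "before (BT line)": 1,
    "national average": 25,
    "after (4G mast)": 69,
}

FILE_GB = 1.5

for label, mbps in speeds_mbps.items():
    seconds = FILE_GB * 8000 / mbps
    if seconds >= 3600:
        print(f"{label}: {seconds / 3600:.1f} hours")
    else:
        print(f"{label}: {seconds / 60:.1f} minutes")

# Prints roughly: 3.3 hours before, 8.0 minutes at the national average,
# and 2.9 minutes on the 4G mast.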

Mr Guy added: 'When I spoke to the fibre-optic people [who provide wires to transmit an internet connection], they were very intrigued.

'They said 'you're going to do what? Put it in a box up a pole, are you crazy?' They normally deal with people like Google and IBM.'

Mr Guy, who has worked in IT since the 1980s, had found that the strongest 4G signal was on farmland miles away from his house.

He fitted a 4G dongle, which is a type of adaptor, inside a waterproof toolbox two thirds of the way up a pair of wooden poles.

The adaptor, which is powered by a 12V battery topped up by two small solar panels, then converts the internet signal into a form that allows it to run along relatively cheap fibre-optic cables, costing £1 per metre, to his home.

Mr Guy and his wife Gilly, who is also 60, have now started a company called Agri-Broadband, which aims to get super-fast internet connections to Britain's most rural homes.

'I think at the start Gilly didn't think it would work,' Mr Guy said. 'But she's very supportive and helps with all sides of the business.

'I just love seeing the expression on someone's face when you show them it's possible that they, having been left out in the middle of nowhere, can get serious broadband.

'But I turn up in a dirty Range Rover and this old geezer gets out and people think 'he's not going to solve this'. I think they're expecting some young techie, but then it works and they're amazed.'

The farmer uses Ofcom's mobile network website to determine where the best signals can be found in rural areas, usually within small valleys and hills.

Mr Guy said his next customer, who will have a specialist trench dug on his farm in the Cotswolds in September, has a connection of just 0.4 Mbps, adding: 'He's trying to run a business on that, so he's delighted.'
http://www.dailymail.co.uk/news/arti...-DIY-mast.html





Company Pays FCC $750,000 for Blocking Wi-Fi Hotspots at Conventions

Wanted to force convention-goers to purchase $80/day Wi-Fi access.
Dan Goodin

A Wi-Fi service provider has agreed to pay the Federal Communications Commission $750,000 for blocking personal mobile hotspots used by convention visitors and exhibitors so they could avoid paying the company's $80-per-day fee.

Smart City Holdings automatically blocked users from using their personal cell phone data plans to establish mobile Wi-Fi networks, according to a statement published Tuesday by FCC officials. After the FCC took action against Smart City Holdings, the company pledged to stop the practice and pay the $750,000 fee to settle the matter.

It's the second enforcement action by the FCC taking aim at the blocking of FCC-approved Wi-Fi connections. In October, Marriott Hotel Services reached a $600,000 agreement with the FCC to settle allegations it interfered with and disabled Wi-Fi networks established by consumers in the hotel's conference facilities in Nashville. In January, the FCC issued an enforcement advisory that stated unequivocally Wi-Fi blocking was prohibited. Taken together, the moves should put hotels, convention centers, and just about everyone else on notice that it's unlawful to block FCC-approved Wi-Fi connections.

The FCC's action against Smart City Holdings stemmed from a complaint filed in June 2014 from a company that allows people to establish hotspots as an alternative to paying Wi-Fi service fees charged by a venue. The complaining company said customers couldn't connect to its equipment at several venues where Smart City operated. In responses to FCC investigators, Smart City later revealed it "automatically transmitted deauthentication frames to prevent Wi-Fi users whose devices produced a received signal strength above a preset power level at Smart City access points from establishing or maintaining a Wi-Fi network independent of Smart City's network," according to a consent decree filed in the case.
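The "deauthentication frames" in that passage are a standard piece of 802.11: management frames that tell a client it has been disconnected, and which any radio can forge. That also means the practice is easy to observe passively. Below is a minimal detection sketch using the scapy library; it is not Smart City's system, "wlan0mon" is an assumed interface name, and the card must already be in monitor mode.

from collections import Counter
from scapy.all import sniff, Dot11, Dot11Deauth

# Count 802.11 deauthentication frames seen over 60 seconds. A steady
# stream of deauths aimed at many different clients is the signature of
# the blocking described in the consent decree. Requires root privileges
# and a wireless card in monitor mode ("wlan0mon" is an assumption).

counts = Counter()

def handle(pkt):
    if pkt.haslayer(Dot11Deauth):
        # addr2 = transmitter, addr1 = targeted receiver
        counts[(pkt[Dot11].addr2, pkt[Dot11].addr1)] += 1

sniff(iface="wlan0mon", prn=handle, store=False, timeout=60)

for (sender, target), n in counts.most_common(10):
    print(f"{sender} -> {target}: {n} deauth frames")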

In a statement, Smart City Holdings president Mark Haley said his company in the past used equipment that prevented wireless devices from interfering with operations of exhibitors on convention floors. The activity resulted in less than one percent of all devices being deauthenticated.

"We have always acted in good faith, and we had no prior notice that the FCC considered the use of this standardized, 'available-out-of-the-box' technology to be a violation of its rules. But when we were contacted by the FCC in October 2014, we ceased using the technology in question."

Smart City Holdings charged as much as $80 per day for Wi-Fi connectivity, the FCC said.
http://arstechnica.com/tech-policy/2...t-conventions/





Welcome to the Beginning of the End for Pay TV Bundles
Zach Epstein

It’s the end of the world as we know it, and pay TV companies do not feel fine. In fact, they are hurting quite badly, and things are only going to get worse. Perhaps it’s karmic retribution for years of anti-consumer policies. Or perhaps these behemoths just thought that their lobbying dollars could shield them from reality forever.

Whatever the case, the bottom line is clear and simple: U.S. media giants are in trouble, and we won’t see many tears shed if they topple.

Actually, that’s not true at all — we’ll see plenty of tears shed. As noted by Sector & Sovereign Research’s Paul Sagawa in a recent research note, eight giant U.S. media companies lost a combined $46.2 billion in market capitalization in one day recently, following the news that even pay TV darling ESPN is losing subscribers.

Indeed, we can expect investors to shed plenty of tears on the way down.

Meanwhile, Sagawa noted that Netflix’s subscriber base was up 17% in the most recent quarter, and Google’s monthly YouTube viewership increased 40% on-year. The days of linear TV are unquestionably numbered at this point, as are the days of the traditional pay TV bundle.

“The TV industry’s response has been cautious, licensing live feeds to DISH, SONY and, likely, AAPL for skinny bundle OTT services, but refusing to allow cloud-based DVR functionality,” Sagawa wrote. “Trends suggest that online linear TV may prove less than popular. Hub Entertainment Research recently reported that 53% of all US video viewing is time-shifted – DVR, on-demand, or streaming – with millennials even less likely to watch linear TV. TWX and CBS have jumped in with on-demand streaming versions of their premium channels, but at price points too high to encourage cord cutting.”

The analyst continued, “We believe network TV is at the beginning of a long squeeze between weakening fees and ad sales on one side and rising content costs on the other. Streaming rivals will have increasingly larger scale, better data, and deeper pockets to buy more of the best content, including the life blood of linear TV – live sports. We acknowledge that the exodus that has begun will take many years to complete, but it is, nonetheless, inevitable. Media players that are diversified away from the cable bundle, e.g. DIS, or already moving to solidify their bona fides as streamers, e.g. TWX, may fare better than others, but all will suffer. Meanwhile, NFLX and GOOG should continue to reap the rewards of their dominance for online video.”
https://bgr.com/2015/08/18/pay-tv-su...tting-bundles/





Will Hollywood's Whining Thwart Better TPP Copyright Rules?
Maria Sutton

As far as secret, corporate-driven trade agreements go, the Trans-Pacific Partnership (TPP) is a particularly terrible deal for users, not least because it empowers Hollywood and other big publishers at the expense of everyone else. But there seems to be a glimmer of hope that one critical part of it could be improved. Some tech companies and policymakers are lobbying hard to increase the flexibility of the TPP's language on exceptions and limitations to copyright. According to reports, lobbyists representing companies like Google and other members of the Internet Association and lawmakers like Sen. Ron Wyden have been working behind the scenes to pressure the U.S. Trade Representative (USTR) to reopen the text for amendment.

The USTR first introduced copyright exceptions and limitations language in the TPP in 2012. At that time we called out the proposal as being too weak, noting that it could actually restrict rather than encourage the broader adoption of fair use around the world. A few years and more than a dozen negotiation rounds later, we've been proved right. The provisions that U.S. trade officials first lauded as a huge step towards bringing balance to its copyright proposals will in fact do little to promote new safeguards for user rights.

First, according to the most recently leaked text, the provision is merely a suggestion that TPP nations' copyright rules should balance the needs of rightsholders and the public interest. The language says that countries only "shall endeavor" to achieve a balance in their copyright rules. In every other part of the agreement, countries are actually required to adopt certain rules, or at least provide for the passage of stricter copyright rules.

Second, the framework for nations to be able to enact new user rights in copyrighted work—such as for security research, accessibility, or remixing—is very restrictive. It uses a framework called the three-step test [.pdf]. That test is often used in international copyright agreements and has consistently limited the creation of new usage rights of copyrighted works. Through the TPP, the three-step test could undermine efforts to enact fair use in all the other 11 TPP countries.

As of last month, it seemed that all of the TPP countries had agreed to this language. In late July, however, tech companies' renewed pressure seemed to have changed the game. The USTR offered to go back in and revise these provisions ahead of the last negotiation round. According to a spokesperson for the U.S. Chamber of Commerce, in exchange for support for the controversial Fast Track legislation, the USTR promised to make the TPP's exceptions and limitations language more permissive and to make it a requirement, rather than merely a suggestion, for all TPP countries.

That's when Hollywood began to throw a fit.

According to Inside U.S. Trade, rightsholder groups like the Motion Picture Association of America (MPAA) are "livid" about the USTR's move to revisit the language on exceptions and limitations. They're pushing back hard, urging members of Congress—including every House member from California—to pressure the USTR not to touch these closed provisions. Why? Probably not because revisiting the language will actually cause any real harm to creators. The more likely explanation is that the copyright maximalists are worried that their tight grip over the USTR is slipping.

The big media lobbyists' theatrics over this minor amendment are embarrassing, but they do raise one important issue: our trade negotiators are a lot less interested in the needs of ordinary users and creators than the needs of powerful companies. Why else was a last-minute intervention by Google sufficient to bring the USTR back to the negotiating table on this topic, where the sustained interventions of EFF and 10 other major public interest groups from around the world were not?

That said, we're glad that the tech companies are doing what they can to improve the text in a way that will help protect and empower users. What they're advocating for is completely reasonable language that would enable people to use and modify copyrighted works and content in ways that don't harm the commercial interests of the copyright holders. Of course tech policy should not be driven by competing powerful corporate interests—but in the absence of legitimate, transparent, public-interest policymaking, the tech industry's challenge to big copyright's control over U.S. trade policy is a welcome change. At the very least, it forces officials to question the prerogatives of entrenched legacy industries.

Hollywood groups, for their part, are behaving like spoiled children: if they don't get exactly what they want, they'll whine to policymakers until they do. Ironically enough their complaints may actually undermine their own long-term interests. After all, creative artists of all kinds depend on fair use to make new works—from blockbuster pictures to music to fiction.

The USTR and Members of Congress ought to wake up to Hollywood's antics. Innovation, creativity, and free speech depend on limitations and exceptions like fair use. Making those exceptions and limitations as strong as possible benefits everyone, including Hollywood.
https://www.eff.org/deeplinks/2015/0...opyright-rules





The Creative Apocalypse That Wasn’t

In the digital economy, it was supposed to be impossible to make money by making art. Instead, creative careers are thriving — but in complicated and unexpected ways.
Steven Johnson

On July 11, 2000, in one of the more unlikely moments in the history of the Senate Judiciary Committee, Senator Orrin Hatch handed the microphone to Metallica’s drummer, Lars Ulrich, to hear his thoughts on art in the age of digital reproduction. Ulrich’s primary concern was a new online service called Napster, which had debuted a little more than a year before. As Ulrich explained in his statement, the band began investigating Napster after unreleased versions of one of their songs began playing on radio stations around the country. They discovered that their entire catalog of music was available there for free.

Ulrich’s trip to Washington coincided with a lawsuit that Metallica had just filed against Napster — a suit that would ultimately play a role in the company’s bankruptcy filing. But in retrospect, we can also see Ulrich’s appearance as an intellectual milestone of sorts, in that he articulated a critique of the Internet-era creative economy that became increasingly commonplace over time. “We typically employ a record producer, recording engineers, programmers, assistants and, occasionally, other musicians,” Ulrich told the Senate committee. “We rent time for months at recording studios, which are owned by small-business men who have risked their own capital to buy, maintain and constantly upgrade very expensive equipment and facilities. Our record releases are supported by hundreds of record companies’ employees and provide programming for numerous radio and television stations. ... It’s clear, then, that if music is free for downloading, the music industry is not viable. All the jobs I just talked about will be lost, and the diverse voices of the artists will disappear.”

The intersection between commerce, technology and culture has long been a place of anxiety and foreboding. Marxist critics in the 1940s denounced the assembly-line approach to filmmaking that Hollywood had pioneered; in the ’60s, we feared the rise of television’s “vast wasteland”; the ’80s demonized the record executives who were making money off violent rap lyrics and “Darling Nikki”; in the ’90s, critics accused bookstore chains and Walmart of undermining the subtle curations of independent bookshops and record stores.

But starting with Ulrich’s testimony, a new complaint has taken center stage, one that flips those older objections on their heads. The problem with the culture industry is no longer its rapacious pursuit of consumer dollars. The problem with the culture industry is that it’s not profitable enough. Thanks to its legal troubles, Napster itself ended up being much less important as a business than as an omen, a preview of coming destructions. Its short, troubled life signaled a fundamental rearrangement in the way we discover, consume and (most importantly) pay for creative work. In the 15 years since, many artists and commentators have come to believe that Ulrich’s promised apocalypse is now upon us — that the digital economy, in which information not only wants to be free but for all practical purposes is free, ultimately means that “the diverse voices of the artists will disappear,” because musicians and writers and filmmakers can no longer make a living.

Take a look at your own media consumption, and you can most likely see the logic of the argument. Just calculate for a second how many things you used to pay for that now arrive free of charge: all those Spotify playlists that were once $15 CDs; the countless hours of YouTube videos your kids watch each week; online articles that once required a magazine subscription or a few bucks at the newsstand. And even when you do manage to pull out a credit card, the amounts are shrinking: $9 for an e-book that used to be a $20 hardcover. If the prices of traditional media keep falling, then it seems logical to critics that we will end up in a world in which no one has an economic incentive to follow creative passions. The thrust of this argument is simple and bleak: the digital economy makes it structurally impossible for art to make money in the future. The world of professional creativity, the critics fear, will soon be swallowed by the profusion of amateurs, or the collapse of prices in an age of infinite and instant reproduction will cheapen art so that no one will be able to quit their day jobs to make it — or both.

The trouble with this argument is that it has been based largely on anecdote, on depressing stories about moderately successful bands that are still sharing an apartment or filmmakers who can’t get their pictures made because they refuse to pander to a teenage sensibility. When we do see hard data about the state of the culture business, it usually tracks broad industry trends or the successes and failures of individual entertainment companies. That data isn’t entirely irrelevant, of course; it’s useful to know whether the music industry is making more or less money than it did before Ulrich delivered his anti-Napster testimony. But ultimately, those statistics only hint at the most important question. The dystopian scenario, after all, isn’t about the death of the record business or Hollywood; it’s about the death of music or movies. As a society, what we most want to ensure is that the artists can prosper — not the record labels or studios or publishing conglomerates, but the writers, musicians, directors and actors themselves.

Their financial fate turns out to be much harder to measure, but I set out to try. Taking 1999 as my starting point — the year both Napster and Google took off — I plumbed as many data sources as I could to answer this one question: How is today’s creative class faring compared with its predecessor a decade and a half ago? The answer isn’t simple, and the data provides ammunition for conflicting points of view. It turns out that Ulrich was incontrovertibly correct on one point: Napster did pose a grave threat to the economic value that consumers placed on recorded music. And yet the creative apocalypse he warned of has failed to arrive. Writers, performers, directors and even musicians report their economic fortunes to be similar to those of their counterparts 15 years ago, and in many cases they have improved. Against all odds, the voices of the artists seem to be louder than ever.

The closest data set we have to a bird’s-eye view of the culture industry can be found in the Occupational Employment Statistics, an enormous compendium of data assembled by the Labor Department that provides employment and income estimates. Broken down by general sector and by specific professions, the O.E.S. lets you see both the forest and the trees: You can track employment data for the Farming, Fishing and Forestry Occupations (Group 45-0000), or you can zoom in all the way to the Fallers (Group 45-4021) who are actually cutting down the trees. The O.E.S. data goes back to the 1980s, though some of the category definitions have changed over time. This, and the way the agency collects its data, can make specific year-to-year comparisons less reliable. The best approximation of the creative-class group as a whole is Group 27-0000, or Arts, Design, Entertainment, Sports and Media Occupations. It’s a broader definition than we’re looking for — I think we can all agree that professional athletes are doing just fine, thank you very much — but it gives us a place to start.

The first thing that jumps out at you, looking at Group 27-0000, is how stable it has been over the past decade and a half. In 1999, the national economy supported 1.5 million jobs in that category; by 2014, the number had grown to nearly 1.8 million. This means the creative class modestly outperformed the rest of the economy, making up 1.2 percent of the job market in 2001 compared with 1.3 percent in 2014. Annual income for Group 27-0000 grew by 40 percent, slightly more than the O.E.S. average of 38 percent. From that macro viewpoint, it hardly seems as though the creative economy is in dust-bowl territory. If anything, the market looks as if it is rewarding creative work, not undermining it, compared with the pre-Napster era.

The problem with the O.E.S. data is that it doesn’t track self-employed workers, who are obviously a large part of the world of creative production. For that section of the culture industry, the best data sources are the United States Economic Census, which is conducted every five years, and a firm called Economic Modeling Specialists International, which tracks detailed job numbers for self-employed people in specific professions. If anything, the numbers from the self-employed world are even more promising. From 2002 to 2012, the number of businesses that identify as or employ “independent artists, writers and performers” (which also includes some athletes) grew by almost 40 percent, while the total revenue generated by this group grew by 60 percent, far exceeding the rate of inflation.

What do these data sets have to tell us about musicians in particular? According to the O.E.S., in 1999 there were nearly 53,000 Americans who considered their primary occupation to be that of a musician, a music director or a composer; in 2014, more than 60,000 people were employed writing, singing or playing music. That’s a rise of 15 percent, compared with overall job-market growth during that period of about 6 percent. The number of self-employed musicians grew at an even faster rate: There were 45 percent more independent musicians in 2014 than in 2001. (Self-employed writers, by contrast, grew by 20 percent over that period.)

Of course, Baudelaire would have filed his tax forms as self-employed, too; that doesn’t mean he wasn’t also destitute. Could the surge in musicians be accompanied by a parallel expansion in the number of broke musicians? The income data suggests that this just isn’t true. According to the O.E.S., songwriters and music directors saw their average income rise by nearly 60 percent since 1999. The census version of the story, which includes self-employed musicians, is less stellar: In 2012, musical groups and artists reported only 25 percent more in revenue than they did in 2002, which is basically treading water when you factor in inflation. And yet collectively, the figures seem to suggest that music, the creative field that has been most threatened by technological change, has become more profitable in the post-Napster era — not for the music industry, of course, but for musicians themselves. Somehow the turbulence of the last 15 years seems to have created an economy in which more people than ever are writing and performing songs for a living.

How can this be? The record industry’s collapse is real and well documented. Even after Napster shut down in 2002, music piracy continued to grow: According to the Recording Industry Association of America, 30 billion songs were illegally downloaded from 2004 to 2009. American consumers paid for only 37 percent of the music they acquired in 2009. Artists report that royalties from streaming services like Spotify or Pandora are a tiny fraction of what they used to see from traditional album sales. The global music industry peaked just before Napster’s debut, during the heyday of CD sales, when it reaped what would amount today to almost $60 billion in revenue. Now the industry worldwide reports roughly $15 billion in revenue from recorded music, a financial Armageddon even if you consider that CDs are much more expensive to produce and distribute than digital tracks. With such a steep decline, how can the average songwriter or musician be doing better in the post-Napster era? And why do there seem to be more musicians than ever?

Part of the answer is that the decline in recorded-music revenue has been accompanied by an increase in revenues from live music. In 1999, when Britney Spears ruled the airwaves, the music business took in around $10 billion in live-music revenue internationally; in 2014, live music generated almost $30 billion in revenue, according to data assembled from multiple sources by the live-music service Songkick. Starting in the early 1980s, average ticket prices for concerts closely followed the rise in overall consumer prices until the mid-1990s, when ticket prices suddenly took off: From 1997 to 2012, average ticket prices rose 150 percent, while consumer prices grew less than 100 percent. It’s elemental economics: As one good — recorded music — becomes ubiquitous, its price plummets, while another good that is by definition scarce (seeing a musician play a live performance) grows in value. Moreover, as file-sharing and iTunes and Spotify have driven down the price of music, they have also made it far easier to envelop your life with a kind of permanent soundtrack, all of which drives awareness of the musicians and encourages fans to check them out in concert. Recorded music, then, becomes a kind of marketing expense for the main event of live shows.

It’s true that most of that live-music revenue is captured by superstar acts like Taylor Swift or the Rolling Stones. In 1982, the musical 1-percenters took in only 26 percent of the total revenues generated by live music; in 2003, they captured 56 percent of the market, with the top 5 percent of musicians capturing almost 90 percent of live revenues. But this winner-takes-all trend seems to have preceded the digital revolution; most 1-percenters achieved their gains in the ’80s and early ’90s, as the concert business matured into a promotional machine oriented around marquee world tours. In the post-Napster era, there seems to have been a swing back in a more egalitarian direction. According to one source, the top 100 tours of 2000 captured 90 percent of all revenue, while today the top 100 capture only 43 percent.

The growth of live music isn’t great news for the Brian Wilsons of the world, artists who would prefer to cloister themselves in the studio, endlessly tinkering with the recording process in pursuit of a masterpiece. The new economics of the post-Napster era are certainly skewed toward artists who like to perform in public. But we should remember one other factor here that is often forgotten. The same technological forces that have driven down the price of recorded music have had a similar effect on the cost of making an album in the first place. We easily forget how expensive it was to produce and distribute albums in the pre-Napster era. In a 2014 keynote speech at an Australian music conference, the indie producer and musician Steve Albini observed: “When I started playing in bands in the ’70s and ’80s, most bands went through their entire life cycle without so much as a note of their music ever being recorded.” Today, musicians can have software that emulates the sound of Abbey Road Studios on their laptops for a few thousand dollars. Distributing music around the world — a process that once required an immense global corporation or complex regional distribution deals — can now be performed by the artist herself while sitting in a Starbucks, simply through the act of uploading a file.

The vast machinery of promoters and shippers and manufacturers and A&R executives that sprouted in the middle of the 20th century, fueled by the profits of those high-margin vinyl records and CDs, has largely withered away. What remains is a more direct relationship between the musicians and their fans. That new relationship has its own demands: the constant touring and self-promotion, the Kickstarter campaigns that have raised $153 million to date for music-related projects, the drudgery that inevitably accompanies a life without handlers. But the economic trends suggest that the benefits are outweighing the costs. More people are choosing to make a career as a musician or a songwriter than they did in the glory days of Tower Records.

Of the big four creative industries (music, television, movies and books), music turns out to be the business that has seen the most conspicuous turmoil: None of the other three has seen anywhere near the cratering of recorded-music revenues. The O.E.S. numbers show that writers and actors each saw their income increase by about 50 percent, well above the national average. According to the Association of American Publishers, total revenues in the fiction and nonfiction book industry were up 17 percent from 2008 to 2014, following the introduction of the Kindle in late 2007. Global television revenues have been projected to grow by 24 percent from 2012 to 2017. For actors and directors and screenwriters, the explosion of long-form television narratives has created a huge number of job opportunities. (Economic Modeling Specialists International reports that the number of self-employed actors has grown by 45 percent since 2001.) If you were a television actor looking for work on a multiseason drama or comedy in 2001, there were only a handful of potential employers: the big four networks and HBO and Showtime. Today there are Netflix, Amazon, AMC, Syfy, FX and many others.

What about the economics of quality? Perhaps there are more musicians than ever, and the writers have collectively gotten a raise, but if the market is only rewarding bubble-gum pop and “50 Shades of Grey” sequels, there’s a problem. I think we can take it as a given that television is exempt from this concern: Shows like “Game of Thrones,” “Orange Is the New Black,” “Breaking Bad” and so on confirm that we are living through a golden age of TV narrative. But are the other forms thriving artistically to the same degree?

Look at Hollywood, and at first blush the picture is deeply depressing. More than half of the highest-grossing movies of 2014 were either superhero films or sequels; it’s clearly much harder to make a major-studio movie today that doesn’t involve vampires, wizards or Marvel characters. This has led a number of commentators and filmmakers to publish eulogies for the classic midbudget picture. “Back in the 1980s and 1990s,” Jason Bailey wrote on Flavorwire, “it was possible to finance — either independently or via the studio system — midbudget films (anywhere from $5 million to $60 million) with an adult sensibility. But slowly, quietly, over roughly the decade and a half since the turn of the century, the paradigm shifted.” Movies like “Blue Velvet,” “Do the Right Thing” or “Pulp Fiction” that succeeded two or three decades ago, the story goes, would have had a much harder time in the current climate. Steven Soderbergh apparently felt so strongly about the shifting environment that he abandoned theatrical moviemaking altogether last year.

Is Bailey’s criticism really correct? If you make a great midbudget film in 2015, is the marketplace less likely to reward your efforts than it was 15 years ago? And has it become harder to make such a film? Cinematic quality is obviously more difficult to measure than profits or employment levels, but we can attempt an estimate of artistic achievement through the Rotten Tomatoes rankings, which aggregate critics’ reviews for movies. Based on my analysis, using data on box-office receipts and budgets from IMDB, I looked at films from 1999 and 2013 that met three criteria. First, they were original creations or adaptations, not based on existing franchises, and were intended largely for an adult audience; second, they had a budget below $80 million; and third, they were highly praised by the critics, as defined by their Rotten Tomatoes score — in other words, the best of the cinematic midlist. In 1999, the most highly rated films meeting these criteria included “Three Kings,” “Being John Malkovich,” “American Beauty” and “Election.” The 2013 list included “12 Years a Slave,” “Her,” “Zero Dark Thirty,” “American Hustle” and “Nebraska.” In adjusted dollars, the class of 1999 brought in roughly $430 million at the box office. But the 2013 group took in about $20 million more. True, individual years can be misleading: All it takes is one monster hit to skew the numbers. But if you look at the blended average over a three-year window, there is still no evidence of decline. The 30 most highly rated midbudget films of 1999 to 2001 took in $1.5 billion at the domestic box office, adjusted for inflation; the class of 2011 to 2013 took in the exact same amount. Then as now, if you make a small or midsize movie that rates on the Top 10 lists of most critics, you’ll average roughly $50 million at the box office.
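For readers who want the shape of that analysis, here is a sketch of the filtering logic in Python. The sample records, field names, thresholds, and inflation factor are illustrative stand-ins, not the author's actual dataset or code.

# Sketch of the midlist filter: original or adapted films aimed at adults,
# budget under $80 million, strong Rotten Tomatoes score. The two sample
# records are invented placeholders.

films = [
    {"title": "Film A", "year": 1999, "franchise": False, "adult": True,
     "budget_m": 15, "rt_score": 92, "box_office_m": 130},
    {"title": "Film B", "year": 1999, "franchise": True, "adult": False,
     "budget_m": 120, "rt_score": 88, "box_office_m": 400},
    # ... one record per film, built from IMDB budgets/receipts and RT scores
]

def is_midlist(film, budget_cap_m=80, min_score=85):
    return (not film["franchise"] and film["adult"]
            and film["budget_m"] < budget_cap_m
            and film["rt_score"] >= min_score)

def class_box_office(year, cpi_factor=1.0):
    # cpi_factor converts that year's dollars to present-day dollars.
    return sum(film["box_office_m"] * cpi_factor
               for film in films
               if film["year"] == year and is_midlist(film))

print(class_box_office(1999, cpi_factor=1.4))  # 1.4 roughly maps 1999 to 2015 dollars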

The critics are right that big Hollywood studios have abandoned the production of artistically challenging films, part of a broader trend since the 1990s of producing fewer films over all. (From 2006 to 2011, the combined output of major Hollywood studios declined by 25 percent.) And yet the total number of pictures released in the United States — nearly 600 in 2011 — remains high. A recent entertainment research report, The Sky Is Rising, notes that most of that growth has come from independent production companies, often financed by wealthy individuals from outside the traditional studio system. “Her,” “12 Years a Slave,” “Dallas Buyers Club,” “American Hustle” and “The Wolf of Wall Street” were all funded by major indies, though they usually relied on distribution deals with Hollywood studios. At the same time, of course, some of the slack in adventurous filmmaking has been taken up by the television networks. If Francis Ford Coppola were making his “Godfather” trilogy today, he might well end up at HBO or AMC, with a hundred hours of narrative at his disposal, instead of 10.

How have high-quality books fared in the digital economy? If you write an exceptional novel or biography today, are you more or less likely to hit the best-seller list than you might have in the pre-Kindle age? Here the pessimists might have a case, based on my analysis. Every year, editors at The New York Times Book Review select the 100 notable books of the year. In 2004 and 2005, the years before the first Kindles were released, those books spent a combined 2,781 weeks on The Times’s best-seller list and the American Booksellers Association’s IndieBound list, which tracks sales in independent bookstores. In 2013 and 2014, the notable books spent 2,531 weeks on the best-seller lists — a decline of 9 percent. When you look at the two lists separately, the story becomes more complicated still. The critical successes of 2013 and 2014 actually spent 6 percent more weeks on the A.B.A. list, but 30 percent fewer weeks on the broader Times list. The numbers seem to suggest that the market for books may be evolving into two distinct systems. Critically successful works seem to be finding their audience more easily among indie-bookstore shoppers, even as the mainstream market has been trending toward a winner-takes-all sweepstakes.

This would be even more troubling if independent bookstores — traditional champions of the literary novel and thoughtful nonfiction — were on life support. But contrary to all expectations, these stores have been thriving. After hitting a low in 2007, decimated not only by the Internet but also by the rise of big-box chains like Borders and Barnes & Noble, indie bookstores have been growing at a steady clip, with their number up 35 percent (from 1,651 in 2009 to 2,227 in 2015); by many reports, 2014 was their most financially successful year in recent memory. Indie bookstores account for only about 10 percent of overall book sales, but they have a vastly disproportionate impact on the sale of the creative midlist books that are so vital to the health of the culture.

How do we explain the evolutionary niche that indie bookstores seem to have found in recent years? It may be as simple as the tactile appeal of books and bookstores themselves. After several years of huge growth, e-book sales have plateaued over the past two years at 25 to 30 percent of the market, a sign that a healthy consumer appetite for print remains. To many of us, buying music in physical form is now simply an inconvenience: schlepping those CDs home, ripping them and loading the tracks onto our mobile devices. But many of the most ardent Kindle converts — and I count myself among them — still enjoy browsing shelves of physical books, picking them up and sitting back on the couch with them. The trend might also reflect the social dimension of book culture: If you’re looking for literary community, you head out to the weekly reading series at the indie bookstore and buy something while you’re there. (Arguably, it’s the same phenomenon that happened with music, only with a twist. If you’re looking for musical community, you don’t go out on a CD-buying binge. You go to a show instead.)

All these numbers, of course, only hint at whether our digital economy rewards quality. Or — even better than that milquetoast word ‘‘quality’’ — at whether it rewards experimentation, boundary-pushing, satire, the real drivers of new creative work. It could be that our smartphone distractions and Kardashian celebrity culture have slowly but steadily lowered our critical standards, the aesthetic version of inflation: The critics might like certain films and books today because they’re surrounded by such a vast wasteland of mediocrity, but if you had released them 15 years ago, they would have paled beside the masterpieces of that era. But if you scan the titles, it is hard to see an obvious decline. A marketplace that rewarded ‘‘American Beauty,’’ ‘‘The Corrections’’ or ‘‘In the Heart of the Sea’’ doesn’t seem glaringly more sophisticated than one that rewards ‘‘12 Years a Slave,’’ ‘‘The Flamethrowers’’ or ‘‘The Sixth Extinction.’’

If you believe the data, then one question remains. Why have the more pessimistic predictions not come to pass? One incontrovertible reason is that — contrary to the justifiable fears of a decade ago — people will still pay for creative works. The Napsterization of culture turned out to be less of a threat to prices than it initially appeared. Consumers spend less for recorded music, but more for live. Most American households pay for television content, a revenue stream that for all practical purposes didn’t exist 40 years ago. Average movie-ticket prices continue to rise. For interesting reasons, book piracy hasn’t taken off the way it did with music. And a whole new creative industry — video games — has arisen to become as lucrative as Hollywood. American households in 2013 spent 4.9 percent of their income on entertainment, the exact same percentage they spent in 2000.

At the same time, there are now more ways to buy creative work, thanks to the proliferation of content-delivery platforms. Practically every device consumers own is tempting them at all hours with new films or songs or shows to purchase. Virtually no one bought anything on their computer just 20 years ago; the idea of using a phone to buy and read a 700-page book about a blind girl in occupied France would have sounded like a joke even 10 years ago. But today, our phones sell us every form of media imaginable; our TVs charge us for video-on-demand products; our car stereos urge us to sign up for SiriusXM.

And just as there are more avenues for consumers to pay for creative work, there are more ways to be compensated for making that work. Think of that signature flourish of 2000s-era television artistry: the exquisitely curated (and usually obscure) song that signals the transition from final shot to the rolling credits. Having a track featured during the credits of ‘‘Girls’’ or ‘‘Breaking Bad’’ or ‘‘True Blood’’ can be worth hundreds of thousands of dollars to a songwriter. (Before that era, the idea of licensing a popular song for the credits of a television series was almost unheard-of.) Video-game budgets pay for actors, composers, writers and song licenses. There are YouTube videos generating ad revenue and Amazon Kindle Singles earning royalties, not to mention those emerging studios (like Netflix and Yahoo) that are spending significant dollars on high-quality video. Filmmakers alone have raised more than $290 million on Kickstarter for their creations. Musicians are supplementing their income with instrument lessons on YouTube. All of these outlets are potential sources of revenue for the creative class, and all of them are creatures of the post-Napster era. The Future of Music Coalition recently published a list of all the revenue streams available to musicians today, everything from sheet-music sales at concerts to vinyl-album sales. The coalition came up with 46 distinct sources, 13 of which — including YouTube partner revenue and ringtone royalties — were nonexistent 15 years ago, and six of which, including film and television licensing, have greatly expanded in the digital age.

The biggest change of all, perhaps, is the ease with which art can be made and distributed. The cost of consuming culture may have declined, though not as much as we feared. But the cost of producing it has dropped far more drastically. Authors are writing and publishing novels for a global audience without ever requiring the services of a printing press or an international distributor. For indie filmmakers, a helicopter aerial shot that would have cost tens of thousands of dollars a few years ago can now be captured with a GoPro and a drone for under $1,000; some directors are shooting entire HD-quality films on their iPhones. Apple’s editing software, Final Cut Pro X, costs $299 and has been used to edit Oscar-winning films. A musician running software from Native Instruments can recreate, with astonishing fidelity, the sound of a Steinway grand piano played in a Vienna concert hall, or hundreds of different guitar-amplifier sounds, or the Mellotron proto-synthesizer that the Beatles used on ‘‘Strawberry Fields Forever.’’ These sounds could have cost millions to assemble 15 years ago; today, you can have all of them for a few thousand dollars.

From the bird’s-eye perspective, it may not look as though all that much has changed in terms of the livelihoods of the creative class. On the whole, creators seem to be making slightly more money, and their ranks are growing at a slow but steady pace. I suspect the profound change lies at the boundaries of professionalism. It has never been easier to start making money from creative work, for a passion to make the critical leap from pure hobby to part-time income source. Write a novel or record an album, and you can get it online and available for purchase right away, without having to persuade an editor or an A&R executive that your work is commercially viable. From the consumer’s perspective, blurring the boundaries has an obvious benefit: It widens the pool of potential talent. But it also has an important social merit. Widening the pool means that more people are earning income by doing what they love.

These new careers — collaborating on an indie-movie soundtrack with a musician across the Atlantic, uploading a music video to YouTube that you shot yourself on a smartphone — require a kind of entrepreneurial energy that some creators may lack. The new environment may well select for artists who are particularly adept at inventing new career paths rather than single-mindedly focusing on their craft. There are certainly pockets of the creative world, like those critically acclaimed books dropping off the mainstream best-seller lists, where the story is discouraging. And even the positive trends shouldn’t be mistaken for a defense of the status quo. Most full-time artists barely make enough money to pay the bills, and so if we have levers to pull that will send more income their way — whether government grants, Kickstarter campaigns or higher fees for the music we stream — by all means we should pull them.

But the fact that creative workers deserve to make more money doesn’t mean that economic or technological trends are undermining their livelihoods. If anything, the trends are making creative livelihoods more achievable. Contrary to Lars Ulrich’s fear in 2000, the ‘‘diverse voices of the artists’’ are still with us, and they seem to be multiplying. The song remains the same, and there are more of us singing it for a living.
http://www.nytimes.com/2015/08/23/ma...hat-wasnt.html

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

August 15th, August 8th, August 1st, July 25th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black