|29-05-19, 10:03 AM||#1|
Join Date: May 2001
Location: New England
Peer-To-Peer News - The Week In Review - June 1st, ’19
June 1st, 2019
YTS and YIFY to Be Dragged to Court by Filmmakers
• YTS and YIFY under fire again, as they are too big for the filmmaking industry to ignore.
• The plaintiffs demand the seizure of the domains, as well as colossal damage compensation.
• For the time being, the operators of the websites remain anonymous.
A coalition of seven movie production studios and copyright holders is after the operators of YTS and YIFY, the widely popular movie torrent websites. The complaint was submitted to the US District Court of Hawaii and demands that Cloudflare reveal the identities of the people behind the two websites. Moreover, the plaintiffs call for the seizure of the associated domains, an injunction that would prevent hosting companies and search engines from facilitating access to the websites, and damages of $150,000 for each pirated movie found on the platforms.
The original YIFY/YTS website succumbed to immense legal pressure from copyright holders back in 2015, but the brand name continued to live on through other domains and numerous supporting proxies. YTS.am and yifymovies.is are the two main standard-bearers of the hugely popular movie piracy brand, managing to surpass the success of even the legendary Pirate Bay torrent platform. While the sites seem to operate independently from each other, they use the same logos and often release titles simultaneously, so the plaintiffs believe that they act in concert.
However, the filmmakers are not only after the website operators but several of their users as well, also targeting two Hawaiians who downloaded copies of their movies through the aforementioned platforms. As we saw last week, the same court signed off on a DMCA subpoena against individual pirates, so the selection of this particular court was not random. Already, the defendants have invoked the confusing nature of the websites, claiming that they mistakenly believed they were legitimately downloading the movies from an official distribution channel.
The movies represented in the complaint, through their respective owners, are “Hunter Killer”, “Boyka: Undisputed”, “I Feel Pretty”, “The Hitman’s Bodyguard”, “Mechanic: Resurrection”, “Singularity”, and “Once Upon a Time in Venice”. So far, there has been no injunctive action from the court, and both of the targeted websites remain online and operational. The same applies to the operators of the two platforms, who remain anonymous thus far. However, with the plaintiffs’ demand for a jury trial almost certain to be approved by the Hawaii court, the situation is about to change soon.
New AI-Powered Anti-Piracy System Soon to be Deployed by MCU
• A union of Russian companies is ready to deploy a new AI piracy detecting tool.
• Its developers claim that the system will soon require no human reviewers.
• Machine learning piracy detectors are the future in the fight against piracy.
The Media Communication Union (MCU) has presented a new anti-piracy system that is expected to enter full operation this upcoming July. The innovative element of this new system is the fact that it’s able to learn through the utilization of special neural networks, making gradually fewer mistakes in its copyright infringement reports, and requiring much less contribution from human curators. As the MCU is a union of telecom and media companies in Russia, this machine learning anti-piracy system will be deployed in the country to fight piracy and all illicit activities that accompany it.
In order for this new tool to “learn”, MCU’s members created a humongous database that contains links to copyright infringing material. This way, the tool will learn what constitutes infringement based on the actual reports and findings of the people who were doing this job manually in the past few years. MCU’s tool will modify this database live, adding new entries and finding all pirating links in a matter of minutes, if not less. The main benefit from this is that search engines that operate in Russia can connect to this live blacklist and remove the links from their search results as often as every five minutes.
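The de-indexing step described above can be pictured as a simple periodic sweep: search engines fetch the live blacklist and drop any matching results. The sketch below is purely illustrative — the data shapes, names, and five-minute interval are assumptions based on this description; MCU's actual feed format is not public.

```python
# Hypothetical sketch of the live-blacklist de-indexing step.
# All names and data shapes here are assumptions, not MCU's real API.

POLL_INTERVAL_SECONDS = 300  # "as often as every five minutes"

def deindex(search_index: dict, blacklist: set) -> dict:
    """Return a copy of the index with blacklisted URLs removed
    from every query's result list."""
    return {
        query: [url for url in urls if url not in blacklist]
        for query, urls in search_index.items()
    }

# Example: one query whose results include a blacklisted link
index = {"watch movie online": ["https://pirate.example/film",
                                "https://legit.example/film"]}
cleaned = deindex(index, {"https://pirate.example/film"})
# the query now maps only to the legitimate link
```

In practice the engine would re-fetch the blacklist every `POLL_INTERVAL_SECONDS` and re-run this filter, which is what lets removals propagate within minutes rather than waiting on manual takedown requests.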
Right now, the database contains about 300,000 links that the system has deemed “copyright infringing domains”. With the neural network boost, the database could quickly be populated with many more pirate links, even for a complicated website that requires in-depth analysis of its contents. That doesn’t mean that all of the results will definitely be accurate, as accuracy can only come gradually through the system’s continuous training over the next couple of months. The developers are hopeful that in the end, the system will be fully capable of operating independently, requiring no human intervention.
The main benefit of this and similar automated anti-piracy systems is that they can handle the pirate links that pop up like mushrooms after rain during live sports events. As NBA and UFC representatives discussed earlier this month, playing Whac-A-Mole with pirate links has grown into an asymmetrical battle that can’t be handled efficiently by human employees, no matter the size of the team reviewing the links. Systems like the one MCU presented, combined with live search engine de-indexing, are the only way to deal with the problem.
New Report Says Music Piracy On The Rise With 17 Million Stream-Rippers In 2018
Findings from market research firm MusicWatch’s Annual Music Study show that there were 17 million stream-rippers in the U.S. last year. That number is up 2 million from the 15 million stream-rippers reported in 2017.
For those unacquainted with the term “stream-ripping,” it refers to the type of music piracy where users transform a file from a streaming site like YouTube or Spotify into a downloadable copy.
In the age of streaming, the reasoning behind music piracy has adapted to technological changes, diverging from the Napster era. Although some streaming services like Spotify have a free tier, piracy enables users to access songs offline. An estimated 46% of stream-rippers pirate music for that reason, while 37% simply want to own songs they don’t like enough to buy.
“Streaming, and easy, efficient access to music was supposed to have solved many of the issues around unsanctioned sharing and piracy,” the report said. “Unfortunately a segment of music fans continue to acquire music in unsanctioned forms. Legacy forms of piracy through P2P file sharing applications has faded, but the use of websites and apps that facilitate the downloading of music licensed only for streaming is thriving.”
As of January, popular stream-ripping apps include YouTube Downloader, Free YouTube to Mp3 Converter, and Mp3 Video Converter. A significant number of users find stream-ripping services through app stores or search engines. Half of stream-rippers use YouTube to listen to music and watch music videos.
A big percentage of users are reportedly stream-ripping in bulk. “The top 30 percent of streamrippers are copying 112 files, on average – the equivalent of more than 10 full music albums,” according to the report.
But stream-rippers don’t pirate for lack of resources. MusicWatch’s demographic statistics peg them as largely well-educated, well-off, and white collar. An estimated 48% have a household income between $75,000 and $199,000, and 34% are between the ages of 25 and 34.
The MusicWatch report also points out that stream-rippers are likely to consume other forms of entertainment, amplifying the risk of piracy for films, video games, and television. “If they’ll pirate music they’ll likely also take movies, TV shows and other forms of intellectual property,” the report stated. “Discouraging stream-ripping isn’t just good for music; it’s good for the entire entertainment ecosystem.”
Not as Many Students Admit to Pirating their Favorite TV Shows as You’d Think!
Students are often viewed as the ones who will pirate TV shows due to cost. However, not as many are willing to admit to it as you would think, even in anonymous surveys!
Game of Thrones has come to an end and is known as one of the most pirated TV shows ever. Yet, it still gained almost 20 million viewers for its series finale across all networks on debut night – more when you count the DVRs and reruns the following day/week. While it may be the most pirated show, students aren’t admitting to it.
And it’s not just Game of Thrones. In a survey by Grand Canyon University, which also looked at the favorite TV shows for students in every state (The Office and Brooklyn Nine-Nine are unsurprisingly among the favorites), only 3% of students actually admitted to pirating TV shows.
This is certainly due to the rise of streaming services. Almost half of all students use Netflix to watch their favorite TV shows. Cable/satellite services and Hulu are the next favorites. Just a tiny fraction pirate – well, at least, admit to pirating!
However, it’s worth pointing out that only 4% of students use Amazon Prime, 3% have HBO Now and HBO Go, 2% say they use online streaming sites and 1% use Roku (and likely other similar devices). The numbers don’t quite add up to me considering the popularity of the likes of Game of Thrones, which is only on HBO (or Amazon Prime/Hulu with their Channels options).
Of course, it’s possible that a university has partnered with HBO to offer the streaming service for free. Then again, it’s also possible that the students are piggybacking off their parents’ services, which is definitely a viable option.
Poland has Filed a Complaint Against the European Union’s Copyright Directive
The directive was approved in April, and goes into force in June
Poland has officially challenged the European Union’s recently-approved controversial copyright directive, according to Reuters, saying that the legislation would bring unwanted censorship. The country filed its complaint yesterday with the Court of Justice of the European Union.
Poland’s Deputy Foreign Minister Konrad Szymanski said that the “system may result in adopting regulations that are analogous to preventive censorship, which is forbidden not only in the Polish constitution but also in the EU treaties.” Polish MEPs predominantly rejected the measure (two abstentions, eight for, 33 against, six no-votes, and two missing) when it was voted on.
The Council of the European Union officially approved the directive in April, and it goes into force on June 7th, 2019. Following that action, EU member states will have until June 7th, 2021 to produce their own laws to implement it. The legislation is designed to update copyright law, and contains a number of controversial clauses, such as Article 11, the so-called “link tax,” which will allow publishers to charge platforms such as Google to display news stories, and Article 13, which says that platforms would be liable for content that infringes on someone’s copyright.
Users of platforms such as Facebook, Google, YouTube, Wikipedia, and others fear that the directive could be detrimental to how they use those sites — content platforms currently aren’t liable for what they host, provided they make the effort to remove anything that infringes on someone’s copyright, like music or pirated movies. Sites would now have to proactively ensure that copyrighted content isn’t making it onto their platforms. As my colleagues James Vincent and Russell Brandom noted last year, sites might have to resort to implementing a filter, which “would be ripe for abuse by copyright trolls and would make millions of mistakes. The technology simply doesn’t exist to scan the internet’s content in this way.”
Sofar Sounds House Concerts Raises $25M, But Bands Get Just $100
Tired of noisy music venues where you can hardly see the stage? Sofar Sounds puts on concerts in people’s living rooms where fans pay $15 to $30 to sit silently on the floor and truly listen. Nearly 1 million guests have attended Sofar’s more than 20,000 gigs. Having attended a half dozen of the shows, I can say they’re blissful… unless you’re a musician trying to make a living. In some cases, Sofar pays just $100 per band for a 25-minute set, which can work out to just $8 per musician per hour or less. Hosts get nothing, and Sofar keeps the rest, which can range from $1,100 to $1,600 or more per gig — many times what each performer takes home. The argument was that bands got exposure, and it was a tiny startup far from profitability.
Today, Sofar Sounds announced it’s raised a $25 million round led by Battery Ventures and Union Square Ventures, building on the previous $6 million it’d scored from Octopus Ventures and Virgin Group. The goal is expansion — to become the de facto way emerging artists play outside of traditional venues. The 10-year-old startup was born in London out of frustration with pub-goers talking over the bands. Now it’s throwing 600 shows per month across 430 cities around the world, and more than 40 of the 25,000 artists who’ve played its gigs have gone on to be nominated for or win Grammys. The startup has enriched culture by offering an alternative to late-night, dark and dirty club shows that don’t appeal to hard-working professionals or older listeners.
But it’s also entrenching a long-standing problem: the underpayment of musicians. With streaming replacing higher-priced CDs, musicians depend on live performances to earn a living. Sofar is now institutionalizing that they should be paid less than what gas and dinner costs a band. And if Sofar sucks in attendees that might otherwise attend normal venues or independently organized house shows, it could make it tougher for artists to get paid enough there too. That doesn’t seem fair, given how small Sofar’s overhead is.
By comparison, Sofar makes Uber look downright generous. A source who’s worked with Sofar tells me the company keeps a lean team of full-time employees who focus on reserving venues, booking artists and promotion. All the volunteers who actually put on the shows aren’t paid, and neither are the venue hosts, though at least Sofar pays for insurance. The startup has previously declined to pay first-time Sofar performers, instead providing them a “high-quality” video recording of their gig. When it does pay $100 per act, that often amounts to a tiny shred of the total ticket sales.
“Sofar, however, seems to be just fine with leaving out the most integral part: paying the musicians,” writes musician Joshua McClain. “This is where they willingly step onto the same stage as companies like Uber or Lyft — savvy middle-men tech start-ups, with powerful marketing muscle, not-so-delicately wedging themselves in-between the customer and merchant (audience and musician in this case). In this model, everything but the service-provider is put first: growth, profitability, share-holders, marketers, convenience, and audience members — all at the cost of the hardworking people that actually provide the service.” He’s urged people to #BoycottSofarSounds.
A deeply reported KQED expose by Emma Silvers found many bands were disappointed with the payouts, and didn’t even know Sofar was a for-profit company. “I think they talk a lot about supporting local artists, but what they’re actually doing is perpetuating the idea that it’s okay for musicians to get paid shit,” Oakland singer-songwriter Madeline Kenney told KQED.
Sofar CEO Jim Lucchese, who previously ran Spotify’s Creator division after selling the company his music data startup The Echo Nest, and who has played Sofar shows himself, declares that “$100 for a showcase slot is definitely fair,” but admits that “I don’t think playing a Sofar right now is the right move for every type of artist.” He stresses that some Sofar shows, especially in international markets, are pay-what-you-want and artists keep “the majority of the money.” The rare sponsored shows with outside corporate funding, like one for the Bohemian Rhapsody film premiere, can see artists earn up to $1,500, but these are a tiny fraction of Sofar’s concerts.
Otherwise, Lucchese says, “the ability to convert fans is one of the most magical things about Sofar,” referencing how artists rely on asking attendees to buy their merchandise or tickets for their full shows and follow them on social media to earn money. He claims that if you pull out what Sofar pays for venue insurance, performing rights organizations and its full-time labor, “a little over half the take goes to the artists.” Unfortunately that makes it sound like Sofar’s few costs of operation are the musicians’ concern. As McClain wrote, “First off, your profitability isn’t my problem.”
Now that it has ample funding, I hope to see Sofar double down on paying artists a fair rate for their time and expenses. Luckily, Lucchese says that’s part of the plan for the funding. Beyond building tools to help local teams organize more shows to meet rampant demand, he says “Am I satisfied that this is the only revenue we make artists right now? Absolutely not. We want to invest more on the artist side.” That includes better ways for bands to connect with attendees and turn them into monetizable fans. Even just a better followup email with Instagram handles and upcoming tour dates could help.
We don’t expect most craftspeople to work for “exposure.” Interjecting a middleman like Sofar shouldn’t change that. The company has a chance to increase live music listening worldwide. But it must treat artists as partners, not just some raw material they can burn through even if there’s always another act desperate for attention. Otherwise musicians and the empathetic fans who follow them might leave Sofar’s living rooms empty.
Metadata is the Biggest Little Problem Plaguing the Music Industry
It’s a crisis that has left, by some estimations, billions on the table unpaid to musicians
Recently, a musician signed to a major indie label told me they were owed up to $40,000 in song royalties they would never be able to collect. It wasn’t that they had missed out on payments for a single song — it was that they had missed out on payments for 70 songs, going back at least six years.
The problem, they said, was metadata. In the music world, metadata most commonly refers to the song credits you see on services like Spotify or Apple Music, but it also includes all the underlying information tied to a released song or album, including titles, songwriter and producer names, the publisher(s), the record label, and more. That information needs to be synchronized across all kinds of industry databases to make sure that when you play a song, the right people are identified and paid. And often, they aren’t.
Metadata sounds like one of the smallest, most boring things in music. But as it turns out, it’s one of the most important, complex, and broken, leaving many musicians unable to get paid for their work. “Every second that goes by and it’s not fixed, I’m dripping pennies,” said the musician, who asked to remain anonymous because of “the repercussions of even mentioning that this type of thing happens.”
Entering the correct information about a song sounds like it should be easy enough, but metadata problems have plagued the music industry for decades. Not only are there no standards for how music metadata is collected or displayed, there’s no need to verify the accuracy of a song’s metadata before it gets released, and there’s no one place where music metadata is stored. Instead, fractions of that data are kept in hundreds of different places across the world.
As a result, the problem is way bigger than a name being misspelled when you click a song’s credits on Spotify. Missing, bad, or inconsistent song metadata is a crisis that has left, by some estimations, billions on the table that never gets paid to the artists who earned that money. And as the amount of music created and consumed continues to increase at a faster pace, it’s only going to get messier.
It’s critical that metadata is distributed and entered accurately, not just for a song or album’s discoverability, but because metadata helps direct money to all the folks who made that music when a song is played, purchased, or licensed. Documenting everyone’s work is also important because, “That attribution could be how someone gets their next gig,” says Joshua Jackson, who leads business development for Jaxsta, an Australian company that authenticates music information.
There are multiple ways this process can go awry. The first is that, because there’s no standardized format for metadata, information often gets discarded or entered incorrectly as it’s written down or moved between people and databases.
A label’s database is likely different from Spotify’s database, which is likely different from the databases of critical collection societies, like ASCAP and BMI, which pay public performance royalties to musicians. “Part of the problem is the fields everyone has chosen to write into their software to populate these credits are all different,” says entertainment lawyer Jeff Becker of Swanson, Martin & Bell. “So if a credit is sent to a database that says ‘Pro Tools engineer,’ but that database doesn’t have that field, they either choose to change it or ignore it altogether. Typically they ignore it, and that credit has nowhere to go.”
Each database has its own set of rules. If Ariana Grande, Nicki Minaj, and Jessie J collaborated on a new track, and it was delivered to Apple Music with all of their names in the same artist field, that would cause what Apple Music and Spotify call a “compound artist error.” Entering an artist’s name as “last name, first name” would also result in a rejection. There are ways to embed metadata in a song file to ensure everything travels together, but distributors generally request that it be removed since it can cause “issues with the upload.”
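Those delivery rules amount to a small validation pass over each metadata field. The sketch below is purely illustrative — the error strings and regular expressions are mine, not Apple Music's or Spotify's actual specifications:

```python
# Illustrative validation of a single artist-name field, modeled on the
# rejection rules described above. Not an official platform spec.
import re

def validate_artist_field(artist: str) -> list[str]:
    """Return delivery errors for one artist-name field."""
    # "Last, First" ordering: exactly two comma-separated single words
    if re.fullmatch(r"[\w.'-]+,\s*[\w.'-]+", artist):
        return ["rejected: 'last name, first name' ordering"]
    # Multiple artists crammed into one field -> compound artist error
    if re.search(r",|&|\bfeat\.", artist, re.IGNORECASE):
        return ["compound artist error: deliver one artist per field"]
    return []

# A clean field passes; a shared field or reversed name is rejected
validate_artist_field("Ariana Grande")                         # no errors
validate_artist_field("Ariana Grande, Nicki Minaj & Jessie J") # compound
validate_artist_field("Grande, Ariana")                        # ordering
```

Because every database runs its own version of checks like these, the same credit can be accepted in one system and silently rejected or mangled in another, which is exactly how credits end up with "nowhere to go."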
The second big problem is that the information being entered in the first place is frequently wrong. A song can pass through multiple songwriters, producers, and engineers before it gets released by an artist, and every new contributor adds the potential to screw things up. The longer the chain of custody for the data, the greater chance a portion of it will be incorrect. A songwriter could fat-finger a name inside one of these databases, or a producer who briefly worked on the track could be left out, or a faulty merge between two databases could cause a technical error that erases information.
Even on one song, metadata can get complicated in ways you might not expect. In a guest post for HypeBot, Annie Lin, senior corporate counsel at Twitch, uses Katy Perry’s “Firework” to show how messy a song’s data can be. Capitol Records owns the recording for “Firework,” but five different songwriters with five different music publishers own percentages of the composition rights, and all their information needs to be included in the metadata so they can get credited and paid.
Having this many people working on one track is not uncommon, says Niclas Molinder, founder of music metadata company Auddly (now Session). In 2016, the average hit song had over four songwriters and six publishers. That creates a lot of opportunity for metadata to be submitted incorrectly. And if someone’s credit is missing, spelled wrong, or doesn’t match a streaming platform’s style guide, that can muck up payments for everyone involved. All these little errors add up. It’s estimated that as much as 25 percent of royalty payments aren’t paid to publishers at all, or are paid to the wrong entity.
“You may get your data correct in your database,” Molinder says, “but if you don’t get the others’ 100 percent correct too, and if they don’t get yours, no one gets paid.”
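The stakes of those ownership splits are easy to sketch: unless the recorded shares of a composition account for exactly 100 percent, the payment chain stalls for everyone on the track. The names and percentages below are hypothetical, in the spirit of the "Firework" example above:

```python
# Hypothetical composition splits; real splits vary per song and contract.

def splits_are_payable(splits: dict[str, float]) -> bool:
    """A song's writer/publisher shares must sum to exactly 100%
    before every party can be credited and paid."""
    return abs(sum(splits.values()) - 100.0) < 1e-6

# Five writers, five publishers, shares fully accounted for
complete = {"Writer A": 30.0, "Writer B": 25.0, "Writer C": 20.0,
            "Writer D": 15.0, "Writer E": 10.0}

# Same song with one producer's credit dropped by a faulty database merge
missing_credit = {"Writer A": 30.0, "Writer B": 25.0, "Writer C": 20.0,
                  "Writer D": 15.0}
```

In the first case payouts can flow; in the second, 10 percent of the composition is unaccounted for, and depending on the collection society's rules, the whole song's royalties may sit unclaimed until someone notices.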
In an ideal world, once a song is finished, the metadata would be crafted by the artist or the artist’s producer, and they would submit that data to the record label, distributor, or publisher(s) involved for verification and distribution. In reality, the process is frequently more rushed and haphazard — artists and labels hurry the process along in order to get songs out, and metadata is frequently cleaned up later as mistakes are noticed. ”A lot of these credits and negotiations don’t happen on a single piece of paper, and also happen after the fact,” says Joe Conyers III, co-founder of digital rights management platform Songtrust.
It’s possible to correct metadata errors afterward, but that’s reliant on someone catching that error and then correcting it in every database where it appears. Even if it does get fixed, that doesn’t mean an artist gets all the payments they’re due — every company and collection society has different rules about how long they hold on to unclaimed royalties. The musician who was owed $40,000 missed out because a glitch between two databases removed many of his credits. It wasn’t the musician’s fault, but too much time had gone by before anyone noticed. The companies involved declined to pay him.
“We take it for granted that we can look up movie or TV credits on IMDb and see everything, down to production assistants,” says Jackson, who recently hosted a standing-room-only panel on metadata at the Music Biz 2019 conference in Nashville. “But the changes to music metadata and the standards are so slow.”
Having a centralized database and set standards for music metadata — Jackson’s idea of an IMDb for music — sounds like a straightforward goal, but getting there has stumped many of music’s largest and most powerful entities for decades. There are many reasons for this, but the tectonic shift to streaming is a major contributor. “There was not only an explosion in the number of releases, but the unbundling of the album,” says Vickie Nauman, consultant for music tech firm CrossBorderWorks. “We went from 100,000 physical albums released in a year to 25,000 digital songs uploaded a day to the streaming services.”
Additionally, songs are now being consumed and monetized in many different ways that weren’t available just decades ago. “If you think back to when people primarily bought CDs, the only version of a major song that mattered was the major song itself,” says Simon Dennett, chief product officer at Kobalt. Today, a major hit could have hundreds of different versions, like remixes, covers, sample packs, YouTube lyric videos, recordings in other languages, and more, all of which can, in total, generate “trillions and trillions of transactions” that each bring in fractions of a cent. “The volume of data that now has to be managed has unfolded into a massive problem,” Dennett says.
Not only is there way more content to catalog, music rights are very fragmented to begin with, and so slices of a song’s metadata are often kept across a variety of databases. Labels, publishers, collection societies, and others all maintain their own databases, none of which come close to having all of the information about all the works that exist in the music industry. (To see how truly complicated music data is, here’s a horrifying flow chart from The Music Maze and an explainer from Sonicbids on how to track down song ownership, which ends with “consider paying for research.”)
The creation of a global centralized database for song metadata has been attempted multiple times, but has always ended in failure. Among the numerous reasons: in-fighting between different arms of the music industry, international governance challenges, reluctance to share information, and funding issues. There are other, more practical roadblocks as well, like varying languages, differing copyright laws, and music industry cultures and traditions across the globe, which are often at odds with each other.
There isn’t much agreement on if any particular arm of the music industry should lead the way or be responsible for fixing music metadata. Some think digital music distribution companies like TuneCore or DistroKid could do more to educate artists, as it’s often an artist’s only touchpoint before their music is live on streaming platforms. Others think the streaming platforms themselves could set an example for better metadata by displaying more credits, which would encourage everyone involved to make sure the data is right. Some, like Jackson, suggest educating songwriters and producers to keep metadata records at the point of creation. “I imagine in the long term that’s only going to make all our jobs a lot easier, when we’re getting this [metadata] from the source as early as possible,” Jackson says.
But a lot of artists don’t even know they should care about metadata, or that possible metadata issues could be affecting their paychecks, because royalties are so complicated. One Grammy-nominated artist I talked to said, “Honestly, I wouldn’t even know where to look to find out.” Lots of startups are trying to make artists more aware of metadata, but it’s an uphill battle. Splits, a free mobile app, lets artists create a digital agreement that manages a song’s collaborators and their percentages of ownership. There’s also Creator Credits, a technology that works within music production software Pro Tools to embed song credits within the Pro Tools files themselves.
What everyone does agree on is that while things are starting to get slightly better, there’s a long way to go. “I remember putting things out on TuneCore, and it didn’t ask you for any metadata. Maybe a song title and that’s it,” says Doug Mitchell, director of customer success at music tech firm Exactuals. “Now it asks you for more info like genre. As the stores are displaying more metadata, then [TuneCore] asks for that information. That’s a start.”
Although the idea of crafting centralized and standardized metadata is daunting, many say it’s not something to give up on. Aside from cleaning up record-keeping errors, it would help prevent other musicians from “dripping pennies,” and connect them with the money they’re due. “The process of taking hugely dispersed geographic data, hugely dispersed ownership data, and hugely erratic data quality, and pushing that together into a coherent aggregated global view is a challenging, but incredibly noble mission,” says Dennett. Conyers III puts it even more simply: “It’s a good dream.”
In Baltimore and Beyond, a Stolen N.S.A. Tool Wreaks Havoc
Nicole Perlroth and Scott Shane
For nearly three weeks, Baltimore has struggled with a cyberattack by digital extortionists that has frozen thousands of computers, shut down email and disrupted real estate sales, water bills, health alerts and many other services.
But here is what frustrated city employees and residents do not know: A key component of the malware that cybercriminals used in the attack was developed at taxpayer expense a short drive down the Baltimore-Washington Parkway at the National Security Agency, according to security experts briefed on the case.
Since 2017, when the N.S.A. lost control of the tool, EternalBlue, it has been picked up by state hackers in North Korea, Russia and, more recently, China, to cut a path of destruction around the world, leaving billions of dollars in damage. But over the past year, the cyberweapon has boomeranged back and is now showing up in the N.S.A.’s own backyard.
It is not just in Baltimore. Security experts say EternalBlue attacks have reached a high, and cybercriminals are zeroing in on vulnerable American towns and cities, from Pennsylvania to Texas, paralyzing local governments and driving up costs.
The N.S.A. connection to the attacks on American cities has not been previously reported, in part because the agency has refused to discuss or even acknowledge the loss of its cyberweapon, dumped online in April 2017 by a still-unidentified group calling itself the Shadow Brokers. Years later, the agency and the Federal Bureau of Investigation still do not know whether the Shadow Brokers are foreign spies or disgruntled insiders.
Thomas Rid, a cybersecurity expert at Johns Hopkins University, called the Shadow Brokers episode “the most destructive and costly N.S.A. breach in history,” more damaging than the better-known leak in 2013 from Edward Snowden, the former N.S.A. contractor.
“The government has refused to take responsibility, or even to answer the most basic questions,” Mr. Rid said. “Congressional oversight appears to be failing. The American people deserve an answer.”
The N.S.A. and F.B.I. declined to comment.
Since that leak, foreign intelligence agencies and rogue actors have used EternalBlue to spread malware that has paralyzed hospitals, airports, rail and shipping operators, A.T.M.s and factories that produce critical vaccines. Now the tool is hitting the United States where it is most vulnerable, in local governments with aging digital infrastructure and fewer resources to defend themselves.
Before it leaked, EternalBlue was one of the most useful exploits in the N.S.A.’s cyberarsenal. According to three former N.S.A. operators who spoke on the condition of anonymity, analysts spent almost a year finding a flaw in Microsoft’s software and writing the code to target it. Initially, they referred to it as EternalBluescreen because it often crashed computers — a risk that could tip off their targets. But it went on to become a reliable tool used in countless intelligence-gathering and counterterrorism missions.
EternalBlue was so valuable, former N.S.A. employees said, that the agency never seriously considered alerting Microsoft about the vulnerabilities, and held on to it for more than five years before the breach forced its hand.
The Baltimore attack, on May 7, was a classic ransomware assault. City workers’ screens suddenly locked, and a message in flawed English demanded about $100,000 in Bitcoin to free their files: “We’ve watching you for days,” said the message, obtained by The Baltimore Sun. “We won’t talk more, all we know is MONEY! Hurry up!”
Today, Baltimore remains handicapped as city officials refuse to pay, though workarounds have restored some services. Without EternalBlue, the damage would not have been so vast, experts said. The tool exploits a vulnerability in unpatched software that allows hackers to spread their malware faster and farther than they otherwise could.
North Korea was the first nation to co-opt the tool, for an attack in 2017 — called WannaCry — that paralyzed the British health care system, German railroads and some 200,000 organizations around the world. Next was Russia, which used the weapon in an attack — called NotPetya — that was aimed at Ukraine but spread across major companies doing business in the country. The assault cost FedEx more than $400 million and Merck, the pharmaceutical giant, $670 million.
The damage didn’t stop there. In the past year, the same Russian hackers who targeted the 2016 American presidential election used EternalBlue to compromise hotel Wi-Fi networks. Iranian hackers have used it to spread ransomware and hack airlines in the Middle East, according to researchers at the security firms Symantec and FireEye.
“It’s incredible that a tool which was used by intelligence services is now publicly available and so widely used,” said Vikram Thakur, Symantec’s director of security response.
One month before the Shadow Brokers began dumping the agency’s tools online in 2017, the N.S.A. — aware of the breach — reached out to Microsoft and other tech companies to inform them of their software flaws. Microsoft released a patch, but hundreds of thousands of computers worldwide remain unprotected.
Hackers seem to have found a sweet spot in Baltimore, Allentown, Pa., San Antonio and other American local governments, where public employees oversee tangled networks that often use out-of-date software. Last July, the Department of Homeland Security issued a dire warning that state and local governments were getting hit by particularly destructive malware that now, security researchers say, has started relying on EternalBlue to spread.
Microsoft, which tracks the use of EternalBlue, would not name the cities and towns affected, citing customer privacy. But other experts briefed on the attacks in Baltimore, Allentown and San Antonio confirmed the hackers used EternalBlue. Security responders said they were seeing EternalBlue pop up in attacks almost every day.
Amit Serper, head of security research at Cybereason, said his firm had responded to EternalBlue attacks at three different American universities, and found vulnerable servers in major cities like Dallas, Los Angeles and New York.
The costs can be hard for local governments to bear. The Allentown attack, in February last year, disrupted city services for weeks and cost about $1 million to remedy — plus another $420,000 a year for new defenses, said Matthew Leibert, the city’s chief information officer.
He described the package of dangerous computer code that hit Allentown as “commodity malware,” sold on the dark web and used by criminals who don’t have specific targets in mind. “There are warehouses of kids overseas firing off phishing emails,” Mr. Leibert said, like thugs shooting military-grade weapons at random targets.
The malware that hit San Antonio last September infected a computer inside the Bexar County sheriff’s office and tried to spread across the network using EternalBlue, according to two people briefed on the attack.
This past week, researchers at the security firm Palo Alto Networks discovered that a Chinese state group, Emissary Panda, had hacked into Middle Eastern governments using EternalBlue.
“You can’t hope that once the initial wave of attacks is over, it will go away,” said Jen Miller-Osborn, a deputy director of threat intelligence at Palo Alto Networks. “We expect EternalBlue will be used almost forever, because if attackers find a system that isn’t patched, it is so useful.”
Until a decade or so ago, the most powerful cyberweapons belonged almost exclusively to intelligence agencies — N.S.A. officials used the term “NOBUS,” for “nobody but us,” for vulnerabilities only the agency had the sophistication to exploit. But that advantage has hugely eroded, not only because of the leaks, but because anyone can grab a cyberweapon’s code once it’s used in the wild.
Some F.B.I. and Homeland Security officials, speaking privately, said more accountability at the N.S.A. was needed. A former F.B.I. official likened the situation to a government failing to lock up a warehouse of automatic weapons.
In an interview in March, Adm. Michael S. Rogers, who was director of the N.S.A. during the Shadow Brokers leak, suggested in unusually candid remarks that the agency should not be blamed for the long trail of damage.
“If Toyota makes pickup trucks and someone takes a pickup truck, welds an explosive device onto the front, crashes it through a perimeter and into a crowd of people, is that Toyota’s responsibility?” he asked. “The N.S.A. wrote an exploit that was never designed to do what was done.”
At Microsoft’s headquarters in Redmond, Wash., where thousands of security engineers have found themselves on the front lines of these attacks, executives reject that analogy.
“I disagree completely,” said Tom Burt, the corporate vice president of consumer trust, insisting that cyberweapons could not be compared to pickup trucks. “These exploits are developed and kept secret by governments for the express purpose of using them as weapons or espionage tools. They’re inherently dangerous. When someone takes that, they’re not strapping a bomb to it. It’s already a bomb.”
Brad Smith, Microsoft’s president, has called for a “Digital Geneva Convention” to govern cyberspace, including a pledge by governments to report vulnerabilities to vendors, rather than keeping them secret to exploit for espionage or attacks.
Last year, Microsoft, along with Google and Facebook, joined 50 countries in signing on to a similar call by French President Emmanuel Macron — the Paris Call for Trust and Security in Cyberspace — to end “malicious cyber activities in peacetime.”
Notably absent from the signatories were the world’s most aggressive cyberactors: China, Iran, Israel, North Korea, Russia — and the United States.
Exclusive: Behind Grindr's Doomed Hookup in China, a Data Misstep and Scramble to Make Up
Echo Wang, Carl O'Donnell
Early last year, Grindr LLC’s Chinese owner gave some Beijing-based engineers access to personal information of millions of Americans such as private messages and HIV status, according to eight former employees, prompting U.S. officials to ask it to sell the dating app for the gay community.
After taking full control of Grindr in January 2018, Beijing Kunlun Tech Co Ltd stepped up management changes and consolidated operations to cut costs and expand operations in Asia, one former employee familiar with the decision said.
In the process, some of the company’s engineers in Beijing got access to the Grindr database for several months, eight former employees said.
While it is known that data privacy concerns prompted the crackdown on Kunlun, interviews with over a dozen sources with knowledge of Grindr’s operations, including the former employees, for the first time shed light on what the company actually did to draw U.S. ire and how it then tried to save its deal.
Reuters found no evidence that the app’s database was misused. Nevertheless, the decision to give its engineers in Beijing access to Grindr’s database proved to be a misstep for Kunlun, one of the largest Chinese mobile gaming companies.
In early 2018, the Committee on Foreign Investment in the United States (CFIUS), a government panel that scrutinizes foreign acquisitions of U.S. companies, started looking into the Grindr deal to see whether it raised any national security risks, one source close to the company said.
Last September, it ordered Kunlun to restrict access of its Beijing-based engineers to Grindr’s database, the source said.
Kunlun did not respond to requests for comment. A Treasury spokesman declined to comment on behalf of CFIUS.
A Grindr spokeswoman said “the privacy and security of our users’ personal data is and always will be a top priority.”
DATA PRIVACY FOCUS
Two former national security officials said the acquisition heightened U.S. fears about the potential of data misuse at a time of tense China-U.S. relations. CFIUS has increased its focus on safety of personal data. In the last two years, it blocked Chinese companies from buying money transfer company MoneyGram International Inc and mobile marketing firm AppLovin.
Based in West Hollywood, California, Grindr is especially popular among gay men and has about 4.5 million daily active users. CFIUS likely worried that Grindr’s database may include compromising information about personnel who work in areas such as military or intelligence and that it could end up in the hands of the Chinese government, the former officials said.
“CFIUS operates under the assumption that, whether through legal or political means, Chinese intelligence agencies could readily access information held by private Chinese companies if they wanted to,” said Rod Hunter, an attorney at Baker & McKenzie LLP who managed CFIUS reviews during President George W. Bush’s administration.
In a faxed statement to Reuters, China’s foreign ministry said it was aware of the situation with Grindr and urged the United States to allow fair competition and not politicize economic issues.
“The Chinese government always encourages Chinese companies to conduct economic and trade cooperation overseas in accordance with international rules and local laws,” it said.
Kunlun first acquired 60% of Grindr in 2016 for $93 million, amid a wave of acquisitions of U.S. technology companies by Chinese firms. At the time CFIUS focused on traditional national security concerns, such as the use of technology for potential military applications, the former U.S. security officials said.
Submissions of deals to CFIUS for review were entirely voluntary then, and Kunlun did not think it needed to submit its purchase of Grindr because it was convinced the deal posed no national security risk, two sources close to the company said.
After that deal was completed Kunlun tasked engineers in Beijing to improve the app, former employees said. The team worked out of the second floor of Ming Yang International Center, Kunlun’s 11-story headquarters east of the Palace Museum in Beijing, one former employee said.
At first, they did not have access to Grindr’s database, six former employees said. But that changed when Kunlun bought out the remainder of Grindr for $152 million, and the dating app’s founder and CEO, Joel Simkhai, left.
Kunlun shifted a significant portion of Grindr’s operations to Beijing, seven former employees said. Some outside contractors ended their work, and most of Grindr’s U.S. engineers were subsequently let go or resigned, they said.
Some U.S. employees who learned that the database access had been given to colleagues in China raised concerns about privacy with management, but they were told that they should not worry, two former employees said.
About a month after CFIUS’ September order, Kunlun told the panel the Beijing team’s access to Grindr’s database had been restricted, the source close to the company said.
Grindr also hired a cyber forensic firm and a third-party auditor at CFIUS’s behest to report on its compliance and to make sure the data was secure, the source said.
Kunlun started to operationally separate Grindr as well, making Grindr Beijing a different legal entity, transferring some Chinese employees from Kunlun to Grindr, and finding separate office space for Grindr in Beijing, former employees said.
Reuters could not determine what triggered CFIUS’ initial concerns about the Grindr deal, or whether Kunlun’s steps were directly aimed at allaying the panel’s fears.
By February, Kunlun had decided to shut down Grindr’s Beijing office, parting ways with some of the roughly two dozen employees there, two former employees said.
It told them the decision was taken because of policy reasons and concerns about data privacy, they said.
In March, Reuters first reported that CFIUS had asked Kunlun to divest Grindr.
Behind the scenes, the source close to the company said, Kunlun kept trying to salvage the Grindr deal until as recently as last week, when it said it would sell it by June next year.
Reporting by Echo Wang and Carl O'Donnell in New York; Additional reporting by Stella Qiu and Liangping Gao in Beijing; Editing by Greg Roumeliotis and Paritosh Bansal
Snapchat Employees Abused Data Access to Spy on Users
Multiple sources and emails also describe SnapLion, an internal tool used by various departments to access Snapchat user data.
Several departments inside social media giant Snap have dedicated tools for accessing user data, and multiple employees have abused their privileged access to spy on Snapchat users, Motherboard has learned.
Two former employees said multiple Snap employees abused their access to Snapchat user data several years ago. Those sources, as well as an additional two former employees, a current employee, and a cache of internal company emails obtained by Motherboard, described internal tools that allowed Snap employees at the time to access user data, including in some cases location information, their own saved Snaps and personal information such as phone numbers and email addresses. Snaps are photos or videos that, if not saved, typically disappear after being received (or after 24 hours if posted to a user's Story).
Motherboard granted multiple sources in this story anonymity to speak candidly about internal Snap processes.
Although Snap has introduced strict access controls to user data and, according to several sources, takes abuse and user privacy very seriously, the news highlights something that many users may forget: behind the products we use every day there are people with access to highly sensitive customer data, who need it to perform essential work on the service. But, without proper protections in place, those same people may abuse it to spy on users' private information or profiles.
One of the internal tools that can access user data is called SnapLion, according to multiple sources and the emails. The tool was originally used to gather information on users in response to valid law enforcement requests, such as a court order or subpoena, two former employees said. Both of the sources said SnapLion is a play on words with the common acronym for law enforcement officer LEO, with one of them adding it is a reference to the cartoon character Leo the Lion. Snap's "Spam and Abuse" team has access, according to one of the former employees, and a current employee suggested the tool is used to combat bullying or harassment on the platform by other users. An internal Snap email obtained by Motherboard says a department called "Customer Ops" has access to SnapLion. Security staff also have access, according to the current employee. The existence of this tool has not been previously reported.
SnapLion provides "the keys to the kingdom," one of the former employees who described the abuse of accessing user data said.
Many of Snapchat's 186 million users turn to the app in part because of the ephemerality of the videos and photos users send to one another. Users may not be aware of the sort of data that Snapchat can store, however. In 2014, the Federal Trade Commission fined Snapchat for failing to disclose that the company collected, stored, and transmitted geolocation data.
Snap's publicly available guide to law enforcement for requesting information about users elaborates on the sort of data available from the company, including the phone number linked to an account; the user's location data (such as when the user has turned on that setting on their phone and enabled location services on Snapchat); their message metadata, which may show who they spoke to and when; and in some cases limited Snap content, such as the user's "Memories," which are saved versions of their usually ephemeral Snaps, as well as other photos or videos the user backs up.
An internal email obtained by Motherboard shows a Snap employee legitimately using SnapLion to look up the email address linked to an account in a non-law enforcement context, and a second email shows how the tool can be used in investigations against child abuse.
Tools like SnapLion are an industry standard in the tech world, as companies need to be able to access user data for various legitimate purposes. Although Snap said it has several tools that the company uses to help with customer reports, comply with laws, and to enforce the network's terms and policies, employees have used data access processes for illegitimate reasons to spy on users, according to two former employees.
One of the former employees said that data access abuse occurred "a few times" at Snap. That source and another former employee specified the abuse was carried out by multiple individuals. A Snapchat email obtained by Motherboard also shows employees broadly discussing the issue of insider threats and access to data, and how they need to be combatted.
Motherboard was unable to verify exactly how the data abuse occurred, or what specific system or process the employees leveraged to access Snapchat user data.
A Snap spokesperson wrote in an emailed statement “Protecting privacy is paramount at Snap. We keep very little user data, and we have robust policies and controls to limit internal access to the data we do have. Unauthorized access of any kind is a clear violation of the company's standards of business conduct and, if detected, results in immediate termination."
When asked if abuse ever took place, one former senior information security Snap employee said, "I can't comment but we had good systems early on, actually most likely earlier than any startup in existence." The former senior employee did not deny employees abused their data access, and stopped responding to messages asking whether abuse occurred.
"Logging isn't perfect."
One of the former employees believed that, a number of years ago, SnapLion did not have a satisfactory level of logging to track what data employees accessed. Logging, generally speaking, is when a company tracks who uses a system and what data they access, to make sure it is being used appropriately. The company then implemented more monitoring, the former employee added. Snap said it currently monitors access to user data.
"Logging isn't perfect," the second former employee who described the data access abuse said.
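The access logging described above can be illustrated with a minimal sketch. This is purely hypothetical code to show the general pattern of auditing internal data lookups; the function names, identifiers, and log format are illustrative assumptions, not Snap's actual tooling.

```python
import functools
import logging

audit_log = logging.getLogger("audit")

def audited(action):
    """Decorator that records who accessed which user's data before
    running a lookup. A hypothetical sketch of access auditing."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(employee_id, user_id, *args, **kwargs):
            # Every privileged lookup leaves a trail: who, what, on whom.
            audit_log.info("employee=%s action=%s target_user=%s",
                           employee_id, action, user_id)
            return fn(employee_id, user_id, *args, **kwargs)
        return inner
    return wrap

@audited("lookup_email")
def lookup_email(employee_id, user_id):
    # Stand-in for a real database query.
    return f"user-{user_id}@example.com"
```

The point of such a layer is that abuse becomes detectable after the fact: an auditor can ask which employees looked up a given user, and why. Without it, as the former employees describe, privileged access leaves no trace.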
Snap said it limits internal access to tools to only those who require it, but SnapLion is no longer a tool purely intended to help law enforcement. It is now used more generally across the company. A former employee who worked with SnapLion said the tool is used for resetting passwords of hacked accounts and "other user administration."
One current employee emphasized the company's strides for user privacy, and two former employees stressed the controls Snap has in place for protecting user privacy. Snap introduced end-to-end encryption in January of this year.
Insiders leveraging their access to data for illegitimate purposes happens across the tech industry. Last year, Motherboard reported that Facebook has fired multiple employees for using their privileged access to user data to stalk exes. Uber showed off at parties its so-called 'God View' mode, which displays the real-time location of real users and drivers, and Uber employees used internal systems to spy on ex-partners, politicians, and celebrities.
"For the normal user, they need to understand that anything they're doing that is not encrypted is, at some point, available to humans," Alex Stamos, the former chief information security officer at Facebook and now a Stanford adjunct professor, said in a phone call, talking about the threat of malicious insiders at tech giants in general.
"It's not exceptionally rare," Stamos added, referring to insider data access abuse.
Leonie Tanczer, a lecturer in International Security and Emerging Technologies at University College London, said in an online chat this episode "really resonates with the idea that one should not perceive companies as monolithic entities but rather set together by individuals all who have flaws and biases of their own. Thus, it is important that access to data is strictly regulated internally and that there are proper oversights and checks and balances needed."
Additional reporting by Lorenzo Franceschi-Bicchierai.
Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get
A newly revealed patent application filed by Amazon is raising privacy concerns over an envisaged upgrade to the company's smart speaker systems. This change would mean that, by default, the devices end up listening to and recording everything you say in their presence.
Alexa, Amazon's virtual assistant system that runs on the company's Echo series of smart speakers, works by listening out for a 'wakeword' that tells the device to turn on its extended speech recognition systems in order to respond to spoken commands.
On Amazon's devices, the wakeword is 'Alexa', but similar systems control how Apple devices work ('Hey Siri') and also Google's ('Hey Google'), not to mention products from other tech companies.
In theory, Alexa-enabled devices will only record what you say directly after the wakeword, which is then uploaded to Amazon, where remote servers use speech recognition to deduce your meaning, then relay commands back to your local speaker.
But one issue in this flow of events, as Amazon's recently revealed patent application argues, is that anything you say before the wakeword isn't actually heard.
"A user may not always structure a spoken command in the form of a wakeword followed by a command (eg. 'Alexa, play some music')," the Amazon authors explain in their patent application, which was filed back in January, but only became public last week.
"Instead, a user may include the command before the wakeword (eg. 'Play some music, Alexa') or even insert the wakeword in the middle of a command (eg. 'Play some music, Alexa, the Beatles please'). While such phrasings may be natural for a user, current speech processing systems are not configured to handle commands that are not preceded by a wakeword."
To overcome this barrier, Amazon is proposing an effective workaround: simply record everything the user says all the time, and figure it out later.
Rather than only record what is said after the wakeword is spoken, the system described in the patent application would effectively continuously record all speech, then look for instances of commands issued by a person.
"The [proposed] system is configured to capture speech that precedes and/or follows a wakeword," the application explains, "such that the speech associated with the command and wakeword can be included together and considered part of a single utterance that may be processed by a system."
It's actually a clever idea, similar, as others have noted, to Apple's introduction of its Live Photos feature in the iPhone in 2015.
In that implementation, as soon as you open the iPhone's Camera app, the camera starts surreptitiously filming footage, even before you hit the shutter button icon to take your photo.
In fact, even once you've hit the shutter button, the camera keeps recording, and you ultimately end up with a mini movie (aka 'Live Photo') that extends for a moment on either side of the still image you manually snapped.
The proposed Alexa upgrade – which isn't necessarily something Amazon will ever roll out in its products – brings the same kind of thinking to recorded audio, ostensibly just so it never misunderstands you when you say something like, "Play some music, Alexa, the Beatles please".
It's worth noting, too, that the envisaged feature wouldn't send everything it records to Amazon's servers.
In the patent application, the authors explain that your Echo device would only ever record between 10 and 30 seconds of audio at a time, before wiping it from the local memory buffer and recording a new 10 to 30 seconds of audio over it (again and again).
In each of these 10–30 second recordings, the device would continuously scan looking for commands involving the wakeword, and if it didn't find any, they'd get deleted forever – in theory, at least.
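Conceptually, the buffering scheme the patent describes boils down to a fixed-size ring buffer: new audio frames continuously overwrite the oldest ones, and the whole window is only kept if a wakeword turns up inside it. Here is a minimal sketch of that idea; the class, frame sizes, and detector interface are illustrative assumptions, not Amazon's implementation.

```python
from collections import deque

class PreRollBuffer:
    """Keep only the last `seconds` of audio frames, overwriting old ones.

    A sketch of the ring-buffer scheme described in the patent: speech
    both before and after the wakeword survives, but nothing older than
    the window ever persists.
    """
    def __init__(self, seconds=10, frames_per_second=50):
        # deque with maxlen silently drops the oldest frame when full.
        self.frames = deque(maxlen=seconds * frames_per_second)

    def push(self, frame):
        self.frames.append(frame)

    def flush_if_wakeword(self, detector):
        # If any buffered frame triggers the detector, hand the whole
        # window to the next processing stage; otherwise discard it all.
        window = list(self.frames)
        self.frames.clear()
        return window if any(detector(f) for f in window) else None
```

In this sketch, a command like "Play some music, Alexa" would be captured whole, because the words spoken before "Alexa" are still sitting in the buffer when the wakeword is detected; a window with no wakeword is simply thrown away, matching the deletion behavior the patent describes.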
But because of the potential privacy implications of having a device that records you all the time, it's understandable that some people might not be thrilled about what this patent application represents, especially since Amazon has a mixed track record with Alexa recording things it wasn't ever supposed to.
As for whether this always-recording feature will ever see the light of day, it's too early to tell right now, and Amazon isn't giving anything away.
"The technology in this patent is not in use, and referring to the potential use of patents is highly speculative," a spokesperson for the company told Engadget.
"Like many companies, we file a number of forward-looking patent applications that explore new scientific ideas that may not make it into customer-facing products."
If you want to read more about the patent, the full listing is here.
The Future of AT&T is an Ad-Tracking Nightmare Hellworld
Everything you watch, everywhere you go
There’s a long, excellent profile of the new AT&T and its CEO Randall Stephenson in Fortune today, which you should read. AT&T has transformed itself into a media colossus by buying Time Warner, and understanding how the company plans to use its incredible array of content from HBO, CNN, TNT, and others in combination with its huge distribution networks across mobile broadband, DirecTV, and U-verse is important for anyone who cares about tech, media, or both. Seriously, go read it.
Here’s the part I want you to pay attention to: two quick paragraphs describing how AT&T sees the future of advertising across those media properties and networks. It’s the same plan AT&T has laid out before, but it’s more specific now, and that specificity makes it chilling. I’ve bolded the scary part:
“Say you and your neighbor are both DirecTV customers and you’re watching the same live program at the same time,” says Brian Lesser, who oversees the vast data-crunching operation that supports this kind of advertising at AT&T. “We can now dynamically change the advertising. Maybe your neighbor’s in the market for a vacation, so they get a vacation ad. You’re in the market for a car, you get a car ad. If you’re watching on your phone, and you’re not at home, we can customize that and maybe you get an ad specific to a car retailer in that location.”
Such targeting has caused privacy headaches for Yahoo, Google, and Facebook, of course. That’s why AT&T requires that customers give permission for use of their data; like those other companies, it anonymizes that data and groups it into audiences—for example, consumers likely to be shopping for a pickup truck—rather than targeting specific individuals. Regardless of how you see a directed car ad, say, AT&T can then use geolocation data from your phone to see if you went to a dealership and possibly use data from the automaker to see if you signed up for a test-drive—and then tell the automaker, “Here’s the specific ROI on that advertising,” says Lesser. AT&T claims marketers are paying four times the usual rate for that kind of advertising.
So, yeah. This is a terrifying vision of permanent surveillance.
In order to make this work, AT&T would have to:
• Own the video services you’re watching so it can dynamically place targeted ads in your streams
• Collect and maintain a dataset of your personal information and interests so it can determine when it should target this car ad to you
• Know when you’re watching something so it can actually target the ads
• Track your location using your phone and combine it with the ad-targeting data to see if you visit a dealership after you see the ads
• Collect even more data about you from the dealership to determine if you took a test-drive
• Do all of this tracking and data collection repeatedly and simultaneously for every ad you see
• Aggregate all of that data in some way for salespeople to show clients and justify a 4x premium over other kinds of advertising, including the already scary-targeted ads from Google and Facebook.
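The attribution step in that pipeline, matching an ad exposure to a later dealership visit via phone geolocation, can be sketched in a few lines. This is a hypothetical illustration of the general technique; the record types, field names, radius, and time window are all assumptions, not AT&T's actual schema or thresholds.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

# Hypothetical records; field names are illustrative, not AT&T's schema.
@dataclass
class AdExposure:
    user: str
    timestamp: float      # epoch seconds when the car ad was shown

@dataclass
class LocationPing:
    user: str
    timestamp: float
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
    return 2 * 6371 * asin(sqrt(a))

def attributed_visits(exposures, pings, dealer_lat, dealer_lon,
                      radius_km=0.2, window_s=7 * 24 * 3600):
    """Count users who saw the ad and later pinged near the dealership."""
    shown = {e.user: e.timestamp for e in exposures}
    visited = set()
    for p in pings:
        t0 = shown.get(p.user)
        if (t0 is not None and 0 < p.timestamp - t0 <= window_s
                and haversine_km(p.lat, p.lon, dealer_lat, dealer_lon) <= radius_km):
            visited.add(p.user)
    return len(visited)
```

A count like this, rolled up across an audience segment, is exactly the kind of "specific ROI" number Lesser describes handing to the automaker, which is why it requires continuously joining ad logs against location history.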
If this was a story about Mark Zuckerberg and Facebook, this scheme would cause a week-long outrage cycle. It is outrageous, especially when you consider that AT&T also routinely hands over customer information to the government, is under investigation for illegally selling customer location data to shady third parties, and is generally about as protective of your data as a hotel front desk guarding a bowl of mints.
AT&T can claim up and down that it’s asked for permission to use customer information to do this, but there is simply no possible way the average customer has ever even read their AT&T contracts, let alone puzzled out that they’re signing up to be permanently tracked and influenced by targeted media in this way. People are already convinced that Facebook is secretly listening to them through their phones; if you explicitly offered them the choice of AT&T tracking everything they watch and everywhere they go, it’s a safe bet that they would say no.
In fact, this plan might sound broadly familiar to you because it is largely similar to the plan former AOL CEO Tim Armstrong proposed to Verizon when he combined AOL and Yahoo under the Oath brand at that carrier: build an ad network on the billions of page views served by Oath properties, combine that data with Verizon’s network data, and sell better-targeted ads at a premium. (I also called that plan a nightmare at the time, you’ll recall.)
You know why that never worked out? Verizon executives refused to turn over the network data, citing customer privacy. To repeat: Verizon, which aggressively tracks customers using unremovable supercookies, thought a plan like this was a step too far.
Maybe AT&T should compete with Verizon on that front, instead of putting Iron Thrones in their cellphone stores.
ISPs Must Now Ask for Permission Before Selling Your Data, Maine Rules
Internet providers will not be able to penalize those who refuse, either.
The state of Maine is taking a stand against Internet service providers (ISPs) that are monetizing customer information without consent by voting to pass a bill which will require express permission for such data harvesting.
On Thursday, the Maine Senate voted 35-0 to pass the bill, LD 946, which will require consumer consent before ISPs can sell customers' private information to third parties.
ISPs, as the gatekeepers to our Internet access, are able to collect a vast array of data on our online activities if protections such as virtual private networks (VPNs) are not in place. This may include website visits, browsing histories, location data, and usage patterns.
By building a digital profile of our activities, ISPs can then sell this information to advertisers and data brokers, who use it in targeted business campaigns and tailored advertising.
LD 946 demands that ISPs secure the "express consent" of customers before "using, disclosing, selling or permitting access to customer personal information."
One concern is that ISPs could pressure Maine customers into consenting by capping bandwidth or otherwise penalizing those who refuse. The bill specifically prohibits this, and it also bars the carrot of offering discounts in return for permission.
The bill does allow some exceptions: when consent is granted, when the data sold is not personal, when a court order has been issued, or in cases of emergency.
Under the terms of LD 946, customers are able to withdraw their permission for data gathering by their ISP at any time.
The decision builds upon earlier approval in the House, where the bill passed 96-45, and now only requires Governor Mills' signature for the proposal to become law in the state.
Maine's decision flies in the face of the US Congress's 2017 repeal of a similar nationwide rule adopted by the Federal Communications Commission (FCC), which would have ensured ISPs gained customer approval before monetizing their information. That repeal was considered a major loss for consumer privacy in the United States and also hamstrung the FCC by barring the commission from regulating the sale of ISP-gathered data in the future.
"Today, the Maine legislature did what the United States Congress has thus far failed to do and voted to put consumer privacy before corporate profits," said Oamshri Amarasingham, advocacy director at the ACLU of Maine. "Nobody should have to choose between using the Internet and protecting their own data. Lest we forget, internet providers work for us. We pay them -- a lot -- for their services, and it is outrageous that they would turn around and sell our most private information without our consent."
Apple, Google and WhatsApp Condemn UK Proposal to Eavesdrop on Encrypted Messages
• In an open letter to GCHQ (Government Communications Headquarters), 47 signatories including Apple, Google and WhatsApp have jointly urged the U.K. cybersecurity agency to abandon its plans for a so-called "ghost protocol."
• Details of the initiative were first published in an essay by two of the U.K.'s highest cybersecurity officials in November 2018.
• In practice, the proposal suggests a technique which would require encrypted messaging services — such as WhatsApp — to direct a message to a third recipient, at the same time as sending it to its intended user.
Tech giants, civil society groups and Ivy League security experts have condemned a proposal from Britain's eavesdropping agency as a "serious threat" to digital security and fundamental human rights.
In an open letter to GCHQ (Government Communications Headquarters), 47 signatories including Apple, Google and WhatsApp have jointly urged the U.K. cybersecurity agency to abandon its plans for a so-called "ghost protocol."
It comes after intelligence officials at GCHQ proposed a way in which they believed law enforcement could access end-to-end encrypted communications without undermining the privacy, security or confidence of other users.
Details of the initiative were first published in an essay by two of the U.K.'s highest cybersecurity officials in November 2018. Ian Levy, the technical director of Britain's National Cyber Security Centre, and Crispin Robinson, GCHQ's head of cryptanalysis (the technical term for codebreaking), put forward a process that would attempt to avoid breaking encryption.
The pair said it would be "relatively easy for a service provider to silently add a law enforcement participant to a group chat or call."
In practice, the proposal suggests a technique which would require encrypted messaging services — such as WhatsApp — to direct a message to a third recipient, at the same time as sending it to its intended user.
Levy and Robinson argued the proposal would be "no more intrusive than the virtual crocodile clips" currently used in wiretaps of non-encrypted communications, a reference to the physical clips once attached to analog phone lines to silently copy calls.
Opposing this plan, signatories of the open letter argued that "to achieve this result, their proposal requires two changes to systems that would seriously undermine user security and trust."
'Completely undermines' authentication process
"First, it would require service providers to surreptitiously inject a new public key into a conversation in response to a government demand. This would turn a two-way conversation into a group chat where the government is the additional participant, or add a secret government participant to an existing group chat," signatories of the open letter, which was first sent to GCHQ on May 22, said Thursday.
"Second, in order to ensure the government is added to the conversation in secret, GCHQ's proposal would require messaging apps, service providers, and operating systems to change their software so that it would 1) change the encryption schemes used, and/or 2) mislead users by suppressing the notifications that routinely appear when a new communicant joins a chat."
Apple, one of the signatories of the open letter to GCHQ, previously took a stand over data privacy in a widely publicized standoff with the FBI in 2015 and 2016.
Apple publicly opposed the FBI when it asked for access to the iPhone of the San Bernardino shooter, Syed Farook. The technology giant refused to help the FBI, citing issues of data privacy. Eventually, the FBI backed down, finding another way into the device without Apple's help.
"The overwhelming majority of users rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are, and only those people," the letter said.
"The GCHQ's ghost proposal completely undermines this trust relationship and the authentication process."
In response to the open letter, the National Cyber Security Centre's Ian Levy said: "We welcome this response to our request for thoughts on exceptional access to data — for example to stop terrorists. The hypothetical proposal was always intended as a starting point for discussion."
"We will continue to engage with interested parties and look forward to having an open discussion to reach the best solutions possible," Levy said, in an emailed statement to CNBC on Thursday.
The Books of College Libraries Are Turning Into Wallpaper
University libraries around the world are seeing precipitous declines in the use of the books on their shelves.
When Yale recently decided to relocate three-quarters of the books in its undergraduate library to create more study space, the students loudly protested. In a passionate op-ed in the Yale Daily News, one student accused the university librarian—who oversees 15 million books in Yale’s extensive library system—of failing to “understand the crucial relationship of books to education.” A sit-in, or rather a “browse-in,” was held in Bass Library to show the administration how college students still value the presence of books. Eventually the number of volumes that would remain was expanded, at the cost of reducing the number of proposed additional seats in a busy central location.
Little-noticed in this minor skirmish over the future of the library was a much bigger story about the changing relationship between college students and books. Buried in a slide deck about circulation statistics from Yale’s library was an unsettling fact: There has been a 64 percent decline in the number of books checked out by undergraduates from Bass Library over the past decade.
Yale’s experience is not at all unique—indeed, it is commonplace. University libraries across the country, and around the world, are seeing steady, and in many cases precipitous, declines in the use of the books on their shelves. The University of Virginia, one of our great public universities and an institution that openly shares detailed library circulation stats from the prior 20 years, is a good case study. College students at UVA checked out 238,000 books during the school year a decade ago; last year, that number had shrunk to just 60,000.
Before you tsk-tsk today’s kids for their lack of bookishness, note that the trend lines are sliding southward for graduate students and faculty members, too: down 61 percent and 46 percent, respectively, at UVA. Overall, across its entire network of libraries, UVA circulated 525,000 books during the 2007–08 school year, but last year there were only 188,000 loans—nearly 1,000 fewer books checked out a day. The Association of Research Libraries’ aggregated statistics show a steady decrease of the same proportion across its membership, even as student enrollment at these universities has grown substantially.
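The figures above are easy to sanity-check. A quick back-of-the-envelope calculation (using a plain 365-day year as a simplifying assumption) reproduces both the undergraduate decline and the "nearly 1,000 fewer books a day" claim:

```python
# Sanity check of the UVA circulation figures quoted above.

# Undergraduate checkouts: 238,000 a decade ago vs. 60,000 last year.
undergrad_then, undergrad_now = 238_000, 60_000
decline = (undergrad_then - undergrad_now) / undergrad_then
print(f"undergraduate decline: {decline:.0%}")      # ~75%

# Total loans across the whole library network.
total_then, total_now = 525_000, 188_000
per_day_drop = (total_then - total_now) / 365
print(f"fewer loans per day: {per_day_drop:.0f}")   # ~923, i.e. "nearly 1,000"
```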
Maybe students aren’t checking the books out but are still consulting them regularly within the library? This also does not appear to be true. Many libraries also track such in-house uses, by tallying the books that need to be reshelved, and the trends are the same. At my library at Northeastern University, undergraduate circulations declined 50 percent from 2013 to 2017—before we decided to do our own book relocation—and our logged number of books removed from shelves but not checked out also dropped by half.
These stark statistics present a conundrum for those who care about libraries and books. At the same time that books increasingly lie dormant, library spaces themselves remain vibrant—Snell Library at Northeastern now receives well over 2 million visits a year—as retreats for focused study and dynamic collaboration, and as sites of an ever wider array of activities and forms of knowledge creation and expression, including, but also well beyond, the printed word. It should come as no surprise that library leadership, in moments of dispassionate assessment often augmented by hearing from students who have trouble finding seats during busy periods, would seek to rezone areas occupied by stacks for more individual and group work. Yet it often does come as an unwelcome surprise to many, especially those with a powerful emotional attachment to what libraries should look like and be.
What’s happening here is much more complicated than an imagined zero-sum game between the defenders of books and library futurists. The decline in the use of print books at universities relates to the kinds of books we read for scholarly pursuits rather than pure pleasure, the rise of ebooks and digital articles, and the changing environment of research. And it runs contrary to the experience of public libraries and bookstores, where print continues to thrive.
Unlike most public libraries, the libraries of colleges and universities have always been filled with an incredibly wide variety of books, including works of literature and nonfiction, but also bound scientific journals and other highly specialized periodicals, detailed reference works, and government documents—different books for different purposes. Although many of these volumes stand ready for immersive, cover-to-cover reading, others await rarer and often brief consultations, as part of a larger network of knowledge. Even many monographs, carefully and slowly written by scholars, see only very sporadic consultation, and it is not uncommon for the majority of college collections to be unused for a decade or more. This is as it should be: Research libraries exist to collect and preserve knowledge for the future as well as for the present, not to house just the latest and most popular works.
But there is a difference between preservation and access, and a significant difference, often unacknowledged, in the way we read books for research instead of pleasure. As the historian Michael O’Malley humorously summarized the nature of much scholarly reading and writing, “We learn to read books and articles quickly, under pressure, for the key points or for what we can use. But we write as if a learned gentleman of leisure sits in a paneled study, savoring every word.” Or as he more vividly described the research process, academics often approach books like “sous-chefs gutting a fish.”
With the rapidly growing number of books available online, that mode of slicing and dicing has largely become digital. Where students or faculty once pulled volumes off the shelf to scan a table of contents or index, grasp a thesis by reading an introduction, check a reference, or trace a footnote, today they consult the library’s swiftly expanding ebook collection (our library’s ebook collection has multiplied tenfold over the past decade), Google Books, or Amazon’s Look Inside. With each of these clicks, a print circulation or in-house use of a book is lost. UVA’s ebook downloads totaled 1.7 million in 2016, an order of magnitude larger than e-circulations a decade ago. Our numbers at Northeastern are almost identical, as scholars have become comfortable with the use of digital books for many purposes.
I’ve seen my own book usage change over time. When I was a graduate student studying Victorian history at Yale, the university’s towering collection in Sterling Library, next door to Bass (then called Cross Campus Library), allowed me to find and leaf through relevant books easily. Now almost all of the texts I consulted for my dissertation are available online in repositories such as HathiTrust, which stores digitized books from research libraries, many of them freely available for download since they were published before 1924, the cutoff for public-domain works. If I were doing the same scholarly project today, I would likely check out only a small subset of books that I needed to pay careful attention to, and annotate others digitally in my PDF reader.
The decline in print circulation also coincides with the increasing dominance of the article over the monograph, and the availability of most articles online. In many fields, we now have the equivalent of Spotify for research: vast databases that help scholars search millions of articles and connect them—often through highly restrictive and increasingly unsustainable subscriptions, but that is another story—instantly to digital copies. (There is also a Napster for research articles, of which we shall not speak.) Very few natural and social scientists continue to consult bound volumes of journals in their field, especially issues that are more than a few years old. UVA recorded nearly 3 million e-journal downloads in 2016, a massive and growing number that is typical of most universities.
In addition, the nature of scholarship is also changing, still with significant reading and writing, of course, but also involving the use and processing of data in a wide array of disciplines. To serve these emerging needs, Northeastern University Library has added full-time specialists in data visualization and systematic review (the process of synthesizing, statistically, exhaustive research from multiple studies), and an entire division dedicated to new forms of digital scholarship.
Our research library, like many others, has also seen a surge in group work rather than the solitary pursuit of the canonical research paper. More classes are assigning team-based projects instead of individual essays, as many urgent problems, such as climate change, call for large-scale interdisciplinary work and multiple perspectives. University libraries have correspondingly seen reservations for collaboration spaces surge. Last year, we had a record 100,000 hours of group-room bookings in our library, meaning that these spaces were occupied constantly from 8 a.m. to midnight.
At the same time—and perhaps this is one of the feel-good stories related to physical collections—there is an increasing use of archives. Many students still find the direct encounter with primary sources thrilling, and instructors and library staff have found creative ways for them to use these special collections. We have doubled our archival holdings in the past five years, focusing on Boston-related materials such as our recent acquisition of millions of photographs and negatives from The Boston Globe, and have greatly expanded our program of teaching with these artifacts.
A positive way of looking at these changes is that we are witnessing a Great Sorting within the library, a matching of different kinds of scholarly uses with the right media, formats, and locations. Books that are in high demand; or that benefit from physical manifestations, such as art books and musical scores; or that are rare or require careful, full engagement, might be better off in centralized places on campus. But multiple copies of common books, those that can be consulted quickly online or are needed only once a decade, or that are now largely replaced by digital forms, can be stored off site and made available quickly on demand, which reduces costs for libraries and also allows them to more easily share books among institutions in a network. Importantly, this also closes the gap between elite institutions such as Yale and the much larger number of colleges with more modest collections.
These trends around research collections are likely to continue. A small number of regional pools of books at a monumental scale—tens of millions of books from scores of universities working together—are already envisioned in the United States, which will ensure preservation and access for future generations and effectively act as gigantic shared libraries, or what David Prosser, the executive director of Research Libraries UK, has called “collective collections.” “Print books are historical artefacts … but some are more valuable artefacts than others,” Prosser has argued. “No library can be completely universal and decisions need to be made about what to collect and where to store material. By looking at collections collectively we can better serve the needs of readers, ensuring that what we have is well looked after (and yes, sometimes that means in ‘remote-storage’).”
Unfortunately, more troubling factors are also at work in the decline of print books within colleges. Statistics show that today’s undergraduates have read fewer books before they arrive on campus than in prior decades, and just placing students in an environment with more books is unlikely to turn that around. (The time to acquire the reading bug is much earlier than freshman year.) And while correlation does not equal causation, it is all too conspicuous that we reached Peak Book in universities just before the iPhone came out. Part of this story is undoubtedly about the proliferation of electronic devices that are consuming the attention once devoted to books.
The sharp decrease in the circulation of books also obviously coincides with the Great Recession and with the steady decline of humanities majors, as students have shifted from literature, philosophy, and history to STEM disciplines—from fields centered on the book to fields that emphasize the article.
When I tweeted about this under-discussed decline in the use of print books in universities, several respondents wondered if, regardless of circulation statistics, we should keep an ample number of books in the library for their beneficial ambience. Even if books are ignored by undergraduates, maybe just having them around will indirectly contribute to learning. If books are becoming wallpaper, they are rather nice wallpaper, surrounding students with deep learning and with some helpful sound-deadening characteristics to boot. If that helps students get into the right mind-set in a quiet, contemplative space, so be it. Maybe they will be more productive, get away from their distracting devices, and perhaps serendipitously discover a book or two along the way.
You can certainly see this theory at work in new library designs in which the number of volumes is more quietly reduced than at Yale, with books lining the walls of study spaces but not jutting out perpendicularly like the old, high-capacity stacks, so as to leave most of the floor open for tables, chairs, and spaces for group work. Perhaps that is the right approach, the right compromise, for some schools and students. Of course, you can also find students who love spaces without books, or who work better with some background noise—alas, whenever you discuss these matters, all students tend to generalize from the study space that works for themselves.
But there is another future that these statistics and our nostalgic reaction to them might produce: the research library as a Disneyland of books, with banker’s lamps and never-cracked spines providing the suggestion of, but not the true interaction with, knowledge old and new. As beautiful as those libraries appear—and I, too, find myself unconsciously responding to such surroundings, having grown up studying in them—we should beware the peril of books as glorified wallpaper. The value of books, after all, is what lies beneath their covers, as lovely as those covers may be.
File-Sharing Legend “Napster” Turns 20 Years Old Today
On June 1, 1999, a new application was uploaded to the Internet. Named Napster, it was the first tool that created a file-sharing network of millions of people, something that had never been done before. Two years later that network shut down, but its impact still resonates today, two decades on.
Somewhere in the fall of 1998 a user named ‘Napster’ joined the w00w00 IRC channel, a chatroom on the EFnet network populated by a few dozen elite ‘hackers’.
‘Napster’ shared a new idea with the group. The then 17-year-old developer wanted to create a network of computers that could share files with each other. More specifically, music tracks.
To many people, including some in the IRC channel, that idea sounded absurd. At the time people could already download files from the fringes of the Internet but on a very limited scale. And even then, the choice was limited, and transfers were very unreliable.
Creating a network of hundreds, thousands, or even millions of people who would all open up their hard drives to the rest and offer up bandwidth, was something that was entirely alien. ‘Napster’, however, had a feeling that people might be interested.
This feeling was shared by another teenage computer fanatic named ‘Man0War’. The two shared ideas online and eventually decided to meet up.
That’s when Shawn Fanning (aka Napster), who got the Napster nickname for his ‘nappy’ hair, first saw Sean Parker (aka Man0War). Together, they came up with a plan to bring the idea to fruition.
Fast forward a few months and it’s June 1, 1999. What started as a distant vision was now a fully-fledged application that was ready to shake the world. The software, which carried the name of its inventor, Napster, soon found its way to millions of computers all over the world.
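Architecturally, what made Napster work was a hybrid design: a central server kept an index of who was sharing which files, while the files themselves moved directly between peers. A minimal sketch of that index (class and method names are illustrative assumptions; the real Napster protocol differed in its details) looks like this:

```python
# Sketch of Napster's centralized-index design: the server stores only
# *who has what*; downloads then happen peer-to-peer.
from collections import defaultdict

class IndexServer:
    def __init__(self):
        # filename -> set of peer addresses currently sharing it
        self.index = defaultdict(set)

    def register(self, peer, filenames):
        """A peer announces the files in its shared folder."""
        for name in filenames:
            self.index[name].add(peer)

    def unregister(self, peer):
        """A peer disconnects; drop it from every listing."""
        for peers in self.index.values():
            peers.discard(peer)

    def search(self, query):
        """Return peers sharing any file whose name contains the query."""
        return {name: sorted(peers)
                for name, peers in self.index.items() if query in name}

server = IndexServer()
server.register("peer-a:6699", ["metallica-one.mp3", "chuckd-fight.mp3"])
server.register("peer-b:6699", ["metallica-one.mp3"])
hits = server.search("metallica")
# A client would now open a direct connection to a listed peer to download.
```

The centralized index is also what made Napster legally vulnerable: one company ran the server, so one injunction could take the whole network offline, a lesson the decentralized systems that followed took to heart.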
From there, things developed quickly. After roughly three months, Napster already provided access to four million songs and in less than a year, 20 million people had downloaded the application.
What started as a simple idea quickly transformed into a multi-million dollar business. The company, which employed several people that were in the w00w00 IRC channel, changed the way millions of people enjoyed music.
For many of Napster’s users, the application represented something magical. It was a gateway for musical exploration that dwarfed even the largest record stores in town. And all for free.
Initially, the novelty concealed the fact that people were not supposed to share their music libraries with the rest of the world, but this would quickly change. Within a year, the RIAA sued Napster Inc. and soon after several artists including Metallica and Dr. Dre followed.
Like most record labels, these artists saw the file-sharing software as a threat. They felt that it would destroy the music industry, which was at its peak at the time. However, there were also more positive sounds from artists who recognized the promotional effect of Napster.
While Dr. Dre said “Fuck Napster,” Chuck D famously described it as “the new radio.”
Napster’s users were not concerned about what the labels and artists thought. They were interested in expanding their music libraries. While there are no official numbers, Napster was responsible for a significant portion of the global Internet traffic at the time.
University campuses were soon transformed into file-sharing hotspots. At some campuses over half of all bandwidth was consumed by MP3-sharing students and staff. This eventually led to a ban of the application at several universities, even before copyright issues arose.
Meanwhile, the user base swelled to a peak of more than 26.4 million users worldwide in February 2001. But despite the explosive growth and backing from investors, the small file-sharing empire couldn't overcome the legal challenges.
The RIAA case resulted in an injunction, upheld by the Ninth Circuit Court of Appeals, that forced the network to shut down in July 2001, little more than two years after Napster launched. By September that year, the case had been settled for millions of dollars.
While the Napster craze was over, file-sharing had mesmerized the masses and the cat was out of the bag. Grokster, KaZaa, Morpheus, LimeWire, and many others popped up and provided sharing alternatives, for as long as they lasted. Meanwhile, BitTorrent was also knocking on the door.
While the aforementioned software was often associated with piracy, Napster had a momentous impact on the development of legal services. People had clearly signaled that they were interested in downloading music, so the first download stores were launched, with iTunes taking the lead.
These download portals never came close to what Napster offered though. Many music fans were not interested in buying a few tracks here and there, they wanted millions of files at their fingertips, ready to be played. This included a Swedish teenager named Daniel Ek.
The Napster experience eventually triggered Ek to come up with a legal alternative that would replicate his first experience with piracy. That application was Spotify, which for its part sparked a music streaming subscription boom.
Interestingly, music streaming is now the most important source of income for the music industry. These Napster-inspired services are good for roughly half of all the music revenues worldwide, completing the circle, in a way.
Even the Napster brand, which has switched owners several times over the years, lives on as a music subscription service today; the streaming service Rhapsody rebranded itself as Napster in 2016.
Napster’s founders, meanwhile, went on to create several other successful companies.
Sean Parker is a multi-billionaire now, in part thanks to his early involvement with Facebook. Fanning, aka Napster, is not doing badly either, with a net worth of more than $100 million, much like many other members of the w00w00 IRC channel.
Until next week,
Current Week In Review
Recent WiRs -
May 25th, May 18th, May 11th, May 4th
Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.
"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public." - Hugo Black
Thanks For Sharing