P2P-Zone  

Old 21-08-19, 06:15 AM   #1
JackSpratts
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - August 24th, ’19

Since 2002

August 24th, 2019




Ion Fury Dev Suggests Fans Pirate Game After Public Outcry, Says “F*ck Politics”
Ryan Pearson

Mere hours after 3D Realms and Voidpoint apologized over the outcry caused by allegedly homophobic content in Ion Fury (and allegedly transphobic comments by the developers on Discord), Voidpoint’s social media appears to be condoning pirating the game.

In case you missed it, ResetEra and others had drawn attention to content within Ion Fury (some of which was in areas you could not access without noclip cheats) and to comments made by Voidpoint developers on Discord that they deemed offensive.

Both 3D Realms’ CEO (the publisher) and Voidpoint’s co-founders apologized, with the latter stating they would be instituting “a zero-tolerance policy for this type of language and all employees and contractors will undergo mandatory sensitivity training,” donating “$10,000 from Ion Fury’s release day proceeds to The Trevor Project,” and “patching Ion Fury ASAP to remove all unacceptable language.”

Voidpoint’s Twitter account quickly clarified this only applied to the material that had drawn ire:

“We’ve become aware of some confusion about whether Ion Fury will be censored in response or relation to recent controversy. The answer is no; we are only tweaking a rarely seen decorative sprite and removing some offensive text found outside the game world in leftover map data.”

The social media account has also been dealing with users expressing their dissatisfaction with the content being removed, claiming it is censorship. Many of Voidpoint’s replies counter that it is not censorship, and that the developers had not compromised their vision:

“We didn’t compromise our artistic integrity. We’re changing a bottle of soap that said “Ogay” on it to something less offensive. That’s it. We’re 100% with you on the “fuck censorship” standpoint but a stupid joke about gay soap isn’t the hill to die on.”

Among the replies, one in particular stands out. In reply to a user stating they would not be buying the game:

“If you want to pirate it, it’s only like 95MB. You should check it out. We worked really hard on it and it’s a cool game. Fuck politics.”

At this time, 3D Realms has not issued a statement on the above comment. We will keep you informed as we learn more.

It is also worth mentioning that back in May 2015, the European Union produced a 307-page report on “Estimating displacement rates of copyrighted content in the EU.”

The report also included research into piracy of media such as games. It found that piracy did not hinder game sales, and in some cases could even increase them:

“This positive effect of illegal downloads and streams on the sales of games may be explained by the industry being successful in converting illegal users to paying users,”

[…] For games the reason for the positive effects may be that players may get hooked to a game and access a game legally to play the game with all bonuses, at higher levels or whatever makes playing the game legally more interesting.

[…] “The overall conclusion is that for games, illegal online transactions induce more legal transactions.”

Editor’s Note: .pdf hosted by netzpolitik.org

Curiously, the report was never officially published. It was leaked by Member of the European Parliament Julia Reda on September 20th, 2017 (Editor’s Note: We here at Niche Gamer do not condone or support piracy in any shape or form).

Ion Fury is available now for Windows and Linux. It will launch in 2019 for PlayStation 4, Nintendo Switch, and Xbox One.
https://nichegamer.com/2019/08/20/io...-fck-politics/





Denuvo Mobile Game Protection Unveiled, an Anti-Piracy Solution to Prevent Cheating and Cracking

After a bad history with PC games, Denuvo claims its anti-piracy solution for mobile games won’t affect performance.
Nadeem Sarwar

Neither Irdeto nor Denuvo has yet revealed whether the anti-piracy product has found any takers

Highlights

• Denuvo’s solution aims to protect the revenue stream of developers
• It also includes a virtualisation or emulator detection feature
• Irdeto's Mobile Game Protection won’t require source code access

Denuvo – the anti-tamper technology and digital rights management tool for PC games developed by the eponymous Austrian company – is venturing into the world of mobile games. Irdeto, of which Denuvo is a subsidiary, has announced its Mobile Game Protection solution at Gamescom 2019 to protect mobile games from being tampered with. Or in simple words, to prevent hackers and malicious parties from cracking a game and then pirating it, severely affecting the revenue of developers as a result. What this essentially means is that pirated or cracked versions of paid games, in which all items are illegally unlocked by bypassing the progression system or paywall, might soon be a thing of the past.

In an official blog post, Irdeto has revealed that its Mobile Game Protection solution will tackle “cheating on mobile games and prevents hackers from debugging, reverse engineering and changing the game”. Unlike Denuvo's anti-piracy tool for PC games, its Mobile Game Protection solution won't require source code access. Instead, the anti-piracy package can directly be integrated into the final APK, which means build engineers and developers won't have to deal with another SDK or piece of code that goes with their game.

Denuvo's Mobile Game Protection solution includes anti-tamper and anti-cheat tools such as configurable protection levels, allowing developers to specify which portions of their app need protection. Moreover, it will also protect against static or dynamic manipulation of the game's code and will provide protection against reverse engineering.

Anti-debugging and root detection are part of the package as well. There is also a virtualisation alarm that will identify when a game is played on an emulated Android device, something which is not particularly rare these days and offers an unfair advantage to players exploiting the loophole.

Lastly, and more importantly, Irdeto claims that the Mobile Game Protection solution won't impact the performance of games. PC gamers have long protested against Denuvo protection, and there is a long history of Denuvo-shielded games that are plagued by performance issues. In some cases, it has been proven that the Denuvo DRM was indeed the reason behind a game's performance woes. While the assurance from Irdeto regarding mobile games is welcome, it remains to be seen how things actually turn out when Irdeto's anti-piracy solution for mobile games becomes mainstream.
https://gadgets.ndtv.com/games/news/...lution-2088435





LA-Based Brainbase Raises $3 Million for its Intellectual Property Licensing Management Tech
Jonathan Shieber

It’s been nearly a century since Walt Disney first introduced Mickey Mouse to the world. In the ensuing decades, Disney and the mythmakers of Hollywood have churned out storytelling franchises that are worth billions.

But the ways in which many of these mythmaking houses have kept track of the various characters they’ve come up with, and of the partners they work with to have those characters live on in different forms, have been almost as antiquated as Steamboat Willie’s original 1928 animation.

Seeing an opportunity to give Hollywood’s licensing back-end an upgrade, Nate Cavanaugh, Karl Johan Vallner and Nikolai Tolkatshjov formed Brainbase in 2016.

The company, which raised $3 million from Struck Capital last month, sells an intellectual property licensing management tool and operates a marketplace where would-be vendors can meet license holders to pitch ideas on new products using intellectual property.

The financing also included investments from Tectonic Capital, Bonfire Ventures, Sterling Road and Watertower Ventures.

“Intellectual property licensing is fundamentally broken, with the space dominated by a few players all dependent on legacy or homegrown infrastructure. In an environment where new brands are constantly emerging, the pervasiveness of social media enables them to become recognized on the world stage overnight,” said Adam B. Struck, founder and managing partner of Struck Capital.

Clients for the service already include Sanrio, which owns the “Hello Kitty” brand.

The company also recently made a couple of senior leadership hires which should help grow the business. Andrea Adelson, the former senior vice president of licensing at Fremantle — the production company and distributor of game shows like Family Feud, The Price Is Right and American Idol — is joining as the head of growth, and Ted Larkins, a former senior vice president and general manager of licensing agency CPLG North America, has joined as the company’s head of business development.

Brainbase also nabbed a new board member along with its financing: Ray Hatoyama, the chief executive of Hatoyama Studio and a director at the Japanese messaging company LINE.

“We are excited and thankful to have a seasoned, global group of investors, advisors, and customers supporting Brainbase’s mission,” said co-founder and CEO Nate Cavanaugh. “Our vision of building a product ecosystem for licensing management and monetization is resonating well across the industry. Our team is going to remain obsessed with building the best licensing technology platform and providing a great customer experience,” he added. “Those are ultimately the things that matter.”
https://techcrunch.com/2019/08/19/la...nagement-tech/





Eminem Publisher Sues Spotify Claiming Massive Copyright Breach, "Unconstitutional" Law

Eight Mile Style alleges "Lose Yourself" and many of the rapper's hits aren't licensed by the streamer. Now comes a suit eyeing Spotify's billions.
Eriq Gardner

Eminem's publisher Eight Mile Style has filed a major new lawsuit claiming Spotify has infringed hundreds of song copyrights and challenging the constitutionality of a recently passed music licensing law.

In a suit filed Wednesday in federal court in Nashville, Eight Mile accuses Spotify of willful copyright infringement by reproducing "Lose Yourself" and about 250 of the rapper's songs on its service to the tune of potentially billions of dollars in alleged damages. The suit also targets the Music Modernization Act, a federal law enacted last October that was intended to make life easier for tech companies and to get songwriters paid. The suit accuses Spotify, the $26 billion Stockholm-based streaming behemoth, of not living up to its obligations under the MMA, while also making a frontal attack on one of the few legislative accomplishments during the Donald Trump presidency.

According to the complaint, a copy of which was obtained by The Hollywood Reporter, Spotify has no license for Eminem's compositions, and despite streaming these works billions of times, "Spotify has not accounted to Eight Mile or paid Eight Mile for these streams but instead remitted random payments of some sort, which only purport to account for a fraction of those streams."

The suit adds that Spotify has placed "Lose Yourself" into a category called "Copyright Control," reserved for songs for which the owner is not known. Eight Mile attacks the "absurd" notion that it can't be identified as the owner of such an iconic song, which was the centerpiece of the 2002 film 8 Mile, hit No. 1 on the Billboard Hot 100 and won an Oscar for best original song. According to chart data, Eminem is among the most followed artists on Spotify with monthly listens on par with Bruno Mars, Coldplay and Taylor Swift.

Eight Mile is being represented by Richard Busch, a Nashville attorney whose appearance in this matter is notable. A decade ago, he handled a trailblazing case on behalf of the company that produced Eminem's early work. That dispute against Universal Music Group explored whether digital downloads should be treated as "licenses" or "sales" — a meaningful accounting difference that changed the economics of distributing music in the iTunes era. More recently, besides famously representing Marvin Gaye's family in its successful copyright suit over "Blurred Lines," Busch took on Spotify in a pair of cases that alleged rampant infringement. Those recently settled cases may have played some role in the passage of the Music Modernization Act by convincing Spotify and other music distributors to come to the negotiating table for new legislation.

Before the Music Modernization Act, a persistent problem for those digitally distributing music was identifying and locating the co-authors of tens of millions of copyrighted musical works. Under copyright law, Spotify could obtain a compulsory license for its mechanical reproduction of a song, but it needed to send out a "notice of intention" and make required payments. Spotify, like others, works with the Harry Fox Agency to comply, but past class action lawsuits alleged Spotify had fallen short on efforts.

The new law was meant to alleviate the difficulty of "matching" songs with their owners through a database run by a Mechanical Licensing Collective, which will grant blanket licenses beginning in 2021. At a signing ceremony, Trump was flanked by such musicians as Kid Rock and John Rich, and he talked about how the new law is good for the music community. "I've been reading about this for many years and never thought I'd be involved in it, but I got involved in it," said Trump. "They were treated very unfairly. They're not going to be treated unfairly anymore."

The Music Modernization Act was hailed by both song publishers ("We are humbled by the extraordinary progress propelled by compromise," said National Music Publishers Assn. president David Israelite at the time) as well as the Digital Media Association, the lobbying voice of the streaming industry. "The MMA will benefit the music community and create a more transparent and streamlined approach to music licensing and payment for artists," said Horacio Gutierrez, Spotify's general counsel and vp business and legal affairs, upon the law's enactment.

Streamers may have thought their copyright troubles were ending, but that sentiment might have been both optimistic and premature.

Eight Mile now alleges Spotify's attempts to serve notices of intent for Eminem's music are "untimely and ineffective" and that the streamer can't demonstrate compliance with the Music Modernization Act.

"First, by its terms, the MMA liability limitation section only applies to compositions for which the copyright owner was not known, and to previously unmatched works (compositions not previously matched with sound recordings), and not to 'matched' works for which the DMP [Digital Music Provider] knew who the copyright owner was and just committed copyright infringement," states the complaint.

In other words, Eight Mile asserts that Spotify knew exactly who owned these Eminem songs, and even if it didn't, Spotify "did not engage in the required commercially reasonable efforts to match sound recordings with the Eight Mile Compositions as required by the MMA."

Spotify will surely have its own interpretation of the Music Modernization Act once it files its court papers. THR reached out for comment on the suit.

In a securities filing earlier this year, the company discussed how between October 2018 and Dec. 31, 2020, the mechanism for obtaining a compulsory license was no longer operative and there was "risk" in situations where it had no direct license and couldn't locate the owner of a composition. Additionally, Spotify warned that the Music Modernization Act, when fully implemented, could actually increase the cost and difficulty of obtaining licenses, especially upon any delay in adoption of new regulations.

Eminem's publisher is doing more than merely questioning Spotify's compliance with copyright law. The lawsuit also makes a pretty bold argument regarding a new law's constitutionality.

The Music Modernization Act held out a carrot for streamers in the form of essentially a blank slate for past copyright infringement. Those who didn't sue by the end of last year were out of luck (which explains why Tom Petty's publisher Wixen filed a since-settled case against Spotify on New Year's Eve).

But as Eight Mile contends, the attempt to retroactively wipe out a copyright holder's ability to recover profits, statutory damages and attorneys fees amounts to "an unconstitutional taking of Eight Mile’s vested property right," basically meaning the Music Modernization Act is allegedly in violation of the Takings Clause of the Fifth Amendment.

Spotify and others participating in the tech industry's lobbying efforts "knew what they were doing," asserts the Eight Mile complaint. "Given the penny rate for streaming paid to songwriters, the elimination of the combination of profits attributable to infringement, statutory damages and attorneys’ fees would essentially eliminate any copyright infringement case as it would make the filing of any such action cost prohibitive, and ensure that any plaintiff would spend more pursuing the action than their recovery would be. In addition, with the removal of these remedies, it cleared the last hurdle for Spotify to go public, thereby reaping its equity owners tens of billions of dollars. The unconstitutional taking of Eight Mile’s and others vested property right was not for public use but instead for the private gain of private companies."

The application of the Takings Clause to copyright reform is, in the words of a 2015 article in the Harvard Law Review, "largely unexplored" territory, with scholars at times having "expressed concern that applying Takings Clause scrutiny to intellectual property might inhibit legal change."

The issue of an unconstitutional taking did come up during the legislative process, though it was mostly geared towards discussion of a different aspect of the Music Modernization Act — treatment of pre-1972 recordings, which have now become eligible for digital royalties.

The lawsuit from Eminem's publisher is now set to put tough issues before a court.

As relief for alleged copyright infringement, Eight Mile seeks Spotify's substantial profits, which the complaint painstakingly attempts to attribute to sweeping copyright theft of songs like "Lose Yourself." (Universal, Sony, and Warner Music own big equity stakes in Spotify.) If the plaintiff runs into trouble demonstrating how Spotify has benefited from failing to secure licenses, the lawsuit seeks in the alternative the maximum amount of statutory damages — $150,000 for each of the 243 works at issue, which computes to $36.45 million. The lawsuit also seeks a judicial declaration that Spotify does not qualify for limitation from damages under the Music Modernization Act as well as a second declaration that the law's retroactive elimination of damages available for copyright infringement is unconstitutional.
https://www.hollywoodreporter.com/th...al-law-1233362





Top U.S. Publishers Sue Amazon's Audible for Copyright Infringement

Amazon.com Inc’s (AMZN.O) Audible was sued by some of the top U.S. publishers for copyright infringement on Friday, aiming to block a planned rollout of a feature called ‘Audible Captions’ that shows the text on screen as a book is narrated.

The lawsuit was filed by seven members of the Association of American Publishers (AAP), including HarperCollins Publishers, Penguin Random House, Hachette Book Group, Simon & Schuster, and Macmillan Publishers.

“Essentially Audible wants to provide the text as well as the sound of books without the authorization of copyright holders, despite only having the right to sell audiobooks,” AAP said in a statement.

The lawsuit was filed in the United States District Court for the Southern District of New York.

Audible did not immediately respond to a request for comment.

Reporting by Ayanti Bera in Bengaluru; Editing by Anil D'Silva
https://www.reuters.com/article/us-a...-idUSKCN1VD1ZY





YouTube Sues Alleged Copyright Troll Over Extortion of Multiple YouTubers

Chris Brady seemed to target the Minecraft community, according to the suit
Julia Alexander

YouTube is going after an alleged copyright troll, claiming that Christopher Brady used false copyright strikes to extort YouTube creators and harmed the company in the process. The company is suing Brady under the Digital Millennium Copyright Act’s (DMCA) provisions against fraudulent takedown claims, seeking compensatory damages and an injunction against future fraudulent claims.

The lawsuit, first spotted by Adweek reporter Shoshana Wodinsky, alleges that Brady sent multiple complaints claiming that a couple of Minecraft gaming YouTubers — “Kenzo” and “ObbyRaidz” — infringed on his copyrighted material in January. (Their legal names were not listed in the lawsuit.) YouTube removed the videos that Brady claimed were infringing on his copyrighted material, as the company does whenever a claim is submitted.

ObbyRaidz was sent a message from Brady, according to the lawsuit, that stated if the YouTuber didn’t pay Brady $150 via PayPal (or $75 in bitcoin), he would issue a third copyright strike. This would essentially terminate ObbyRaidz’s channel and remove all of his videos from the platform. Kenzo was sent a similar message, but Brady requested $300. ObbyRaidz spoke about the situation in a video, noting that he made multiple attempts to get in touch with someone at YouTube but didn’t make any progress.

“Brady has submitted these notices as part of a scheme to harass and extort money from the users that he falsely accuses of infringement,” the lawsuit reads.

It wasn’t until ObbyRaidz and Kenzo spoke about the alleged extortion on their individual YouTube channels that YouTube’s team learned about the issue, according to the lawsuit. Still, the company alleges that Brady continued to go after members of the YouTube gaming community. He allegedly sent four copyright takedown notices to YouTube between June 29th and July 3rd, stating that YouTuber “Cxlvxn” infringed on his content.

“Brady’s extortionate and harassing activities described here may, at least in part, be motivated by his failings in his Minecraft interactions,” the lawsuit reads.

Copyright claim abuse — often referred to as copyright striking within the YouTube community — is a big issue on the platform. Third-party companies and aggressors will often use the tactic as a means of making a statement. Sometimes creators will weaponize the ability to make a claim while feuding with another creator.

“We regularly terminate accounts of those that misuse our copyright system,” a YouTube spokesperson told The Verge. “In this case of particularly egregious abuse, where the copyright removal process was used for extortion, we felt compelled to pursue further legal action and make it clear that we do not tolerate abuse of our platform or its users.”

Brady’s case seems to have been unusual in a number of respects. YouTube also alleges that Brady used “at least 15 different online identities, all of which YouTube traced back to him,” in order to serve various copyright infringement claims. The time spent on the investigation has caused “YouTube to expend substantial sums on its investigation in an effort to detect and halt that behavior, and to ensure that its users do not suffer adverse consequences from it.” As a result of Brady’s actions and the methods he took to cover up his identity, YouTube may be “unable to detect and prevent similar misconduct in the future.”
https://www.theverge.com/2019/8/19/2...tion-minecraft





Court Rules That “Patent Troll” is Opinion, Not Defamation
Joe Mullin

Free speech in the patent world saw a big win on Friday, when the New Hampshire Supreme Court held that calling someone a “patent troll” doesn’t constitute defamation. The court’s opinion is good news for critics of abusive patent litigation, and anyone who values robust public debate around patent policy. The opinion represents a loss for Automated Transactions, LLC (ATL), a patent assertion entity that sued more than a dozen people and trade groups claiming it was defamed.

EFF worked together with the ACLU of New Hampshire to file an amicus brief in this case, explaining that the lower court judge got this case right when he ruled against ATL. That decision gave wide latitude for public debate about important policy issues—even when the debate veers into harsh language. We’re glad the New Hampshire Supreme Court agreed.

Last week’s ruling notes that “patent troll” is a phrase used to describe “a class of patent owners who do not provide end products or services themselves, but who do demand royalties as a price for authorizing the work of others.” However, the justices note that “patent troll” has no clear settled definition. For instance, some observers of the patent world would exclude particular entities, like individual inventors or universities, from the moniker “patent troll.”

Because of this, when ATL’s many critics call it a “patent troll,” they are expressing their subjective opinions. Differences of opinion about many things—including patent lawsuits—cannot and should not be settled with a defamation lawsuit.

“We conclude that the challenged statement, that ATL is a well-known patent troll, is one of opinion rather than fact,” write the New Hampshire justices. “As the slideshow demonstrates, the statement is an assertion that, among other things, ATL is a patent troll because its patent-enforcement activity is ‘aggressive.’ This statement cannot be proven true or false because whether given behavior is ‘aggressive’ cannot be objectively verified.”

The court ruling also upheld tough talk about ATL’s behavior beyond the phrase “patent troll.” For instance, the court looked at statements referring to ATL’s actions as “extortive,” and rejected defamation claims on that basis, finding that it was rhetorical hyperbole. Another ATL critic had complained that ATL’s efforts “cost them only postage and the paper their demand letters are written on.” This, too, was hyperbole, part of the give-and-take of a public debate.

This case has its origins in the patents of inventor David Barcelou, who claims he came up with the idea of connecting ATMs to the Internet. As Barcelou describes in his defamation lawsuit, he saw “his business efforts fail,” before he went on to transfer patent rights to ATL and create a patent assertion business.

ATL began suing banks and credit unions that were allegedly using Barcelou’s patents in their ATMs. In all, about 200 different companies paid ATL a total of $3 million in licensing fees to avoid litigation—that’s an average of $15,000 per company.

But when they were finally examined by judges, ATL’s patents failed to hold up. The Federal Circuit invalidated several patent claims owned by ATL, and further found that the defendants’ ATMs did not infringe the Barcelou patents.

After that court loss, ATL had a steep drop in licensing revenue. That’s when ATL launched its defamation lawsuit, blaming its critics for its setbacks.

For software developers and small business owners who bear the brunt of patent troll demands and lawsuits, the New Hampshire decision sends a clear message. If you’re upset about the abuses inherent in our current patent system, it’s okay to speak out by using the term “patent troll.” Calling out bad actors in the system is part and parcel of the debate around our patent and innovation policies.
https://www.eff.org/deeplinks/2019/0...not-defamation





How Artist Imposters and Fake Songs Sneak Onto Streaming Services
Noah Yoo

Last December, new music from Beyoncé and SZA appeared out of nowhere on Spotify and Apple Music. Released under the names “Queen Carter” and “Sister Solana” respectively, these full-length projects initially seemed like surprise drops with a twist. Soon fans realized that something wasn’t right: Many of the Beyoncé recordings came from old sessions, and the SZA songs sounded like unfinished demos, which the singer later confirmed. Neither Beyoncé nor SZA had anything to do with the releases, in fact. It wasn’t the first time a big artist’s music had been uploaded illegally to Spotify and Apple Music, and it wouldn’t be the last.

In the most troubling of these scenarios, fake releases have actually crept up the streaming charts. In March 2019, when a fake Rihanna album called Angel was uploaded to iTunes and Apple Music under the name “Fenty Fantasia,” it made it as far as No. 67 on the iTunes worldwide albums chart before being yanked off the platform. Then, in May, a leak of Playboi Carti and Young Nudy’s “Pissy Pamper / Kid Cudi” was uploaded to Spotify as “Kid Carti,” under the artist name “Lil Kambo.” Two million-plus streams later, “Kid Carti” topped the service’s U.S. Viral 50 chart before being removed. Ironically, “Pissy Pamper / Kid Cudi” was never released officially because of sample clearance issues involving Mai Yamane, whose 1980 song “Tasogare” serves as the basis for its beat. None of the involved artists—Yamane, Carti, Nudy—ultimately saw a dime from streams of the song.

The related artists on Lil Kambo’s page revealed even more Playboi Carti leakers, as well as “artists” who were masquerading as Juice WRLD and Lil Uzi Vert. Given the prevalence of such impersonators, it came as no surprise when “Pissy Pamper / Kid Cudi” made its way up the Spotify Viral chart again, under a different name, a month after the first fake was removed. Before the end of June, five more unreleased Playboi Carti tracks appeared on the rapper’s official Apple Music page. Fans celebrated the leaks, which made headlines on Genius and The Fader before being removed the following day.

Suspicious bootlegs and fraudulent uploads are nothing new in digital music, but the problem has infiltrated paid streaming services in unexpected and troubling ways. Artists face the possibility of impersonators uploading fake music to their official profiles, stolen music being uploaded under false monikers, and of course, simple human error resulting in botched uploads. Meanwhile, keen fans have figured out where they can find illegally uploaded, purposefully mistitled songs in user playlists.

Here’s how the process works: Artists who use independent distribution companies such as DistroKid or TuneCore get paid royalties for their streams and typically cash out via services like PayPal. TuneCore states that their royalty calculations typically operate on a two-month delay, while DistroKid has a three-month delay on payments, meaning that royalties accrued from streams in January may not be available to cash out until March or April. Distribution companies generally stipulate that users must agree not to distribute copyrighted content that they do not own, and streaming services similarly specify that copyright-infringing content is not allowed. However, it’s easy for leakers to simply lie and upload infringing music, which may or may not be caught by the distributors’ fraud prevention methods. By abusing the limited oversight in the digital supply chain, it’s possible that leakers can make significant amounts of money off music they have zero rights to.
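To make the timing concrete, here is a minimal sketch of the payout delay described above, assuming royalties for a given month's streams become withdrawable only after the distributor's reporting delay elapses (roughly two months for TuneCore and three for DistroKid, per the figures above); the function and its name are invented for illustration:

```python
from datetime import date

# Approximate reporting delays in months, per the distributors' stated policies.
DELAYS = {"TuneCore": 2, "DistroKid": 3}

def earliest_cashout(stream_month: date, distributor: str) -> date:
    """First month in which royalties earned in `stream_month` can be withdrawn."""
    months = stream_month.month - 1 + DELAYS[distributor]
    return date(stream_month.year + months // 12, months % 12 + 1, 1)

# Streams played in January 2019:
print(earliest_cashout(date(2019, 1, 1), "TuneCore"))   # 2019-03-01
print(earliest_cashout(date(2019, 1, 1), "DistroKid"))  # 2019-04-01
```

That lag is the leaker's window: an infringing upload can accrue streams for weeks or months before anyone notices, and as the account below illustrates, the resulting royalties may still be paid out even after the tracks are removed.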

One leaker told Pitchfork that they were paid upwards of $60,000 in royalties this year by DistroKid and TuneCore, after uploading unreleased tracks by artists including Playboi Carti and Lil Uzi Vert onto Spotify and Apple Music. The leaker, who spoke under the condition of anonymity and provided transaction records in addition to withdrawal confirmations from distributors, said that they released the songs in order to please “eager fans” of the artists. And while much of the music was later removed, the documents viewed by Pitchfork indicate that royalties were still paid out, as much as $10,000 at a time.

Pitchfork reached out to representatives at DistroKid, TuneCore, Spotify, and Apple Music for comment regarding the possibility of royalties generated by copyright-infringing music being paid to an illegal uploader.

A spokesperson for Spotify said:

We take the protection of creators’ intellectual property extremely seriously and do not tolerate the distribution of content without rightsholder permission. As with any large digital services platform, there are individuals who attempt to game the system. We continue to invest heavily in refining our processes and improving methods of tackling this issue.

TuneCore Chief Communications Officer Jonathan Gardner said:

In addition to subjecting all uploaded material to a detailed content review process before it is delivered to any digital music service, it is also TuneCore’s policy to respond expeditiously to remove or disable access to any material which is claimed to infringe copyrighted material and which was posted online using the TuneCore service. By agreeing to TuneCore’s Terms of Service, each user also agrees, among other things, that, in the event that TuneCore is presented with a claim of infringement, TuneCore may freeze any and all revenues in the user’s account that are received in connection with the disputed material. While we cannot comment on any specific claims, we can say that TuneCore is committed to preventing our services from being used in connection with infringing or otherwise deceptive behavior.

DistroKid founder and CEO Philip Kaplan did not directly address the claims and instead offered this:

DistroKid recently launched DistroLock, which is an industry-wide solution to help stop unauthorized releases. Any artist, label, or studio can register their music with DistroLock, for free, to preemptively block it from being released by distributors and music services. We made DistroLock available for free to our competitors and other music services because by working together, we can help protect legitimate artists from fraud and infringement.

When reached by Pitchfork, a representative for Apple Music declined to comment.

To understand how leakers could game the system on paid platforms, it’s important to understand the huge amount of control held by digital distribution companies. Artists are unable to directly upload their music onto streaming services like Spotify or Apple Music (versus YouTube or SoundCloud, which are often thought of as less “legitimate”), so they must go through some sort of distributor. Last year, Spotify experimented with allowing artists to upload their own music directly, but the function was recently nixed so that the service could focus on “developing tools in areas where Spotify can uniquely benefit [artists and labels].”

The biggest record labels often oversee their own distribution, but there are independent digital distributors of all sizes out there. Artists who are just starting out typically depend on distributors with a lower barrier of entry, like DistroKid or TuneCore. There are scores of these companies, their main appeal being that they charge little to nothing to upload a song to streaming services. Uploads are generally vetted to varying degrees of thoroughness by algorithms, human beings, or a combination of both, depending on the company.

In the case of the Beyoncé and SZA leaks, the leakers distributed the tracks to Spotify and Apple Music via Soundrop. Zach Domer, a brand manager for Soundrop, says he believes the leakers used the service because it does not require an upfront fee for distribution. “It’s like, ‘Oh cool, I don't have to pay DistroKid’s $20 fee to do this fake thing,’” he said. “You can’t prevent it. What you can do is make it such a pain in the ass, and so not worth doing, that [leakers] just go back to the dark web.”

Domer told Pitchfork that Soundrop relies on a variety of systems to vet the legitimacy of their content, including “audio fingerprinting” systems similar to those powering the music identification app Shazam, as well as a small content approval team of three to four people. The team reviews any submissions that come back flagged, either because the songs triggered the fingerprinting system or have suspect metadata; an example of the latter would be the use of an existing artist name, which explains why these leaks typically don’t use artists’ official names. Though rudimentary, Soundrop’s vetting process is more extensive than some of their competitors’. Domer says, for example, that the fake song briefly uploaded to Kanye West’s Apple Music page last year should have been “super easy to catch.”
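For readers unfamiliar with the technique, here is a heavily simplified sketch of the Shazam-style “landmark” fingerprinting Domer alludes to: reduce a track to its prominent spectrogram peaks, pair nearby peaks into hashes, and flag an upload whose hash set overlaps strongly with a known recording. All of the parameters below (window size, peak threshold, pairing fan-out) are illustrative assumptions, not a description of Soundrop's actual system:

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import spectrogram

def fingerprint(samples: np.ndarray, rate: int = 44100) -> set:
    """Hash a mono audio signal into (freq1, freq2, time-delta) landmarks."""
    _, _, sxx = spectrogram(samples, fs=rate, nperseg=4096)
    # Keep time-frequency bins that are local maxima with above-average power.
    is_peak = (sxx == maximum_filter(sxx, size=20)) & (sxx > sxx.mean())
    peaks = np.argwhere(is_peak)             # rows of (freq_bin, time_bin)
    peaks = peaks[np.argsort(peaks[:, 1])]   # chronological order
    hashes = set()
    for i, (f1, t1) in enumerate(peaks):
        for f2, t2 in peaks[i + 1 : i + 6]:  # pair each peak with a few successors
            if 0 < t2 - t1 <= 200:
                hashes.add((int(f1), int(f2), int(t2 - t1)))
    return hashes

# Two uploads likely contain the same recording when their landmark sets
# overlap heavily, e.g. len(a & b) / len(a) above some threshold.
```

Because such landmarks tend to survive re-encoding and minor edits, a match can catch a leaked master even when the uploader changes the title and artist name, which is exactly the metadata trick leakers rely on.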

The fake song/real profile phenomenon doesn’t just happen to the Kanyes and Cartis of the industry. The manager of an unsigned act that has racked up over 50 million Spotify streams to date spoke with Pitchfork about their client’s struggles with impersonators throughout 2018. Fallible authentication measures made it possible for unsanctioned music to appear on said artist’s official Spotify profile. The manager issued takedown notices to the streaming service with mixed results: “The hurdle we came across was, will [Spotify] be able to remove the music, or will they shuffle it onto another profile and not actually remove it? There seems to be no consistency with which route is enforced.”

In one instance described by the manager, an impersonator went so far as to create and distribute a fake album under the artist’s name. According to the manager, it took three days for Spotify to remove it. “That was the first time we contacted a lawyer,” the manager said. “We didn't end up needing to pursue legal action, but we came to the conclusion that it is incredibly hard to even sue anyone who you cannot legally identify. And even then, that person could have multiple accounts on multiple uploading platforms. If they get caught on one, they could just go to another.”

While distributors are the ones who facilitate payments, all roads in the digital supply chain end with the streaming services. Companies like Spotify, Apple Music, Amazon Music, and Deezer are the final checkpoint before music reaches listeners. But with “close to 40,000” new tracks being uploaded to market leader Spotify every day, it seems near impossible, at least on the bigger services, to catch every single illegal upload before payouts accrue. There does not appear to be any publicly available information on how many of those tracks are vetted in the first place, or how many eventually get taken down due to copyright violations.

A source close to Spotify tells Pitchfork that it is standard practice for the company to flag pipelined releases from notable artists and double-check the accuracy of those uploads with the artists’ representatives before they go live. This policy might explain how that fake Kanye track made it onto his Apple Music page but never surfaced on Spotify. It also might explain how “Free Uzi”—released and promoted by Lil Uzi Vert as his next single but characterized as a “leak” by his label, Atlantic—never made it onto Spotify, despite initially showing up on other streaming services. But it’s unclear how many artists Spotify is willing to double-check for, and how that list is determined.

“When there’s a million gallons of water and a two-foot pipe for all of that water to come through, people start to figure out another way through,” said Errol Kolosine, an associate arts professor at New York University and the former general manager of prominent electronic label Astralwerks. “The fundamental reality is, if people are losing enough money or being damaged enough through this chicanery, you’ll see something change. But the little people who don’t have resources, well, it’s just the same story as always.”

When asked why labels haven’t pressed the issue of streaming fraud, several of the industry figures interviewed for this piece mentioned “the metadata problem.” This refers to the lack of a universal metadata database in music, which makes it incredibly difficult to keep track of personnel and rights holders on any given song, and thus a huge ongoing issue in the record business. Royalty tracking start-up Paperchain estimates that there is $2.5 billion in unpaid royalties owed to musicians and songwriters, due to shoddy metadata. (There doesn’t seem to be an industry consensus on this figure; by contrast, Billboard puts the estimate at roughly $250 million.)

It’s important to note that streaming scams will likely exist in some form with or without the existence of a metadata database. (“I don’t know if there’s ever going to be a pure technological solution to prevent somebody from uploading unreleased material under fake aliases, with fake metadata,” said Domer.) But the fractured state of music metadata makes it far easier for bad actors to entangle themselves in the streaming ecosystem. It should not be possible for outside individuals to gain access to artists’ official profiles on streaming services, and yet it occurs because there is no authentication protocol outside of individual companies’ own vigilance. Having a system in place to ensure accurate metadata across companies appears to be a necessary first step.

Spotify’s solution thus far seems to be the copyright infringement form on its website, which notes that artists “may wish to consult an attorney before submitting a claim.” Apple Music has a similar online form. As for the distribution companies, DistroKid appears to be the only one to date that has developed a promising defense strategy, the aforementioned DistroLock. That said, even DistroKid stakeholder Spotify has yet to announce any plans to integrate DistroLock within its platform.

Ultimately, the problem at hand is greater than the risk of lost royalties. The prevalence of leaks on established streaming services has a significant impact on an artist’s sense of ownership over their life’s work. The lines become blurred as to whether something actually “exists” in an artist’s canon if they never gave permission for it to be released. So while diehards might feel a thrill, circumventing the system and listening to unreleased songs by their favorite musicians, the leaks ultimately hurt those same artists. After the last of this June’s many leaks, Playboi Carti uploaded a brief explanation to his Instagram Stories: “Hacked ,” it read. “I haven’t released anything… I hate leaks.” Beneath it, a GIF sticker: “Leave me alone.”
https://pitchfork.com/features/artic...ming-services/





DISH Network Continues Campaign Against Pirate IPTV With a Lawsuit Targeting IPGuys
Bill Toulas

• DISH submits yet another lawsuit against a pirate IPTV provider, and this time the target is the IPGuys IPTV service.
• The satellite TV provider has identified three persons who hide behind the network of IPTV resellers.
• The broadcaster is asking for a permanent injunction, damage compensation, and equipment confiscation.

After targeting Easybox IPTV last week, East IPTV last month, and SET TV in May, DISH Network moves on with the submission of yet another lawsuit, this time against the IPGuys IPTV service. IPGuys is a well-known pirate IPTV platform that enjoys great success in the field, so naturally, DISH Network wants them down for good. DISH is a US-based satellite TV provider whose revenues shrink month after month due to the ongoing cord-cutting trend in the country. Pirate IPTV services offer quite a few of the channels that DISH Network carries, so the broadcaster is directly and indirectly harmed by this unfair competition.

IPGuys follows a common business model in this field by not offering an official website. Instead, they rely on an extensive network of resellers who sell pre-configured boxes that connect to the IPGuys’ servers without the need for additional fiddling. Despite this “anonymized” method of operation, plaintiffs found a way to figure out who is behind the IPGuys platform and included his name in the lawsuit. He is Tomasz Kaczmarek, a resident of Ontario, Canada.

In addition to Mr. Kaczmarek, two more people are named, and they are New Yorkers John and Julia Defoe. According to the lawsuit, these two provide the stolen broadcast feeds to IPGuys’ service, with emphasis on DISH’s channels. The rest of the pirating group is unknown, so the lawsuit refers to them as “John Does”, estimating that they are between one and ten individuals. DISH identified the role of these people as “seeders”, which means they provide content to IPGuys, who then re-broadcast it without possessing any license to do so. Thus, the John Does are considered accomplices.

The network of resellers is not left out of the lawsuit either. The primary sellers who are included in the lawsuit are IPTV Bazaar, GetIPTVOnline, Romie IPTV World, The Napster, Miracle Media Box Media, and IPGuys-Live. The lawsuit asks the District Court of New York to impose a broad permanent injunction that covers all defendants, as well as hefty damage compensation. The damages requested by DISH range from $10k to $100k for each violation, plus attorney fees and relevant costs. Finally, DISH asks for an order allowing it to take possession of and destroy all devices and equipment in the control of Kaczmarek, the Defoes, and the Does.
https://www.technadu.com/dish-networ...te-iptv/77595/





Hollywood Shut Down Vader Streams and is Ready to Receive $10 Million in Damages
Bill Toulas

• Vader Streams was shut down by ACE, Amazon, and Netflix, as revealed by the Ottawa court this week.
• The same coalition brought down several pirate IPTV platforms since 2018.
• Vader Streams will have to pay $10 million in damages, which isn’t that punishing given their magnitude.

Back in May, we reported about the imminent shutdown of the Vader pirate IPTV service platform. When the announcement came out from the owners of the service through Telegram, no one knew why things had to go this way, who was applying pressure on the operators of the service, and whether the subscribers of Vader were really running the risk of having their identities exposed to rightsholders or not. Vader administrators limited their announcement to reassuring statements about the protection of their users and resellers and claimed that closing down was their last resort.

Now, according to a story in Variety, it was a coalition of Hollywood studios and streaming companies that managed to bring Vader Streams down, so the speculation about what really happened in May has reached an end. The coalition was the usual suspect, ACE (Alliance for Creativity and Entertainment), and the streaming services were Netflix and Amazon, so the pressure on Vader was definitely insurmountable. The same group of entities shut down SET TV last month, OneStepTV.com in May, and Dragon Box, TickBox, and 123Movies in 2018.

ACE made its case in a court in Ottawa by accusing Vader Streams of providing a library of 2400 films and 350 TV shows from hundreds of channels. The estimated number of subscribers that supported Vader financially is eight million, mostly coming from the U.S. and Canada. The court issued a permanent injunction this week, so the case went public, but it had previously been kept secret thanks to an “Anton Piller order”. This enabled ACE to search the premises that hosted the Vader infrastructure and seize all evidence of copyright infringement without prior warning to the operators. Such orders are useful when the plaintiff fears that the defendant will destroy the evidence to avoid paying damage compensation.

Speaking of which, Vader Streams is now facing $10 million in damages, while the ownership of the domains will be passed to ACE. $10 million may sound like a lot, but I’m sure Vader made much more through their illicit business. Still, it is a strong final blow for the once-popular pirating IPTV service. Charles Rivkin, CEO of the MPA (Motion Picture Association) has made the following statement on the court’s decision: “Actions like these can help reduce piracy and promote a dynamic, legal marketplace for creative content that provides audiences with more choices than ever before, while supporting millions of jobs in the film and television industry.”
https://www.technadu.com/hollywood-s...damages/77367/





Streaming Video Will Soon Look Like the Bad Old Days of TV

As media monoliths bundle their offerings, consumers will once again have to pay for a bunch of shows they don’t want.
Matthew Ball

By 2010, nine in 10 American homes were subscribed to a pay-television service. Many had also come to hate it.

Whether delivered by satellite, cable, fiber-optics, radio tower or DVR, “traditional” television had become overrun with klaxonic advertisements and series aimed at the widest possible audiences while meeting the narrowest of advertiser whims. Most television shows were still stuck on annoying schedules and delivered through dreadful channel guides and clunky cable boxes.

And it was expensive. The cost of the average cable package, stuffed with unwanted channels, had grown to $65 per month — and that was before hidden fees and unavoidable equipment charges.

So it was no surprise that audiences eventually flocked to streaming services, such as Netflix and Hulu, which offered a balm for pay-TV’s frustrations. According to Nielsen, the average American now watches nearly a quarter less traditional television than a decade ago, with those under 34 years old having halved their consumption.

But the streaming video era is already starting to resemble the old age of television that viewers were so excited to escape. Many of the problems TV watchers thought they had left behind are just being remixed under different brands and bundles.

The next 12 months will see several video services come to market, including Disney+, AT&T/WarnerMedia’s HBO Max, Comcast/NBCUniversal’s unnamed service, Apple TV+ and Quibi from the Hollywood executive Jeffrey Katzenberg. This increased competition will offer audiences even more high-quality series, the sorts of films that can no longer be found in theaters, interactive storytelling they’ve never seen before, and further improvements in navigation and advertising.

Yet in this new multiplatform world, viewers will find they have to pay for a fistful of streaming subscriptions to watch all of their favorite programs — and in the process, they’ll again end up paying for lots of shows and movies they’ll never care to watch.

AT&T’s WarnerMedia, for example, is bundling its TV channels, like TBS, HBO and TruTV, and film studios, including Warner Bros., DC Films and New Line, into its HBO Max service. Disney+ will have Marvel, Pixar and Lucasfilm, but also National Geographic, “The Simpsons” and Disney’s offerings for children.

And to navigate these many subscriptions, most households will want companies like Amazon or Apple to further bundle these services together into a single app — just as they do with Dish or Xfinity. All of this bundling will eventually mean the return of a high monthly bill.

Behind this bill is the cost of making high-quality programming. Although much has been said about how Netflix and Amazon have disrupted the video business, no media company has figured out how to make premium movies or TV shows significantly more cheaply. In fact, competition has driven production budgets even higher. Ultimately, these costs are paid by viewers (especially if they choose to watch without ads).

But the rise of digital video is bringing back more than just bloated bundles and bills. Many companies are returning to TV’s original business model: selling you anything and everything but the television show in front of you.

For decades, all TV content was “free.” Networks like ABC and CBS distributed their shows free of charge because they weren’t really in the business of selling audiences 30 minutes of entertainment. Instead, they were selling advertisers eight or so minutes of the audience’s attention. While most digital video services do charge their viewers, their real objective is to lock audiences into their ever-expanding ecosystem. Their TV network is the ad.

Amazon, Apple and Roku, for example, use their networks to drive sales of their devices, software, services and other products (to quote Amazon’s chief executive, Jeff Bezos, “When we win a Golden Globe, it helps us sell more shoes”). For YouTube and Facebook, original movies and shows are about increasing the number of ads they serve and the prices they charge for these ads.

AT&T spent some $109 billion buying Time Warner in the hopes that series like “Game of Thrones” and “Friends” would help the company add more wireless subscribers, increase data usage and expand its digital advertising/data arm, Xandr. Today, AT&T gives HBO away to many of its subscribers.

Similarly, the real goal of Disney+ isn’t the creation of a new revenue line for Disney. Instead, it’s about giving the company the ability to know each of its fans individually, including what content and characters they like, and how much, and to sell to them directly. This is why the annual plan is priced at only $70. Monthly subscription fees are trivial if Disney can use the service to sell more $5,000 cruises. The same applies for merchandise, movie tickets and other products.

To this end, many analysts believe the greatest threat to Netflix isn’t imminent competition from storied media giants like Disney and WarnerMedia, but the fact that it’s only in the video business and doesn’t even sell ads.

Giving content away to drive other businesses isn’t unique to movies and TV. Amazon and Google are moving into video gaming primarily to benefit their cloud-computing businesses, for example. Amazon, Apple and Google operate unprofitable music services because the money they lose per subscriber is outweighed by the extra revenue generated elsewhere.

This dynamic reflects the economic reality of the media industry. It has lots of cultural influence, but a relatively modest amount of consumer spending. In 2018, audiences worldwide spent some $300 billion on TV, $138 billion on video gaming, $41 billion on movie tickets and only $19 billion on recorded music. The same year, the five technology giants Apple, Amazon, Microsoft, Google and Facebook had revenue of more than $800 billion combined.

Will any of these companies be willing to greenlight a show about evil artificial intelligence? Or make a show like “Mr. Robot,” a thriller focused on evil mega-corporations and corrupt telecommunication giants? Or bet on a big-budget series that doesn’t offer the prospect of multiplatform synergies? Are they the ideal sponsors of a free press?

Even as the video industry reconstitutes with new players — under old business models and familiar problems — most people agree that TV has never been better. Consumers have more options, better shows and more diversity than ever before.

But at the same time, we’re entering a world in which our culture is programmed by vertically integrated trillion-dollar corporations. This may help us escape high prices and ads in the short term, but eventually the bill will come due.
https://www.nytimes.com/2019/08/22/o...ulu-cable.html





Netflix Tests Human-Driven Curation with Launch of ‘Collections’
Sarah Perez

Netflix is testing a new way to help users find TV shows and movies they’ll want to watch with the launch of a “Collections” feature, currently in testing on iOS devices. While Netflix today already offers thematic suggestions of things to watch, based on your Netflix viewing history, Collections aren’t only based on themes. According to Netflix, the titles are curated by experts on the company’s creative teams, and are organized into these collections based on similar factors — like genre, tone, story line and character traits.

This human-led curation is different from how Netflix typically makes its recommendations. The streaming service is famous for its advanced categorization system, where there are hundreds of niche categories that go beyond broad groupings like “Action,” “Drama,” “Sci-Fi,” “Romance” and the like. These narrower subcategories allow the streamer to make more specific and targeted recommendations.

Netflix also tracks titles that are popular and trending across its service, so you can check in on what everyone else is watching, as well.

The new Collections feature was first spotted by Jeff Higgins, who tweeted some screenshots of the addition.

If you’ve been opted in to the test, the Collections option is available at the top right of the app’s homepage — where My List would have been otherwise.

The suggestions are organized into editorial groups, with titles like “Let’s Keep It Light,” “Dark & Devious TV Shows,” “Prizewinning Movie Picks,” “Watch, Gasp, Repeat,” “Women Who Rule the Screen” and many more.

You can follow the Collection from the main screen, or you can tap into it to further explore its titles.

If you tap a collection that interests you, it smoothly expands to show the thumbnails of the suggested titles below a header that explains what the collection is about. You can choose to follow the suggestion from here too, which presumably ties into Netflix’s notification system.

Collections are also found on the app’s Home page, for those who have access to the new feature.

“We’re always looking for new ways to connect our fans with titles we think they’ll love, so we’re testing out a new way to curate Netflix titles into collections on the Netflix iOS app,” a Netflix spokesperson confirmed to TechCrunch. “Our tests generally vary in how long they run for and in which countries they run in, and they may or may not become permanent features on our service.”

This isn’t the first time Netflix has toyed with organizing content suggestions into Collections. The company’s DVD service (yes, it still exists) rolled out a similar Collections feature in its own mobile app.

This test comes at a time when Netflix is working on features to better retain existing subscribers amid increased competition, including that from upcoming rivals like Disney+ and Apple TV+, among others. On this front, it also recently launched a feature allowing users to track new and soon-to-launch releases, as a means of keeping subscribers anticipating what comes next.

Netflix said Collections is only available on iOS and, as a test, won’t be shown to all users.
https://techcrunch.com/2019/08/23/ne...f-collections/





Degrading Tor Network Performance Only Costs a Few Thousand Dollars Per Month

Attackers can flood Tor's bridges with just $17k/month, Tor's load balancers for only $2.8k/month, academics say.
Catalin Cimpanu

Threat actors or nation-states looking to degrade the performance of the Tor anonymity network can do it on the cheap, for only a few thousand US dollars per month, new academic research has revealed.

According to researchers from Georgetown University and the US Naval Research Laboratory, threat actors can use tools as banal as public DDoS stressers (booters) to slow down Tor network download speeds or hinder access to Tor's censorship circumvention capabilities.

Academics said that while an attack against the entire Tor network would require immense DDoS resources (512.73 Gbit/s) and would cost around $7.2 million per month, there are far simpler and more targeted means for degrading Tor performance for all users.

In research presented this week at the USENIX security conference, the research team showed the feasibility and effects of three types of carefully targeted "bandwidth DoS [denial of service] attacks" that can wreak havoc on Tor and its users.

Researchers argue that while these attacks don't shut down or clog the Tor network entirely, they can be used to dissuade users or drive them away from Tor through prolonged poor performance, which can be an effective strategy in the long run.

I. Targeting Tor bridges

In the first DDoS attack scenario the research team analyzed, academics said a threat actor can target Tor bridges instead of attacking each and every Tor server.

Tor bridges are special servers that act as entry points into the Tor network; unlike Tor guard servers, however, their IP addresses aren't listed in public directories, so they can't easily be blocked.

Users in countries where the local government has blocked access to the public Tor guard servers can configure the Tor Browser to use one of the few dozen built-in bridge servers to get around the censorship.

But researchers said that not all of these Tor bridges are currently operational, and that saturating the 12 that do work costs only about $17k/month.

If all 38 Tor bridges were repaired and made operational again, the attack would cost $31k/month, still a price tag within reach of any nation-state willing to prevent citizens and dissidents from connecting to the Tor network.
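To put those price tags in perspective, here is a back-of-the-envelope calculation in Python (ours, not the paper's exact model). It derives an implied booter price per Gbit/s from the researchers' full-network estimate, then, assuming cost scales roughly linearly with the bandwidth an attacker must saturate, works out how much flooding bandwidth the cheaper attacks would buy:

# Rough cost arithmetic for the bandwidth-DoS scenarios described above.
# Assumption (ours): booter cost scales linearly with saturated bandwidth.
FULL_NETWORK_GBITS = 512.73    # bandwidth needed to flood the whole network
FULL_NETWORK_COST = 7_200_000  # USD per month, per the researchers

usd_per_gbit_month = FULL_NETWORK_COST / FULL_NETWORK_GBITS
print(f"Implied booter price: ~${usd_per_gbit_month:,.0f} per Gbit/s per month")

# Working backwards, the targeted attacks need far less bandwidth:
for name, cost in [("12 working bridges", 17_000),
                   ("all 38 bridges", 31_000),
                   ("TorFlow load balancers", 2_800)]:
    print(f"{name}: ${cost:,}/month ~ {cost / usd_per_gbit_month:.1f} Gbit/s")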

II. Targeting TorFlow

In a second DDoS attack scenario, threat actors would target TorFlow, the Tor network's load-balancing system, which measures Tor relay capacity and distributes traffic accordingly to prevent some Tor servers from becoming overcrowded and slow.

Academics said that targeting all TorFlow servers with constant DDoS attacks using public DDoS booter services would only cost $2.8k/month, even less than the first attack they analyzed.

"Through high-fidelity network simulation [...], we find that such an attack reduces the median client download rate by 80%," researchers said.

III. Targeting Tor relays

For the third type of DDoS attack, academics chose to target Tor relays, the most common type of Tor server, which bounce Tor traffic between one another to help preserve user anonymity.

But instead of relying on DDoS stressers, which are mostly used for funneling large volumes of traffic at a target, academics tried a different approach: exploiting flaws in the Tor protocol itself.

These denial-of-service (DoS) attacks exploit logic faults to slow down the Tor protocol and increase download times for Tor content.

Such flaws have existed for years and have been successfully used in the past, although the Tor Project team has recently begun patching these issues.

During their simulations, academics put a price on how much one of these attacks would cost if aimed at the entire Tor network, rather than at just one Tor-based .onion domain at a time.

According to the research team, an attacker could increase the median download time of Tor traffic by 120% with just $6.3k/month, and by 47% with only $1.6k/month.

Certainly in the budget

Taking into account that most nation-states have budgets in the millions of US dollars, such attacks are more than feasible.

"Nation-states are known to sponsor DoS attacks, and the ease of deployment and low cost of our attacks suggest that state actors could reasonably run them to disrupt Tor over both short and long timescales," researchers said.

"We speculate that nation-states may, e.g., choose DoS as an alternative to traffic filtering as Tor continues to improve its ability to circumvent blocking and censorship."

Furthermore, the research team argues that the second and third attacks deliver better results for the money a threat actor invests than the older Sybil attacks (in which a malicious actor introduces rogue servers into the Tor network to gain more visibility into the traffic passing through them).

In other words, it's cheaper and more reliable to degrade Tor network performance than to try to deanonymize its traffic.

As for countering these threats to the Tor ecosystem, academics have also proposed some basic mitigations.

"We recommend additional financing for meek bridges, moving away from load balancing approaches that rely on centralized scanning, and Tor protocol improvements (in particular,the use of authenticated SENDME cells)," they said.

The problem with these mitigations is that they rely on increased financing of the Tor Project, a problem the organization has been trying to solve for years as Tor has become more popular.

Additional details about this research are available in a white paper named "Point Break: A Study of Bandwidth Denial-of-Service Attacks against Tor," which the research team has presented this week at the 28th USENIX Security Symposium in Santa Clara, US.
https://www.zdnet.com/article/degrad...ars-per-month/





Wireless Carrier Throttling of Online Video Is Pervasive: Study
Olga Kharif

U.S. wireless carriers have long said they may slow video traffic on their networks to avoid congestion and bottlenecks. But new research shows the throttling happens pretty much everywhere all the time.

Researchers from Northeastern University and University of Massachusetts Amherst conducted more than 650,000 tests in the U.S. and found that from early 2018 to early 2019, AT&T Inc. throttled Netflix Inc. 70% of the time and Google’s YouTube service 74% of the time. But AT&T didn’t slow down Amazon.com Inc.’s Prime Video at all.

T-Mobile US Inc. throttled Amazon Prime Video in about 51% of the tests, but didn’t throttle Skype and barely touched Vimeo, the researchers say in a paper to be presented at an industry conference this week.

"They are doing it all the time, 24/7, and it’s not based on networks being overloaded," said David Choffnes, associate professor at Northeastern University and one of the study’s authors.

To deliver videos people want to watch on their phones, sacrifices in speed are required, Verizon Communications Inc., AT&T and T-Mobile have said in the past.

While it’s true that slowing speeds can reduce congestion, the carriers’ behavior raises questions about whether all internet traffic is treated equally, a prime tenet of net neutrality. The principle states that carriers should not discriminate by user, app or content. The Federal Communications Commission enshrined net-neutrality rules in 2015, but after Donald Trump won the 2016 presidential election, a Republican-led FCC scrapped the regulations.

Following the release of Choffnes’ prior findings, several politicians raised concerns over net neutrality on U.S. networks. In February, three senators asked the FCC to investigate whether U.S. wireless carriers are throttling popular apps without telling consumers.

"It’s important to keep publishing the work," Choffnes said. "It would be nice if this is not completely forgotten. At least when there’s an appetite for legislation on this topic, we’ll have the data."

The discrepancies in throttling different video services could be due to errors; some carriers haven't been able to detect and limit certain video apps after those apps made technical tweaks.

"They may try to throttle all video to make things fair, but the internet providers can’t dictate how the content providers deliver their video," Choffnes said. "Then you have certain content providers that get throttled and some that don’t."

The researchers enlisted more than 126,000 smartphone users globally, who downloaded an app called Wehe to test internet connections. Information from those tests was aggregated and analyzed to check if data speeds are being slowed, or throttled, for specific mobile services.
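In rough terms, tests like these replay the same traffic twice, once recognizable as the real app and once in disguised form, and compare the resulting throughput. The sketch below illustrates that comparison logic with invented throughput samples and an arbitrary threshold; it is a simplification, not Wehe's actual statistical test:

import statistics

def looks_throttled(app_samples, control_samples, ratio_threshold=0.8):
    """Flag throttling when the app-identifiable replay runs markedly
    slower than a disguised replay of the same bytes."""
    app_median = statistics.median(app_samples)
    control_median = statistics.median(control_samples)
    return app_median < ratio_threshold * control_median

# Hypothetical throughput samples in Mbit/s from repeated replays:
video_like = [1.4, 1.5, 1.6, 1.5, 1.4]    # payload recognizable as video
randomized = [9.8, 10.2, 9.5, 10.0, 9.9]  # same bytes, scrambled payload

print(looks_throttled(video_like, randomized))  # True -> likely throttled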

Choffnes’ work has been funded by the National Science Foundation, Google parent Alphabet Inc. and ARCEP, the French telecom regulator. Amazon has provided some free services for the effort, too. He’s even had a deal with Verizon to measure throttling at U.S. carriers. Choffnes says Verizon can’t restrict his ability to publish research and the companies that support him don’t influence his work.
https://www.bloomberg.com/news/artic...ervasive-study





MIT Experts Find a Way to Reduce Video Stream Buffering on Busy WiFi

No more fighting with your family over who gets to stream in HD.
Georgina Torbet

Is there anything more annoying than trying to watch a video on a slow internet connection shared with a bunch of other users? Skips, endless buffering, and ugly pixelation can ruin the experience of watching a movie or TV show when everyone in your house is trying to stream at the same time.

Now a team from MIT has come up with a tool to help multiple people share a limited WiFi connection. The group from the Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed the Minerva system, which analyzes videos before playing them to check how much they would be impacted by being played at a lower quality.

Traditional protocols for WiFi sharing simply split the available bandwidth by the number of users. So if you're trying to watch an HD sports match on your TV and one of your kids is trying to watch a cartoon on their smartphone, you'll each be allocated half the available bandwidth. That's fine for your kid but terrible for you, as fast-moving videos like sports events suffer more from low bandwidth than other types of videos like cartoons.

Minerva can analyze both videos in an offline phase to see which would benefit from being allocated more bandwidth and which could be served using lower bandwidth without the quality suffering. The protocol then assigns bandwidth based on the needs of the different users, and will adjust itself over time in response to the video content being played.
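The report describes Minerva's allocator only in broad strokes, so here is a toy version of the idea rather than the published algorithm. It models each stream with a concave quality curve, where need stands in for how bandwidth-hungry the content is, and bisects for the single quality level every viewer can reach within the link's capacity:

import math

def equal_quality_allocation(needs, capacity, iters=60):
    """Split `capacity` Mbit/s so every stream reaches the same quality q,
    with quality modeled as q(b) = 1 - exp(-b / need). Streams with a
    higher `need` (e.g. fast-moving sports) receive more bandwidth."""
    def bandwidth_for(q, need):         # invert the quality curve
        return -need * math.log(1.0 - q)
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(iters):              # bisect on the shared quality level
        q = (lo + hi) / 2
        used = sum(bandwidth_for(q, n) for n in needs.values())
        lo, hi = (q, hi) if used < capacity else (lo, q)
    return {name: bandwidth_for(lo, n) for name, n in needs.items()}

# Hypothetical streams sharing a 10 Mbit/s link:
print(equal_quality_allocation({"sports_tv": 6.0, "cartoon": 1.5}, 10.0))
# The sports stream gets roughly 8 Mbit/s, the cartoon roughly 2.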

In real-world tests, Minerva was able to reduce rebuffering time by almost half, and in one third of cases it improved video playback quality by the equivalent of going from 720p to 1080p. And the system doesn't only work within households: the same principle could be used to share internet connections across entire regions, making it ideal for companies like Netflix and Hulu, which have to serve video to large numbers of users.

The system can be introduced by video providers without needing to change any hardware, making it essentially a "drop-in replacement for the standard TCP/IP protocol" according to the team.
https://www.engadget.com/2019/08/19/...deo-buffering/





The FCC has No Idea How Many People Don’t have Broadband Access

The FCC likely counts millions of unserved homes as having broadband.
Jon Brodkin

A new broadband mapping system is starting to show just how inaccurate the Federal Communications Commission's connectivity data is.

In Missouri and Virginia, up to 38% of rural homes and businesses that the FCC counts as having broadband access actually do not, the new research found. That's more than 445,000 unconnected homes and businesses that the FCC would call "served" with its current system.

Given that the new research covered just two states with a combined population of 14.6 million (or 4.5% of the 327.2 million people nationwide), it's likely that millions of homes nationwide have been wrongly counted as served by broadband. A full accounting of how the current data exaggerates access could further undercut FCC Chairman Ajit Pai's claims that repealing net neutrality rules and other consumer protection measures have dramatically expanded broadband access. His claims were already unconvincing for other reasons.

The new research was conducted by CostQuest Associates, a consulting firm working for USTelecom, an industry lobby group that represents AT&T, Verizon, CenturyLink, Frontier, and other fiber and DSL broadband providers. USTelecom submitted a summary of the findings to the FCC on Tuesday. The two-state pilot was intended to determine the feasibility of creating a more accurate broadband map for the whole US.

Why the FCC’s current data is wrong

The key problem with today's maps is that the FCC's Form 477 data-collection program, which requires ISPs to report coverage by census block, lets an ISP count an entire census block as served even if it can serve just one home in the block. It has always been known that this approach could undercount the number of unserved homes, but it was never clear exactly how far off the numbers were.
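A toy example with invented locations shows how quickly the any-one-home rule inflates coverage figures:

# Invented example: (census_block, can_be_served) for nine rural locations.
locations = [
    ("block_A", True), ("block_A", False), ("block_A", False),
    ("block_B", True), ("block_B", False),
    ("block_C", False), ("block_C", False), ("block_C", False),
    ("block_D", True),
]

# Form 477-style rule: a whole block counts as served if ANY location is.
served_blocks = {block for block, ok in locations if ok}
reported_served = sum(1 for block, _ in locations if block in served_blocks)
actual_served = sum(1 for _, ok in locations if ok)

print(f"Reported as served: {reported_served} of {len(locations)}")  # 6 of 9
print(f"Actually servable:  {actual_served} of {len(locations)}")    # 3 of 9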

The FCC's latest numbers suggest that 21.3 million Americans lack access to fixed broadband with speeds of at least 25Mbps down and 3Mbps up. But those numbers are based on the faulty census block data.

"In addition to other important metrics, our pilot shows as many as 38 percent of additional rural locations in Virginia and Missouri are unserved by participating providers in census blocks that would have been reported as 'served' in today's FCC Form 477 reporting approach," USTelecom and other trade groups representing telecoms and fixed wireless providers told the FCC. "These locations are homes and small businesses hidden from service providers and policymakers simply because of a lack of knowledge fueled by gaps in data—gaps that we can now fill."

USTelecom argued as recently as October 2017 that the FCC "should not seek to collect broadband deployment data that is more granular than at the census block level, because such a change would be unduly burdensome to providers." But industry groups and the FCC itself changed their stance after bipartisan complaints about the inaccuracy of US broadband data.

Three weeks ago, the FCC voted to require ISPs to give the FCC geospatial maps of where they provide service instead of merely reporting which census blocks they could offer service in.

The FCC's new policy should go a long way toward fixing the data problem. But the more accurate data may make it harder for Pai to claim that his deregulatory moves are closing America's broadband gaps. Pai didn't create the process for collecting the Form 477 data, but he has used it to claim that his policies created more broadband, even though the data showed broadband deployment was progressing at about the same rate it did during the Obama administration.

More accurate data will also help the FCC determine which areas should get the most federal funding to expand broadband access. The FCC's Connect America Fund has given billions to ISPs to expand Internet service since its creation in 2011, and Pai plans to continue that with a 10-year, $20.4 billion fund that would pay ISPs to bring broadband to unserved rural areas.

Data was wrong in 48% of rural census blocks

The CostQuest/USTelecom two-state pilot created a map (or "fabric") of virtually all homes and businesses that could be served by broadband if Internet providers built out to them. The pilot also asked ISPs to submit coverage data, and the ISP-submitted data was compared to the statewide maps to determine how many buildings lacked access.

"Creating the fabric revealed that in just two states over 450,000 homes and businesses exist that are counted as 'served' under current 477 reporting that are not receiving service from participating providers," CostQuest wrote. "While not every broadband provider chose to participate in this pilot—so the actual number of unserved may be lower—that still leaves the potential for substantial misrepresentations about service availability."

The pilot also showed that current broadband-availability data is wrong in 48% of rural census blocks and is "in many cases significantly different."

A nationwide version of the CostQuest map and data set could be completed in 12 to 15 months "for between $8.5-$11 million in upfront costs and $3-4 million in annual updates," the summary of key findings said. To create the fabric, the pilot used data sources including tax assessor and parcel attribute data, georeferenced building footprints, and road data, along with statistical analysis and crowdsourced in-person reviews of parcels to check accuracy.

CostQuest said its data is far more accurate than what standard geocoding tools provide. The company found that 61% of locations in rural areas were geocoded at an incorrect location and that 25% of the locations were off by more than 100 meters. Additionally, 23% of locations had been geocoded to the wrong census block.

In Missouri, CostQuest found that 9% of non-rural locations were unserved and that 36% of rural locations were unserved. In Virginia, 12% of non-rural locations were unserved and 39% of rural locations were unserved.

It's not clear whether the USTelecom/CostQuest approach will be completed nationwide. The FCC is requiring ISPs to submit geospatial coverage maps, and the FCC plans to create a crowdsourcing system to collect public input on the accuracy of those ISP-submitted maps. But that wouldn't require the use of CostQuest's system, and deadlines for ISPs to submit maps have not yet been announced.
https://arstechnica.com/tech-policy/...adband-access/





The Planet Needs a New Internet
Maddie Stone

When climate change comes for our coffee and our wine, we’ll moan about it on Twitter, read about it on our favorite websites, and watch diverting videos on YouTube to fill the icy hole in our hearts. We’ll do all this until the websites go dark and the networks go down because eventually, climate change will come for our internet, too.

That is, unless we can get the web ready for the coming storms.

Huge changes will be needed because right now, the internet is unsustainable. On the one hand, rising sea levels threaten to swamp the cables and stations that transmit the web to our homes; rising temperatures could make it more costly to run the data centers handling ever-increasing web traffic; wildfires could burn it all down. On the other, all of those data centers, computers, smartphones, and other internet-connected devices take a prodigious amount of energy to build and to run, thus contributing to global warming and hastening our collective demise.

To save the internet and ourselves, we’ll need to harden and relocate the infrastructure we’ve built, find cleaner ways to power the web, and reimagine how we interact with the digital world. Ultimately, we need to recognize that our tremendous consumption of online content isn’t free of consequences—if we’re not paying, the planet is.

Drowning infrastructure

You probably don’t think about it when you’re liking a photo or reading an article, but everything you do online is underpinned by a globe-spanning labyrinth of physical infrastructure. There are the data centers hosting the web and managing enormous flows of information every day. There are the fiber cables transmitting data into our homes and offices, and even across oceans. There are cell towers sending and receiving countless calls and texts.

By and large, this infrastructure wasn’t built with a changing climate in mind. Researchers and companies are only now starting to explore how threatened it is, but what they’ve found so far is alarming.

Take a study published last year by researchers at the University of Oregon and the University of Wisconsin-Madison. The authors decided to examine the internet’s vulnerability to sea level rise by overlaying projections of coastal inundation from the National Oceanic and Atmospheric Administration with internet infrastructure data compiled by Internet Atlas. They found that within the next 15 years, in a scenario that projects about a foot of sea level rise by then, 4,067 miles of fiber conduit cables are likely to be permanently underwater. In New York, Los Angeles, and Seattle, the rising seas could drown roughly 20 percent of all metro fiber conduit. These are the lines that physically ferry our Internet traffic from place to place. Another 1,101 “nodes”—the buildings or places where cables rise out of the ground, which often house computer servers, routers, and network switches to move our data around—are also expected to be swamped.

And that’s just in the United States. As far as senior study author Paul Barford knows, this vulnerability hasn’t been systematically studied elsewhere. But he expects to find a similar situation around the world.

“There’s a huge amount of human population that lives within close proximity of coastlines, and communications infrastructure has been deployed to support their needs,” Barford told Gizmodo.

Barford was reluctant to speculate how big of an internet disruption the coming cable inundation could cause. Conduits are typically sheathed in a tough, water-resistant polyethylene tube, and unlike electrical wires, the fiber ribbons inside can handle some water intrusion. But, as the study puts it, “most of the deployed conduits are not designed to be under water permanently.” If water molecules work their way into fiber micro-cracks, that could cause their signal to degrade. Electrical connections to the fiber cables could be fried, and if a submerged cable froze, the fibers could physically break.

Nobody knows how long it would take the damage to unfold. But Barford suspects that much of the at-risk infrastructure ultimately will have to be hardened in place or redeployed on higher ground. “It’s gonna be a major amount of work,” he said.

Gizmodo reached out to telecommunications companies flagged by the study as having the most vulnerable infrastructure to learn if this issue was on their radar. Several didn’t respond, one said they aren’t doing anything about the threat, and another indicated their networks would be fine because of “proper redundancy and route diversity.”

Dave Schaeffer, CEO of telecommunications company Cogent, expressed confidence in the fortitude of the cables. But Schaeffer did say there’s reason to worry about those places where the cables come out of the ground.

“Those would be impacted directly if those buildings came underwater,” he said, adding that while most nodes in their network sit at least 20 feet above sea level, more powerful storm surges could pose a growing threat. The company got a taste of what may be to come during superstorm Sandy, when a network hub housed at 10 Pine Street in New York City was inundated by storm surge and the company was forced to move its generator and a fuel tank to a higher floor, a process that took several months.

At least one telecommunications company is now explicitly planning for future climate disruptions. Earlier this year AT&T partnered with Argonne National Labs to build a “Climate Change Analysis Tool,” which Chief Sustainability Officer Charlene Lake told Gizmodo will allow the company “to visualize the risks of sea-level rise—at the neighborhood level and 30 years into the future—so we can make the adaptations that are necessary today in order to help ensure resilience.” Lake added that AT&T is also piloting the tool for high winds and storm surges, and in the future plans to incorporate other climate impacts, like drought and more severe wildfires.

Barford also flagged the threat of wildfires and storm surges as two areas of future investigation for his group. Then there’s the fact that climate change is driving temperatures up, which could increase the need for cooling at data centers, particularly those built in warm climates.

Ironically, in a world where these energy-intensive facilities have to draw even more power to stay cool during, say, a heat wave, local grids potentially could be placed at greater risk of brownouts, like the one that affected 50,000 customers in New York City last month. And while it’s purely hypothetical, if a major data center went dark, that could lead to widespread service disruptions.

As Barford put it, “there are cascading effects here that are complicated and deserve attention.”

Skyrocketing energy use

The internet may be threatened by climate change, but it’s hardly an innocent victim. Our collective addiction to the digital realm has an enormous climate impact.

“The digital mythology is built on words like cloud,” Maxime Efoui, an engineer and researcher at the French think-tank Shift Project, told Gizmodo. “Something that isn’t really real. That’s how we picture it.”

The reality, though, is that it takes loads of energy to stream all those on-demand videos and back up all those photos to the cloud. Anders Andrae, Senior Expert of Life Cycle Assessment at Huawei, told Gizmodo that the internet as a whole—including the energy used to power data centers, networks, and individual devices, as well as the energy used during the manufacturing of those devices—is responsible for about 7 percent of global electricity consumption, with power demands growing at around 8 percent per year. A report the Shift Project published in July found that digital technologies now account for 4 percent of the world’s greenhouse gas emissions—more than the entire aviation sector. And that footprint could double to 8 percent by 2025.
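Those growth figures compound quickly. Two lines of arithmetic (ours, purely for illustration) show what 8 percent annual growth in power demand implies:

import math

annual_growth = 0.08
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(f"At 8%/year, demand doubles every {doubling_years:.0f} years")  # ~9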

Gary Cook, an IT sector analyst with Greenpeace, said this footprint is being driven by skyrocketing data demands, particularly in more affluent countries. There are numerous culprits here, including the shift to next-generation networks like 5G which will allow for greater data flows, the rise of artificial intelligence, the proliferation of an internet of things, all those energy-gobbling Bitcoin transactions, and online video streaming, which accounted for a full 60 percent of global web traffic in 2018, per the Shift Project. From storing the videos in data centers to transferring them to our computers and smartphones via cables and mobile networks, everything about watching videos online requires electricity, so much so that our collective streaming emitted as much carbon as all of Spain last year.

If these numbers seem shocking to you, well, you’re not alone. “Every single time I speak to people who work in tech, people seem to be astonished by the fact that servers run on electricity and electricity comes often from fossil fuels,” Chris Adams, a director at The Green Web Foundation, a group that helps companies shift to renewable web hosting, told Gizmodo.

Clearly, the internet’s reliance on fossil fuels needs to change if we’re to stave off climate change’s worst impacts. An obvious place to start greening the energy supply is at those data centers, a huge and fast-growing piece of the pie that currently accounts for about 2 percent of global electricity use, according to a recent white paper.

Encouragingly, some tech companies have begun to do so. Apple now runs all of its data centers on renewables that it either owns or purchases in local markets. Google and Microsoft Azure, two of the biggest cloud companies, are purchasing renewable energy credits to match their data center growth. This means as their electricity use rises, the companies are paying for an equal amount of renewable energy to be built elsewhere. While this so-called offsetting strategy doesn’t eliminate the use of fossil fuel energy to power the data centers directly, both Google and Microsoft Azure say they have a long term goal of getting there. Google told Gizmodo that many of the company’s data centers already see “a strong degree of hourly matching with regional carbon-free energy,” while Microsoft Azure said it expects to source 60 percent of its data center electricity needs directly from renewables by the end of the year.

Because it boosts their bottom line, tech companies are also constantly improving data centers’ power use efficiency, and there’s no shortage of ideas for how to push things further. Google is now using AI to automate data center cooling, while Alibaba Cloud, a major cloud service in China, boasts an “immersion liquid cooling technology” that it says can reduce data center cooling needs by up to 90 percent. Some researchers have even suggested new data centers be built in Greenland, where A.C. needs would be minimal and clean hydropower is abundant.

However, Anne Currie, an engineer, science fiction author, and advocate for greening data centers, cautioned that efficiency improvements alone won’t clean up the internet, because the more efficient things are, the more we use them. “We just need to make it socially unacceptable to be hosting the internet on fossil fuels,” Currie said.

And most experts Gizmodo spoke with agreed that the tech industry isn’t moving in that direction fast enough. Disturbingly, Amazon Web Services, the world’s largest cloud provider, has since late 2014 tripled its data center operations in Virginia, a state that gets just a small fraction of its power from renewable wind and solar, according to a recent report by Greenpeace. AWS has also come under fire for its lack of transparency surrounding climate issues, including failing to report energy consumption and carbon emissions figures. (Amazon has said it will start reporting its carbon footprint this year.)

Reached for comment, Amazon Web Services called the Greenpeace report’s data on its energy consumption and renewables mix “inaccurate”, adding that the report overstates “both AWS’s current and projected energy use” and “does not properly highlight” the company’s investments in solar projects in Virginia. (Greenpeace asserts AWS’ data center growth in Virginia “far exceeds” these investments.) AWS added that it remains “firmly committed” to achieving its goal of 100 percent renewable energy for its global infrastructure, noting that it exceeded 50 percent renewable energy in 2018.

AWS has not stated when it aims to reach its 100 percent goal, and did not offer a target date, nor a date when one might be announced, when Gizmodo asked. Orion Stanger, a software engineer and member of Amazon Employees for Climate Justice, an employee-led organization that sprang up late last year to push Amazon to take more aggressive action on climate change, said Amazon’s continued failure to put a date on that goal is a problem.

“We could even go backwards to 20 percent [renewables] and then by some later date hit 100 percent and that would still qualify under the goal we’ve set,” Stanger told Gizmodo. He’d like to see his company set science-based targets around emissions reductions throughout its operation, including at data centers.

“We really want Amazon to lead on climate,” Stanger went on. “It’s been very much a follower in this space.”

Paul Johnston, a former AWS employee and green data center advocate, felt that unless companies are fined for their impact or otherwise incentivized to switch to renewables, the energy transition won’t keep pace with what science says is needed to avoid the worst impacts of climate change.

“I don’t think there’s any way around it,” he said when asked if government regulation will be necessary to compel companies to make the necessary shifts.

For some data centers, more regulation finally may be on the horizon. In July, Amsterdam, reportedly the largest data center hub in Europe, placed a temporary moratorium on building new data centers until some ground rules could be established concerning their operation. The city wants to set requirements that data centers use clean energy, and it wants the facilities to capture the prodigious waste heat they produce—yet another way data centers contribute to warming—and provide it to local citizens for free.

Amsterdam’s decision to pump the brakes on new data centers comes after the city’s data center power usage grew by a staggering 20 percent last year. Cook was glad to see the city “stepping in and trying to do a reset on how to manage growth.”

“Voluntary stuff has taken us so far,” he said. “Ultimately we need to have government step in and level the playing field.”

Digital “sobriety”

Powering our data centers, networks, and cities with more renewable energy would go a long way toward reducing the internet’s climate impact. But the uncomfortable reality is that it’s going to be hard to keep up in a world where we’re spending ever-increasing amounts of time watching videos and playing games online, browsing the web and scrolling our social media feeds (four activities that, together, make up nearly 90 percent of traffic downloaded from the web, according to a 2018 report by networking company Sandvine).

Some advocates say we need to pump the brakes on all this consumption. In its recent report about online video, the Shift Project called for a revolution of “digital sobriety”, which Efoui described as implementing policies to constrain the internet’s growth in a world of finite resources.

“If we really understand the gravity of the constraints that are coming to us and to our systems that we built... we have to take them into account,” he said.

How we’d actually go about constraining the web is an open question. Should governments impose emissions limits on server farms and data centers, and fine companies that exceed them? Should streaming services like Netflix encourage us to watch in standard definition over HD? Will grassroots campaigns to unplug spring up around the world, similarly to the emerging movement to give up flying? Efoui thinks we will need to gather “a lot of solutions together” and that different places will adopt different strategies depending on their infrastructure and society’s needs.

The changes don’t all have to be huge. In fact, a burgeoning field of research known as sustainable interaction design is showing that small tweaks to apps and websites can have a serious impact on consumption. A recent study on YouTube found that simply allowing users to turn off video streaming when they’re listening to music could slash the service’s 11-million-ton-a-year carbon footprint by up to 5 percent. As the researchers note, that’s “comparable in scale” to the climate benefits Google has achieved by purchasing renewables to power YouTube’s servers.

And it’s just one intervention. A notification that encourages social media users to take a break from feed-scrolling is another possibility. Or websites could get rid of all those autoplay ads nobody asked for. Kelly Widdicks, a PhD student at Lancaster University who studies the impact of internet-enabled device use on society and the environment, noted that Facebook’s decision to start autoplaying ads everywhere “increased traffic massively” for many users.

“Before you had to interact with the platform to watch something,” Efoui said. “Now you have to interact with the platform to stop watching. That’s actually a big change.”

Widdicks felt that companies might roll out some changes voluntarily if their customers made enough noise over, say, the health benefits of watching less. But she also saw value in thinking about what sorts of limits and restrictions ought to be imposed. Mike Hazas, a Reader at Lancaster University who studies the relationship between technology and sustainability, agreed, noting that researchers have estimated the internet could consume more than a fifth of the world’s electricity by 2030.

“If we were to double the airline industry by 2030, that’d be a major topic of discussion,” he said. (Indeed, it has been, for years.)

No one can say what shape the future internet will take, but things can’t go on the way they are now. And while individual actions alone won’t get us out of this mess, if enough of us change our behavior it will make a difference. And there are plenty of places to start.

We can ease up on our social media use. We can think twice before letting that next episode autoplay, or kick it old school and return to broadcast, which Hazas described as “very efficient” compared to streaming. We can make sure to host websites and buy cloud space with companies that have demonstrated a real commitment to clean energy.

Most of all, Hazas said, it’s important that we “make a conscious decision” rather than allowing ourselves to get swept along through an endless buffet of content. “These are very well designed services,” he said. “They keep us using them.”
https://earther.gizmodo.com/the-plan...net-1837101745





21st Century Datacenter Locations Driven by 19th Century Politics
George Moore

Google recently announced the availability of a new datacenter in Salt Lake City, Utah. This is the latest in datacenter investments by Microsoft, Facebook, Apple, Yahoo and others, distributed along a line corresponding to the 41st parallel in the United States.

Each of these companies is investing billions of dollars in these four cities:

• Microsoft has invested $3.5 billion in one of the world’s largest datacenters, located in Des Moines, Iowa, and another $750 million in a datacenter in Cheyenne, Wyoming.
• Apple has invested $1.35 billion in its latest datacenter in Des Moines, Iowa.
• Facebook is building its latest 500,000 square foot datacenter on 146 acres in Omaha, Nebraska.
• Google is investing $1 billion in its datacenter outside of Omaha, Nebraska. This is in addition to its new datacenter investments in Salt Lake City.
• The National Center for Atmospheric Research has deployed a 1.5 petaflop supercomputer in Cheyenne, Wyoming. This is one of the fastest supercomputers in the world.
• The US National Security Agency operates a massive intelligence data repository at its datacenter in Salt Lake City, Utah.

What is so special about the 41st parallel that would make so many different companies invest billions of dollars to build datacenters in these cities? It’s because the vast bulk of east/west data traffic in the United States passes through each of these cities via the largest collection of fiber optic cables from the highest diversity of telecommunication companies: AT&T, Verizon, Comcast, Level 3, Zayo, Fibertech, Windstream and others.

This fiber optic infrastructure provides the datacenters with unprecedented access to the absolute highest bandwidth in a virtuous cycle of investment: more datacenters drive more traffic, which drives more fiber optic cables, which drives more datacenters.

Why did all of these telecommunication companies choose to locate their fiber cabling along this specific route across the United States? It’s because each of these cables is buried in the contiguous 200-foot right-of-way alongside the first transcontinental railroad, completed in 1869. The United States government granted these land rights to the Union Pacific railroad via the Pacific Railway Act of 1862. If you’re a telecommunication company in 2019 wishing to deploy new fiber across the United States, you only need to negotiate with a single entity: the Union Pacific railroad. Their single strip of land completely bisects the United States, as shown by the original railway survey completed in 1864.

One of the best examples of this telecommunication co-location is the main uplink facility for EchoStar in Cheyenne, Wyoming. EchoStar maintains a fleet of 25 geostationary satellites for media broadcasting and film distribution. EchoStar purchased a large plot of land directly adjacent to the Union Pacific right of way, enabling them to directly tie into the transcontinental fiber cables buried next to the railroad.

Why was the first transcontinental railroad built along the 41st parallel from Council Bluffs, Iowa to Sacramento, California? Starting in 1853, the United States conducted Pacific surveys to map the best transcontinental route for a railway: along the 47th parallel, 39th parallel, 35th parallel and 32nd parallel. The US Secretary of War who oversaw the surveys, Jefferson Davis, strongly favored the southerly railroad route from New Orleans to San Diego: it was shorter, had no major mountains to traverse, and had lower operational costs because there would be no snowfall to clear from the tracks. However, in the 1850s no Congressman from a northern state would have voted for a southerly railroad route that would aid the South’s slave-based economy, and no Congressman from the south would have voted for a northerly route. This stalemate lasted until the start of the US Civil War. When the southern states seceded from the Union in 1861, the remaining northern politicians quickly passed the Pacific Railway Act of 1862, which fixed the starting location of the transcontinental railway at Council Bluffs, Iowa, heading west along the 41st parallel.

Why was Council Bluffs, Iowa on the 41st parallel chosen as the starting point? There were many competing towns lobbying for the privilege. Council Bluffs was picked because the Platte River valley, due west of the city, formed an unbroken 600-mile gentle rise across the Great Plains to the Rocky Mountains, providing the steam locomotives with a good source of water for their boilers. This same river water is now used for the adiabatic cooling of the modern datacenters along this route.

After the first railway was completed, Western Union immediately established the first telecommunications corridor within the railroad right of way and was soon carrying all transcontinental telegrams. Later, as AT&T established long-distance voice lines in the early 20th century, those same lines were also placed along the first transcontinental railroad. This collection of early lines grew and expanded into the vast collection of telecommunication options available in this corridor today.

Thus, political decisions from over 150 years ago now dictate the location of billions of dollars of modern datacenter investments.
https://www.linkedin.com/pulse/21st-...-george-moore/





Federal Officials Raise Concerns about White House Plan to Police Alleged Social Media Censorship
Brian Fung

Officials from the Federal Communications Commission and the Federal Trade Commission have expressed serious concerns about a draft Trump administration executive order seeking to regulate tech giants such as Facebook (FB) and Twitter (TWTR), according to several people familiar with the matter.

In a closed-door meeting last month, officials from the two agencies met to discuss the matter with a US Commerce Department office that advises the White House on telecommunications, the people said.

A key issue raised in the meeting was the possibility the Trump administration's plan may be unconstitutional, one of the people said. The draft order — a summary of which CNN obtained this month — proposes to put the FCC and FTC in charge of overseeing claims of partisan censorship on social media. But critics of the idea, including some legislators and policy analysts in the tech community, say it amounts to appointing a government "speech police" in violation of the First Amendment.

"This executive order would be the most transformative action related to the purpose of the FCC since the Telecommunications Act of 1996," said Blair Levin, a former FCC chief of staff during the Clinton administration. "This would give the FCC more power over content than it's ever had."

Agency officials now appear to share the critics' reservations. The pushback points to a sea of bureaucratic and legal difficulties ahead as the Trump administration seeks to put additional pressure on Silicon Valley's most dominant players.

President Donald Trump has been willing to plow ahead with policy before, over the professional opinions of other government experts. Beginning in 2017, for example, Trump sought repeatedly to ban transgender Americans from serving in the military, a move that multiple courts said was not supported by the Defense Department's own conclusions.

Right-wing critics, including Trump, have long claimed that an anti-conservative bias is baked into the tech industry's most popular products — though researchers have consistently failed to unearth systemic evidence of partisan discrimination.

That hasn't stopped conservative policymakers from targeting the purported effects of Silicon Valley's liberal bent. Within the White House, the people said, the efforts to draft the executive order are being led by a labor economics expert, James Sherk. Sherk spent over a decade at the right-leaning Heritage Foundation as a research analyst before joining the White House as a domestic policy adviser.

A White House spokesperson declined to make Sherk available for an interview and declined to comment on the interagency meeting. But the spokesperson referred CNN Business to Trump's previous promises to explore all policy options to address complaints of social media bias. The FCC and FTC also declined to comment.

As the Trump administration continues to work on the draft order, experts say its dubious constitutionality is just the tip of the legal iceberg.

For one thing, the FCC and FTC do not answer to the White House. As independent federal agencies, they report to Congress and cannot be ordered by the president to do anything.

The draft order addresses that hurdle, said three people familiar with the matter, by directing the Commerce Department's telecom office — the National Telecommunications and Information Administration — to ask the agencies to step in. NTIA didn't immediately respond to a request for comment.

But other aspects of the draft have raised questions, as well.

For years, under both Republican and Democratic leaders, the FCC has backed away from regulating websites or Internet companies — opting instead to regulate the providers of Internet access, such as Comcast (CMCSA) or Verizon (VZ).

In 2017, as he prepared to repeal the government's net neutrality rules for those providers, Chairman Ajit Pai praised that hands-off approach.

"The Internet is the greatest free-market innovation in history," Pai said in a speech. "Its success is due in part to regulatory restraint."

For the FCC now to assume a role overseeing social media would directly undercut that message, analysts say.

"It would completely contradict everything that the FCC said in the 'Restoring Internet Freedom Order' (RIFO) repealing net neutrality," wrote Harold Feld, a senior vice president at the consumer advocacy group Public Knowledge, in a recent blog post.

The stakes are higher than the simple appearance of a flip-flop. It could mean either a finding that the FCC has extremely broad powers to regulate the Internet itself under Section 230 of the Communications Decency Act, or a court tossing out the FCC's net neutrality deregulation as a case of "arbitrary and capricious" rule-making, Feld added.

Meanwhile, other legal analysts say it would be unprecedented for the FTC to prosecute a company over the way it justifies content moderation.

"The FTC just doesn't sue media companies over their editorial policies," said Berin Szoka, president of the libertarian-leaning think tank TechFreedom. "Second-guessing whether a company is politically 'neutral' would mean substituting a regulator's editorial decisions for a private company's — something the First Amendment forbids."

Thus far, officials from the FCC and FTC have largely refrained from speaking publicly against the draft order. Jessica Rosenworcel, a Democratic commissioner at the FCC, appeared to express shock in response to the draft order earlier this month with a one-word tweet: "What."

But officials have periodically signaled their reluctance to become an effective moderator of political speech.

Asked last November by Republican Senator Ted Cruz how the FTC could address allegations of conservative censorship, FTC Chairman Joseph Simons said it wasn't clear that his agency "should be addressing that at all."

"Unless it's something that relates to a competition issue, or it's unfair or deceptive, then I don't think we have a role," he said at a hearing.

Meanwhile, Pai has resisted calls by Trump to revoke the broadcast licenses of TV networks based on the content they run, saying he is a believer in the First Amendment.

But Pai has separately been outspoken in his own criticism of the tech industry, claiming in his 2017 remarks that Twitter "has a viewpoint and uses that viewpoint to discriminate."

That could put Pai in a challenging position as his agency seeks to distance itself from the draft order in its current form. As with other such interagency meetings, last month's would likely have included at least one of Pai's top lawyers, along with representatives for the White House, said Levin.

"I wouldn't be surprised if this meeting was requested by the FCC," said Levin. He added: "I think the White House would have been there. But then again, this White House works differently from a lot of other White Houses."
https://edition.cnn.com/2019/08/22/t...-social-media/





Phone Companies Ink Deal With All 50 States And D.C. To Combat Robocalls
Brakkton Booker

AT&T, Sprint and Verizon and nine other telecommunications companies teamed up with attorneys generals of all 50 states plus the District of Columbia to announce a new pact to eradicate a common scourge in America: illegal robocalls.

The agreement, which amounts to a set of anti-robocall principles, is aimed at combating and preventing the phone-ringing annoyance. Included in the deal is call-blocking technology which will be integrated into a dozen phone networks' existing infrastructure, at no additional charge to customers.

The carriers will also provide other call-blocking and call-labeling options for those customers who want more screening tools.

"We owe it to the most vulnerable in our communities to do everything in our power to protect them," North Carolina Attorney General Josh Stein said at an announcement in Washington on Thursday.

"Thanks to these prevention principles our phones will ring less often."

Stein and the other attorneys general who spoke at the press conference said that while robocalls regularly present hassles and interruptions for millions of Americans, in some instances the calls can also be harmful.

"Robocalls are also a very effective device for illegal conduct," said New Hampshire Attorney General Gordon MacDonald.

He added that robocallers prey on unsuspecting people. Once valuable personal and financial information is divulged, those duped individuals are at risk of losing "their savings, their identity and their security."

To get people to answer the phone, MacDonald said, about 40 percent of the scammers resort to a tactic known as the "neighbor spoofing technique." That's where bilkers mask who they are by placing calls using the same area code and first three digits as their potential victim's number.

Under the plan, service providers will help deploy technology, known to industry insiders as SHAKEN/STIR, to combat that practice and aid state attorneys general in locating and prosecuting fraudulent robocallers.

The Federal Communications Commission describes the technology as allowing a phone network receiving a call to verify that it comes from the number it purports to before it reaches the customer.
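In other words, the originating carrier cryptographically signs an attestation of the caller ID, and the receiving carrier verifies it before the phone rings. The real SHAKEN/STIR standards use certificate-based ES256 signatures over a JSON token (the PASSporT); the sketch below substitutes a shared-secret HMAC purely to illustrate the sign-then-verify flow, with made-up numbers:

import hmac, hashlib, json, time

CARRIER_KEY = b"demo-only-shared-secret"  # real SHAKEN uses X.509 certs + ES256

def sign_call(orig_number, dest_number):
    """Originating carrier attests: this call really comes from orig_number."""
    claims = {"orig": orig_number, "dest": dest_number, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_call(payload, tag):
    """Receiving carrier checks the attestation before ringing the phone."""
    expected = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = sign_call("+12025550123", "+12025550199")
print(verify_call(payload, tag))       # True: attested caller ID
print(verify_call(payload, "0" * 64))  # False: spoofed or unsigned call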

"I salute today's bipartisan, nationwide effort to encourage best practices for combating robocalls and spoofing and am pleased that several voice service providers have agreed to abide by them," said FCC Chairman Ajit Pai in a statement.

The statement continued: "We continue to see progress toward adoption of caller ID authentication using SHAKEN/STIR standards. And our call blocking work has cleared the way for blocking of unwanted robocalls by default and of likely scam calls using non-existent phone numbers."

In June the FCC encouraged phone carriers to block robocalls by default. As we've reported, prior to the ruling many companies offered services to block robocalls, but consumers had to specifically make the request to their carriers and often pay extra for the service.

By one estimate nearly 48 billion robocalls were made in the U.S. last year alone. According to YouMail, which conducted the survey, that represented a nearly 57% increase in total robocall volume over 2017 figures.

Other companies that signed on to the agreement announced on Thursday include CenturyLink, Comcast, Frontier Communications Corporation, T-Mobile USA, Bandwidth Inc., Charter Communications, Consolidated Communications, U.S. Cellular, and Windstream Services.
https://www.npr.org/2019/08/22/75352...mbat-robocalls





Federal Police Fight Court Ruling a Mobile Phone is Not a Computer

AFP granted warrant to unlock smartphone but decision overturned on grounds device not covered by Crimes Act
Josh Taylor

The AFP is fighting a ruling making its warrant to unlock a mobile phone invalid. Photograph: Lauren Hurley/PA

Australian federal police are fighting a federal court ruling that a smartphone is not considered a computer, a finding that invalidated a warrant they were using to force a suspect to unlock a phone.

In August last year, the AFP obtained a warrant under section 3LA of the Crimes Act to unlock a gold-coloured Samsung phone found in the centre console of the man’s car when he was pulled over and searched.

The man supplied the password for a laptop that was also in the car, and a second phone had no PIN, but when asked about the gold phone he answered “no comment” and would not provide a password for it.

He later claimed it wasn’t his phone and he didn’t know the password to access it.

The federal court last month overturned the magistrate’s decision to grant a warrant forcing the man to provide assistance in unlocking the phone.

The decision was overturned on several grounds; notably, Judge Richard White found that the Samsung phone was not a computer or data storage device as defined by the federal Crimes Act.

The law does not define a computer, but defines data storage devices as a “thing containing, or designed to contain, data for use by a computer”.

White found that the phone could not be defined as a computer or data storage device.

“While a mobile phone may have the capacity to ‘perform mathematical computations electronically according to a series of stored instructions called a program’, it does not seem apt to call such an item a computer,” he said.

“Mobile phones are primarily devices for communicating although it is now commonplace for them to have a number of other functions ... Again, the very ubiquity of mobile phones suggests that, if the parliament had intended that they should be encompassed by the term ‘computer’ it would have been obvious to say so.”

He also overturned the decision on the ground that the order was not specific enough about what the AFP required the man to do – provide a password, PIN or fingerprint, or decrypt any data.

The AFP commissioner argued that the order was written requiring the man to provide “particular information or assistance” in order to allow flexibility in what they required him to do without needing to get another warrant.

In the appeal filed earlier this month, obtained by Guardian Australia, the AFP argued that level of specificity was not required under the law, and White erred in stating that the phone was not a computer because a smartphone “performs the same functions and mathematical computations as a computer and is designed to contain data for use by a computer”.

No court date has yet been set for the appeal.

The Australian federal police declined to comment to Guardian Australia on the broader implications of the decision, stating it was inappropriate to comment while an appeal was under way.

In his judgment, White noted that much of section 3LA of the Crimes Act had been amended as part of the Telecommunications (Assistance and Access) Act passed in December 2018, and that the case in question concerns the law as it stood before those amendments.

The amendments to the Crimes Act, however, did not define a computer or data storage device beyond what was already in the law. The changes to 3LA just increased the penalty for failure to comply with such orders from two years to a maximum of 10 years in jail.
https://www.theguardian.com/australi...not-a-computer





Encryption has Created an Uncrackable Puzzle for the Real World

Encryption protects us, so maybe it's time for us to protect it. But no answer to the encryption debate is without a downside.

With a certain amount of inevitability, governments on both sides of the Atlantic are taking another swing at one of the technologies they really love to hate -- encryption.

Late last month, US attorney general William Barr warned that the use of end-to-end encryption -- which he described as 'warrant-proof' encryption -- "allows criminals to operate with impunity, hiding their activities under an impenetrable cloak of secrecy."

Similarly, the UK's new home secretary Priti Patel has more recently criticised the use of end-to-end encryption by messaging services like Facebook's WhatsApp.

"Where systems are deliberately designed using end-to-end encryption which prevents any form of access to content, no matter what crimes that may enable, we must act," she said.

"This is not an abstract debate: Facebook's recently announced plan to apply end-to-end encryption across its messaging platforms presents significant challenges which we must work collaboratively to address," Patel added.

Patel didn't indicate how the government would act, beyond asking Facebook and other tech companies "to work with us urgently on detailed discussions".

Governments have regularly called on tech companies to give up encryption in recent years, to little effect.

As Labour's shadow home secretary Diane Abbott told ZDNet: "The new home secretary repeats the errors of some of her predecessors. She seems not to understand that a general access to encrypted communications by the police and security services would effectively end those communications, because no-one could trust them."

"We know this government doesn't like evidence, but they really do need to understand that only a targeted, court-approved access by law enforcement and other agencies will work. If the home secretary's line is pursued, the criminals and terrorists could simply be driven underground and all the rest of us will lose the right to privacy."

Indeed, the UK government theoretically already has the powers it needs to demand that tech companies strip the encryption from their messaging services.

Under the controversial Investigatory Powers Act passed back in 2016, the government can require tech companies to remove 'electronic protection' -- encryption -- from messages in serious cases.

But in reality, that legal power is significantly limited, which is why the government hasn't used it. First, many of the biggest messaging companies are based in the US, which means they aren't particularly worried about what politicians in one medium-sized foreign market think.

Second, these companies are increasingly making security, usually underpinned by a commitment to end-to-end encryption, part of their marketing.

That's because consumers are becoming ever more aware of the benefits of security. For tech companies, offering customers the privacy of end-to-end encryption is now a competitive advantage.

Indeed, it's worth remembering this recent vogue for encryption only came about because of extensive US government overreach and snooping in the first place.

These trends make it much harder for tech firms to compromise on encryption: no company wants to offer a service that's known as 'the one the government can spy on easily'. On top of this, if you don't trust a tech company anymore (and plenty don't), then knowing it can't read your messages might make you feel slightly more comfortable using that service.

Beyond this is the technical issue, which is that these messaging companies have now designed their systems around end-to-end encryption. Breaking that model at the behest of one or two countries would be vastly expensive and weaken security for all users around the world.
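
To see what "breaking that model" means, it helps to look at the model itself. Here is a minimal sketch of end-to-end encryption using the PyNaCl library (my illustration, not any messaging service's actual protocol): private keys never leave the endpoints, so the relay server, or anyone compelling it, sees only ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

# Each side needs only the *public* half of the other's key pair.
alice_box = Box(alice_sk, bob_sk.public_key)
bob_box = Box(bob_sk, alice_sk.public_key)

ciphertext = alice_box.encrypt(b"meet at noon")  # all the server ever relays
print(bob_box.decrypt(ciphertext))               # b'meet at noon'
```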

An alternative is to provide a separate, less secure service for some nations, which would likely be shunned.

Tech companies get demands from all sorts of regimes to turn over customer communications. Some comply, some don't – but if liberal democracies start insisting on getting this data, it becomes very hard for a tech company to turn down the same demands from repressive states.

Legislation and enforcement

To stop the use of end-to-end encrypted messaging would require tough legislation not just in the UK, but also at least in the US (which has limited enthusiasm for such a move) and in Europe (which has very little inclination either). And then pretty much every other government in the world.

Such a concerted effort is highly unlikely -- and, even then, you'd only prevent the law-abiding majority from using encrypted services. But who would enforce it, and at what cost?

For those who did not want to be monitored, for whatever reason, services would always be available -- either home-grown or international. There are a few slightly more elegant solutions to the problem, but they are limited in scope.

So why do politicians keep bringing this up? The cheap shot is to say they do it for the easy headlines. But the truth is, the costs of encryption are real -- police cannot spot criminals or terrorists plotting -- and should be acknowledged. No answer to the encryption debate is without downsides, and we need to remember and admit that.

We are living in a time of unprecedented erosion of privacy. Some of that we are doing ourselves: we're carrying smartphones that can report where we are and what we are doing to an array of corporations in real time, and filling our homes with cameras and microphones that outdo Orwell's telescreens.

Some of the privacy erosion is being done to us -- the introduction of facial recognition systems across cities is the latest way that technology is chipping away at our privacy (and just arrived: facial recognition that can spot fear). Encryption may be one of the few forms of protection left open to us.

As one privacy advocate once put it to me, we are in a golden era of surveillance and the state has most of the picture of our lives -- the battle now is to protect those last few pixels missing from the image.

It's hard to see anything in the decades ahead other than the continued erosion of privacy by the technology around us. Whenever we give up another fragment of privacy, we should not expect to see it returned to us again. Don't give up those last precious pixels without thinking long and hard.
https://www.zdnet.com/article/encryp...he-real-world/





I Visited 47 Sites. Hundreds of Trackers Followed Me.
Farhad Manjoo

Earlier this year, an editor working on The Times’s Privacy Project asked me whether I’d be interested in having all my digital activity tracked, examined in meticulous detail and then published — you know, for journalism. “Hahaha,” I said, and then I think I made an “at least buy me dinner first” joke, but it turned out he was serious. What could I say? I’m new here, I like to help, and, conveniently, I have nothing whatsoever at all to hide.

Like a colonoscopy, the project involved some special prep. I had to install a version of the Firefox web browser that was created by privacy researchers to monitor how websites track users’ data. For several days this spring, I lived my life through this Invasive Firefox, which logged every site I visited, all the advertising tracking servers that were watching my surfing and all the data they obtained. Then I uploaded the data to my colleagues at The Times, who reconstructed my web sessions into the gloriously invasive picture of my digital life you see here. (The project brought us all very close; among other things, they could see my physical location and my passwords, which I’ve since changed.)

What did we find? The big story is as you’d expect: that everything you do online is logged in obscene detail, that you have no privacy. And yet, even expecting this, I was bowled over by the scale and detail of the tracking; even for short stints on the web, when I logged into Invasive Firefox just to check facts and catch up on the news, the amount of information collected about my endeavors was staggering.

My unique identifier shared across sites

The session documented here took place on a weekday in June. At the time, I was writing a column about Elizabeth Warren’s policy-heavy political strategy, which involved a lot of Google searches, a lot of YouTube videos, and lots of visits to news sites and sites of the candidates themselves. As soon as I logged on that day, I was swarmed — ad trackers surrounded me, and, identifying me by a 19-digit number I think of as a prisoner tag, they followed me from page to page as I traipsed across the web.

Looking at this picture of just a few hours online, what stands out to me now is how ordinary a scene it depicts: I didn’t have to visit any shady sites or make any untoward searches — I just had to venture somewhere, anywhere, and I was watched. This is happening every day, all the time, and the only reason we’re O.K. with it is that it’s happening behind the scenes, in the comfortable shadows. If we all had pictures like this, we might revolt.

Where I live

This tracker for Advertising.com received my almost exact location as latitude and longitude — about a quarter mile off from my actual location. Several other trackers gathered information about where I was, including my city, state, country and zip code. They derive this from my IP address, so I had no chance to opt out. They use the data to conduct targeted advertising but can also use it to track where I’m moving and build a more detailed picture of my interests and activities.
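
As an illustration of how little effort this takes, here is a minimal sketch of IP-based geolocation using MaxMind's geoip2 library and its GeoLite2 database. The choice of tooling is mine; the article doesn't say which vendor the tracker uses.

```python
import geoip2.database  # pip install geoip2

# Assumes a GeoLite2-City.mmdb database file has been downloaded from MaxMind.
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    resp = reader.city("203.0.113.7")  # the visitor's IP, visible on every request
    print(resp.location.latitude, resp.location.longitude)  # approximate lat/long
    print(resp.city.name, resp.postal.code)                 # city and zip code
```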

Widgets or trackers?

Tracking scripts like this one for Twitter allow websites to add useful features like share buttons. But the scripts often double as trackers meant to record site visits and build profiles about users. In this case, Twitter can use the information about this page to suggest new followers or sell more targeted advertising on its platform.

My unique identifier: 5535203407606041218

The internet wasn’t built to track people across websites. But that didn’t stop advertisers. They developed technology to share identifiers among websites. This line connects all trackers that were sharing one of my unique IDs, created by the advertising company AppNexus as I browsed the internet and then stored on my browser for others to use. I had about a dozen IDs shared among sites I visited, but this one was present on eight different pages, shared with nearly a dozen trackers and advertisers including Amazon, Yahoo, Google and lesser-known companies like SpotX and Quantcast.
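
Stitching those observations together is straightforward. A minimal sketch, with hypothetical logged data, of how trackers can be grouped by the synced IDs they were seen receiving:

```python
from collections import defaultdict

observed = [  # (shared_id, tracker_domain) pairs logged by the instrumented browser
    ("5535203407606041218", "adnxs.com"),
    ("5535203407606041218", "amazon-adsystem.com"),
    ("5535203407606041218", "spotxchange.com"),
    ("7788112233445566778", "quantserve.com"),
]

trackers_by_id = defaultdict(set)
for shared_id, domain in observed:
    trackers_by_id[shared_id].add(domain)

for shared_id, domains in sorted(trackers_by_id.items()):
    print(f"ID {shared_id}: seen by {len(domains)} trackers: {sorted(domains)}")
```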

Fingerprinting

Even when companies don’t have an ID to track me, they can use signals from my computer to guess who I am across sites. That’s partly why trackers like this one received more information about my computer than you could imagine being useful, like my precise screen size. Other trackers received my screen resolution, browser information, operating system details, and more.
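
The core of fingerprinting fits in a few lines. A minimal sketch, with an illustrative trait list not taken from any real script, of how stable device attributes can be hashed into a persistent identifier that needs no cookie at all:

```python
import hashlib
import json

traits = {  # attributes a script can read without storing anything on the machine
    "screen": "1512x982",
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)",
    "timezone": "America/New_York",
    "fonts": ["Charter", "Helvetica", "Menlo"],
}

# Same machine, same traits, same hash -- on every site, with nothing to delete.
fingerprint = hashlib.sha256(json.dumps(traits, sort_keys=True).encode()).hexdigest()
print(fingerprint[:16])
```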

Election tracking

Websites for Democratic presidential candidates Elizabeth Warren and Pete Buttigieg were also participating in aggressive online tracking. Their sites sent data to Facebook, Twitter, Google, Amazon and about a dozen other third-party trackers. Warren’s site also sent my latitude and longitude to Heap Analytics along with a field indicating whether I was living in an early-primary state (I wasn’t).

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Google, Google, everywhere

Google’s own domains don’t contain that many trackers. The same is true for Facebook. But that’s because they place most of their trackers on other websites. Google was present on every site I visited, collecting information on where I live, the device I used and everything I looked at.

Additional reporting and design by Stuart A. Thompson, Jessia Ma and Aaron Krolik. Illustration by Jessia Ma.
https://www.nytimes.com/interactive/...-tracking.html





Google Proposes New Privacy and Anti-Fingerprinting Controls for the Web
Frederic Lardinois

Google today announced a new long-term initiative that, if fully realized, will make it harder for online marketers and advertisers to track you across the web. This new proposal follows the company’s plans to change how cookies in Chrome work and to make it easier for users to block tracking cookies.

Today’s proposal for a new open standard extends this by looking at how Chrome can close the loopholes that the digital advertising ecosystem uses to circumvent those cookie controls. And soon, that may mean that your browser will feature new options that give you more control over how much you share without losing your anonymity.

Over the course of the last few months, Google started talking about a “Privacy Sandbox,” which would allow for a certain degree of personalization while still protecting a user’s privacy.

“We have a great reputation on security. […] I feel the way we earned that reputation was by really moving the web forward,” Justin Schuh, Google’s engineering director for Chrome security and privacy told me. “We provide a lot of benefits, worked on a lot of different fronts. What we’re trying to do today is basically do the same thing for privacy: have the same kind of big, bold vision for how we think privacy should work on the web, how we should make browsers and the web more private by default.”

Here is the technical side of what Google is proposing today: To prevent the kind of fingerprinting that makes your machine uniquely identifiable as yours, Google is proposing the idea of a privacy budget. With this, a browser could allow websites to make enough API calls to get enough information about you to group you into a larger cohort, but not to the point where you give up your anonymity. Once a site has exhausted this budget, the browser stops responding to any further calls.
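
Google hasn't published numbers, but the mechanism is easy to sketch. Here is a minimal illustration of the privacy-budget idea as described above, with made-up bit costs and a made-up cap:

```python
from collections import defaultdict

# Hypothetical "identifying bits" each API call reveals, and a per-site cap.
API_COST_BITS = {"screen_size": 4, "installed_fonts": 10, "device_memory": 2}
BUDGET_BITS = 12

spent = defaultdict(int)  # identifying bits already revealed, per origin

def call_api(origin: str, api: str):
    if spent[origin] + API_COST_BITS[api] > BUDGET_BITS:
        return None  # budget exhausted: refuse, or return only a generic value
    spent[origin] += API_COST_BITS[api]
    return f"<real {api} value>"

print(call_api("tracker.example", "installed_fonts"))  # within budget: answered
print(call_api("tracker.example", "screen_size"))      # 10 + 4 > 12: None
```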

Some browsers also already implement a very restrictive form of cookie blocking. Google argues that this has unintended consequences and that there needs to be an agreed-upon set of standards. “The other browser vendors, for the most part, we think really are committed to an open web,” said Schuh, who also stressed that Google wants this to be an open standard and develop it in collaboration with other players in the web ecosystem.

“There’s definitely been a lot of not intentional misinformation but just incorrect data about how sites monetize and how publishers are actually funded,” Schuh stressed. Indeed, Google today notes that its research has shown that publishers lose an average of 52% of their advertising revenue when their readers block cookies. That number is even higher for news sites.

In addition, blocking all third-party cookies is not a viable solution, according to Google, because developers will find ways around this restriction by relying on fingerprinting a user’s machine instead. Yet while you can opt out of cookies and delete them from your browser, you can’t opt out of being fingerprinted, because there’s no data stored on your machine (unless you regularly change the configuration of your laptop, the fonts you have installed and other identifiable traits that make your laptop uniquely yours).

What Google basically wants to do here is change the incentive structure for the advertising ecosystem. Rather than leaving advertisers to circumvent a browser’s cookie and fingerprinting restrictions, the privacy budget, in combination with the industry’s work on federated learning and differential privacy, is meant to give them the tools they need without hurting publishers, while still respecting users’ privacy. That’s not an easy switch and something that, as Google freely acknowledges, will take years.

“It’s going to be a multi-year journey,” said Schuh. “What I can say is that I have very high confidence that we will be able to change the incentive structures with this. So we are committed to taking very strong measures to preserve user privacy, we are committed to combating abuses of user privacy. […] But as we’re doing that, we have to move the platform forward and make the platform inherently provide much more robust privacy protections.”

Most of the big tech companies now understand that they have a responsibility to help their users retain their privacy online. Yet at the same time, personalized advertising relies on knowing as much as possible about a given user, and Google itself makes the vast majority of its income from its various ad services. It sounds like this should create some tension inside the company. Schuh, however, argued that Google’s ad side and the Chrome team have their independence. “At the end of the day, we’re a web browser, we are concerned about our users’ base. We are going to make the decisions that are most in their interest so we have to weigh how all of this fits in,” said Schuh. He also noted that the ad side has a very strong commitment to user transparency and user control — and that if users don’t trust the ads ecosystem, that’s a problem, too.

For the time being, though, there’s nothing here for you to try out, and no bits are shipping in the Chrome browser yet. For now, this is simply a proposal and an effort on the Chrome team’s part to start a conversation. We should expect the company to start experimenting with some of these ideas in the near future, though.

Just like with its proposed changes to how advertisers and sites use cookies, this is very much a long-term project for the company. Some users will argue that Google could take more drastic measures and simply use its tech prowess to stop the ad ecosystem from tracking you through cookies, fingerprinting and whatever else the adtech boffins will dream up next. If Google’s numbers are correct, though, that would definitely hurt publishers, and few publications are in a position to handle a 50% drop in revenue. I can see why Google doesn’t want to do this alone, but it does have the market position to be more aggressive in pushing for these changes.

Apple, which doesn’t have any vested interest in the advertising business, has already made this more drastic move with the latest release of Safari. Its browser now blocks a number of tracking technologies, including fingerprinting, without making any concessions to advertisers. The results for publishers are in line with Google’s cookie study.

As far as the rest of Chrome’s competitors, Firefox has started to add anti-fingerprinting techniques as well. Upstart Brave, too, has added fingerprinting protection for all third-party content, while Microsoft’s new Edge currently focuses on cookies for tracking prevention.

By trying to find a middle path, Chrome runs the risk of falling behind as users look for browsers that protect their privacy today — especially now that there are compelling alternatives again.
https://techcrunch.com/2019/08/22/go...s-for-the-web/





Soap, Detergent and Even Laxatives Could Turbocharge a Battery Alternative

Researchers are trying to develop options to lithium-ion and other batteries in a quest for quick bursts of power and extended energy storage.
XiaoZhi Lim

Living in a world with smartphones, laptops and cars powered by batteries means putting up with two things: waiting for a depleted battery to charge, and charging it more frequently when its once-long life inevitably shortens.

That’s why the battery’s cousin, the supercapacitor, is still in the game, even though batteries dominate electricity storage.

“There are circumstances where you don’t need a lot of energy, but you need a very quick surge of power,” said Daniel Schwartz, a chemical engineer who leads the Clean Energy Institute at the University of Washington.

For example, Dr. Schwartz’s new car has start-stop technology, which is common in vehicles in the European Union to meet stringent emission standards. Start-stop systems demand that the car’s starter battery deliver big bursts of power whenever the engine starts or stops, and that it recharge quickly to keep up. That is taxing for a battery, but it is a piece of cake for a supercapacitor.

Commercially, supercapacitors fell behind because they can’t store as much energy as batteries do. Today, they are used in niche applications like helping wind turbines cope with fluctuating winds.

But as demand for energy storage grows, whether to support electric vehicles or intermittent renewable power, scientists and consumers are keeping up their search for alternatives to conventional lithium-ion batteries.

A battery’s limited lifetime means it needs to be replaced every few years. In grid storage, that could generate a hefty amount of electronic waste. Batteries also pose a fire risk — manageable in a smartphone but dangerous in a vehicle or power plant.

In a study this month in the journal Nature Materials, researchers reported a new phenomenon that could potentially bring a supercapacitor’s energy storage capacity on par with lithium-ion batteries, using a new class of electrolytes composed of ionic liquids, or salts that remain liquid at room temperature.

The materials are abundant: The molecular components in this novel class of liquid salts are found in soaps, detergents and even stool softeners.

Supercapacitors charge quickly but store little energy because all the action takes place only at the interface where its solid components — the electrodes — and its liquid component — the electrolyte — meet. In contrast, a battery brings its charge inside the electrodes and thus uses the full volume of the electrodes for storage.

“Think of an electrode as a sponge,” said Dr. Schwartz, who was not involved in the study. “The battery soaks water up into all of the sponge, whereas the supercapacitor just has it on the surface of each pore.”
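
The gap is easy to quantify with the capacitor energy formula E = ½CV². A back-of-the-envelope comparison in Python, using my own illustrative numbers rather than the study's:

```python
# A large commercial supercapacitor cell: 3000 farads at 2.7 volts.
C, V = 3000.0, 2.7
E_joules = 0.5 * C * V**2   # about 10,900 J
E_wh = E_joules / 3600      # about 3 Wh

# A much smaller 18650 lithium-ion cell stores roughly 10 Wh.
print(f"supercapacitor cell: {E_wh:.1f} Wh vs ~10 Wh for an 18650 lithium-ion cell")
```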

Xianwen Mao, a chemical engineer at Cornell University and the lead author of the study, had been working in a research group led by Alan Hatton at the Massachusetts Institute of Technology to improve the surface of a supercapacitor’s electrodes. But then, a few years ago, Paul Brown, a chemist who studied ionic liquids, worked with Dr. Mao to focus on creating new electrolytes instead.

In the M.I.T. lab, Dr. Brown prepared new ionic liquids from positively and negatively charged ions that were significantly different in size. Crucially, the negatively charged ions were also common surface-active agents, or surfactants: giant molecules carrying a long, water-repelling tail while holding their negative charge on their water-loving heads.

When the ionic liquids were first tested in a prototype supercapacitor, Dr. Mao did not observe any significant improvement in energy storage capacity. But he didn’t abandon the idea. Noticing that the liquids were quite viscous, he decided to heat up the experiment. At 130 degrees Celsius and above, the prototype’s energy storage capacity abruptly spiked.

To understand this sudden improvement in energy storage capacity, the researchers looked at what was happening at the electrode-electrolyte interface. It turned out that the giant, negatively charged surfactant ions had corralled the small, positively charged ions into squeezing and huddling on the supercapacitor’s electrodes while their tails intertwined into a network.

Surfactants are known to self-assemble — for example, when a soap bubble forms. This self-assembly phenomenon was observed for the first time at the electrode-electrolyte interfaces, Dr. Mao said.

The high concentration of positively charged ions on the electrode means the supercapacitor packs more energy in less space. The researchers have applied for a patent to use the ionic liquids as supercapacitor electrolytes.

“They really laid out a clear set of design principles,” Dr. Schwartz said, adding that he expected to see “lots of follow-up work” based on this design.

“Like almost all research in energy storage systems, it’s not about one breakthrough electrode or a breakthrough in the electrolyte,” he said. “It’s about the whole system and how stable is that system, how well does it perform, what are the degradation mechanisms, and how much does it cost.”

Other than energy storage, the researchers think that, with some modifications, these ionic liquids could find practical uses in drug delivery or carbon dioxide capture. Researchers are now working to convert the ionic liquids into gel-like solids by linking the molecules into a network. They expect the gels to trap or release molecules, such as drugs or carbon dioxide, in a controlled fashion upon electrical stimulation, Dr. Mao said.

But perhaps what excites Dr. Mao the most is that these new ionic liquid electrolytes are made from everyday molecules, for which there is a huge variety of commercialized options to choose from.

“All of the starting materials are very inexpensive,” he said. “Just think about soaps and detergents.”
https://www.nytimes.com/2019/08/22/s...apacitors.html





Could Restorative Justice Fix the Internet?

Perhaps. But it relies on people being capable of shame, so …
Charlie Warzel

As we all spend our days yelling at one another online, it’s easy to despair and wonder: Is there any way to fix our toxic internet?

Micah Loewinger, a producer for WNYC’s “On the Media,” was pondering this question when he met Lindsay Blackwell, a Ph.D. student at the University of Michigan who studies online harassment. Ms. Blackwell, also a researcher at Facebook, had been toying with the idea of applying the principles of the restorative justice movement to online content moderation (you can listen to their episode here).

Restorative justice is an alternative form of criminal justice that focuses on mediation. Often, an offender will meet with the victim and the broader community and is given a chance to make amends. The confrontation, advocates of the technique argue, helps the offender come to terms with the crime while giving the victim a chance to be heard. If the relationship is repaired and the harm to the victim reduced, the offender is allowed to re-enter the community. Studies, including one by the Department of Justice, suggest the approach can be an effective way to decrease repeat offenses and that it works for both perpetrators and victims.

For Ms. Blackwell, applying a similar tactic to tech platforms made sense. Tech companies’ current enforcement actions, when taken, tend to be harsh and geared toward deterrence rather than treating the underlying causes of rule-breaking behavior.

Ms. Blackwell and Mr. Loewinger decided to run what they called “a highly unscientific” experiment on Reddit, a social network with tens of thousands of forum communities. Each community is policed by volunteer moderators who take down offensive posts and enforce that community’s set of rules. Ms. Blackwell and Mr. Loewinger teamed up with the moderators of Reddit’s r/Christianity community, which has roughly 200,000 members. It is diverse, comprising L.G.B.T.Q. Christians, fundamentalists, atheists and others with an interest in posting about the faith. Discussions get intense.

The pair selected three users who were barred for repeatedly violating rules. They created a chat room where the offender and community moderator would meet with Mr. Loewinger and Ms. Blackwell, who acted as mediators. The offenders would be confronted with past bad behavior and given the opportunity to better understand why they were barred. Upon successful completion, they’d be readmitted to the group.

The results were mixed. In one case, mediation broke down, in part because of Ms. Blackwell and Mr. Loewinger’s inexperience mediating and tensions between a user and a moderator that boiled over. The second case, which involved an anti-gay user who was accused of bullying an L.G.B.T.Q. user into committing suicide years ago, proved simply too toxic to continue. The third case, involving “James,” an atheist and biblical historian who was barred for repeatedly violating r/Christianity’s rules for civil discussion, was a success.

At various points throughout the chat log of the mediation, James expressed genuine shock. “Dang this wasn’t the context that I remembered,” he types at one point, after looking at past bullying posts. “I thought someone else was the instigator and I felt ganged-up on or something. But … looks like I was the instigator.” He apologized for lashing out, at one point suggesting “the problem is more obviously about (mis) communication and hostility that comes up in the course of these conversations.” Eventually, the moderators lifted their ban.

When I spoke to James over the phone about the process, he described his aggressive behavior as a kind of dissociation — a moment of weakness where he stopped seeing those on the other end of the thread as real people. “My frustration expressed itself as insult diarrhea with no regard to whether I was being reasonable,” he said. He noted that he’d been back in the community for two months, is more conscious of his interactions and has yet to break the rules.

James isn’t convinced the process could work for everyone. He argued that mediation was effective for his specific personality type. “It’s the element of shame,” he said. “I’m somebody who feels guilt being confronted and it allowed me to see I was the one at fault.” Ms. Blackwell and Mr. Loewinger’s mixed results suggest success is far from guaranteed. Online, mediators have to deal with pseudonymous individuals, trolls and pranksters with no desire to reform. Even those dealing in good faith might bristle at having to apologize or confront their victims. Given the nature of online harassment and bullying, the restorative justice approach is full of pitfalls. Forcing targeted minorities or vulnerable users to confront abusers, for one, could increase trauma or put undue burden on victims.

Most daunting is the issue of scale. There’s simply no way to replicate the amount of time and effort involved with Ms. Blackwell and Mr. Loewinger’s experiment across the web. “It’s like trying to moderate a wild river,” an r/Christianity moderator said in the chat logs. “It’s only getting worse, too. I can’t even begin to evaluate all of this stuff.” The ceaseless torrent of posts and comments is why tech platforms are increasingly turning to algorithms and artificial intelligence to solve the problem.

But successful moderation — the kind that not only keeps a community from collapsing under the weight of its own toxicity but also creates a healthy forum — requires a human touch. Even skilled moderators assume a huge psychological burden; many working for Facebook and YouTube are outside contractors, subjected daily to torrents of psychologically traumatizing content and almost always without proper resources. Even in small communities, keeping the peace requires a herculean effort. A recent New Yorker article described the job of two human moderators of a midsize tech-news message board as an act of “relentless patience and good faith.”

This reality makes Ms. Blackwell and Mr. Loewinger’s experiment equal parts compelling and dispiriting. Mr. Loewinger remains optimistic. “It’s easy to write off all people who exhibit jerk-ish behavior online as pathological trolls,” he told me. “Dislodging that assumption might hold the key to a less toxic web. The James case demonstrated to me that people are open to reflecting on what they’ve done, especially when treated with dignity.” Ms. Blackwell argued that having reformed users back in the community actually makes the forums healthier. “We will never effectively reduce online harassment unless we address the underlying motivations for participating in abusive behavior, and having reformed violators go on to model prosocial norms is an incredible bonus,” she said.

But if reform means an abundance of shame and dignity on the internet, it’s hard not to feel that all is already lost. Still, the pair’s earnestness is refreshing. And at its core there’s a lesson: If fast, scalable algorithmic solutions gave us the broken system we’ve got, it’s stripped-down patience and humanity that have the best chance of pulling us out.
https://www.nytimes.com/2019/08/20/o...e-justice.html

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

August 17th, August 10th, August 3rd, July 27th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black