Peer-To-Peer News - The Week In Review - March 24th, ’18
Posted by JackSpratts, 21-03-18, 07:18 AM

Since 2002

March 24th, 2018




Popular Pirate Streaming Giant 123movies Announces Shutdown
Kavita Iyer

With legal pressure on illegal streaming sites clearly growing, one more pirate streaming site is about to bite the dust. 123movies(hub), also known as GoMovies, the world’s most popular illegal streaming site with millions of visitors per day, has announced on its website that it is shutting down.

According to a message posted on the site, its operators say that they will shut down at the end of the week. At the same time, the operators are advising their users to “respect” filmmakers by paying for movies and TV shows instead of pirating them.

“We’ve been providing links to movies and shows for years. Now it’s time to say goodbye. Thank you for being our friends and thanks for staying with us that long,” the 123movies team writes.

“PS: Please pay for the movies/shows, that’s what we should do to show our respect to people behind the movies/shows,” the team adds.

The shutdown announcement, which is currently only visible on the classic homepage, comes after a recent investigation by the Motion Picture Association of America (MPAA), which identified a new threat based in Vietnam and named 123movies the world’s most popular illegal movie site.

“Right now, the most popular illegal site in the world, 123movies.to (at this point), is operated from Vietnam, and has 98 million visitors a month,” MPAA’s Executive Vice President & Chief of Global Content Protection, Jan van Voorn said.

“There are more services like this [123movies] – sites that are not helpful for local legitimate businesses,” added Voorn, who is working with the Office of the Police Investigation Agency (C44) to tackle the problem.

It’s unclear whether the site is being closed down under pressure or for some other reason; the 123movies team has yet to give a reason for the planned closure.
https://www.techworm.net/2018/03/123...-shutdown.html





Forensic Watermarking Attributed with Halving Losses from Online Video Piracy
Joseph O'Halloran

Research from YouGov and CDN technology provider Edgeware has found that anti-piracy measures could save billions as 39% of consumers admit they are likely to watch pirated film or TV content.

The YouGov and Edgeware online research surveyed more than 4,000 people globally and looked into the extent of illegal online television consumption and the impact of anti-piracy measures.

Moreover, Edgeware calculated that if the 21% of adults who said they were likely to watch pirated sports events did so, content owners could lose upwards of $9 billion per year in live-sports revenue.

As well as the fact that nearly two-fifths of viewers were likely to watch pirated content on demand by downloading or streaming illegally shared versions of popular film and TV, more than a fifth (21%) said they would watch live events – like live sports – from unauthorized online sources. Almost three in ten viewers watched pirated content at least once per month, while 39% of viewers were likely to watch pirated TV or films online. The most-cited reason for watching pirated content is ease of availability (32%), followed by cost (24%).

Yet the good news in the TV piracy research report was that half of viewers who said they would watch pirated content would be dissuaded from doing so if they knew a programme they were watching could be tracked back to its source using forensic watermarking.

“The illegal distribution of programming is a huge problem for content distributors and owners, with piracy costing them billions in lost revenue,” said Richard Brandon from Edgeware. “This research has shown that digitally watermarking content as it’s streamed will have a significant benefit. Those watching pirated content could drop by half, and forensic watermarking will also make it faster and easier to identify those illegally distributing content.”
https://www.rapidtvnews.com/20180320...eo-piracy.html





Owner of Illegal Music Sharing Site Sharebeast.com Sentenced to 5 Years in Prison
AP

A California man who operated what prosecutors say was one of the most successful illegal music sharing websites on the internet has been sentenced to five years in federal prison.

The U.S. attorney's office in Atlanta said in a news release Thursday that 30-year-old Artur Sargsyan of Glendale, Calif., owned and operated Sharebeast.com and other websites. Prosecutors say his file-sharing infrastructure allowed the illegal download of about a billion copyrighted musical works from at least 2012 through 2015.

Sargsyan pleaded guilty in September to criminal copyright infringement for private financial gain. In addition to the prison term, the judge ordered him to pay $458,200 in restitution and to forfeit nearly $185,000.

Prosecutors say Sargsyan ignored repeated notifications that he was illegally hosting and sharing copyrighted works.
https://www.usatoday.com/story/tech/...ars/450001002/





A $1.6 Billion Spotify Lawsuit is Based on a Law Made for Player Pianos

The hidden costs of streaming music
Sarah Jeong

Spotify is finally gearing up to go public, and the company’s February 28th filing with the SEC offers a detailed look at its finances. More than a decade after Spotify’s launch in 2006, the world’s leading music streaming service is still struggling to turn a profit, reporting a net loss of nearly $1.5 billion last year. Meanwhile, the company has some weird lawsuits hanging over its head, the most eye-popping being the $1.6 billion suit filed by Wixen Publishing, a music publishing company that represents the likes of Tom Petty, The Doors, and Rage Against the Machine.

So, what happened here? Did Spotify really fail to pay artists to the tune of a billion dollars all the while losing money? Is digital streaming just a black hole that sucks up money and spits it out into the cold vacuum of space?
"Spotify is fundamentally being sued for literal paperwork"

The answer is complicated. The amount of money that songwriters are making through streaming services like Spotify is oddly low, but the Wixen lawsuit itself exists in a bizarre universe of convoluted legal provisions that have very little bearing on fairness, common sense, or even how the technology actually works. And as Spotify’s IPO filing notes in its section on risk factors, the company is dependent on third-party licenses, which makes its business model especially vulnerable to any hiccups in the bureaucracy of music licensing.

Spotify is being sued by Wixen because of mechanical licenses — a legal regime that was created in reaction to the dire threat to the music industry posed by player pianos. Yes, the automated pianos with the rolls of paper with punch holes in them.

But that’s not actually the weird part. The weird part is that Spotify is fundamentally being sued for literal paperwork: Wixen says Spotify is legally required to notify songwriters in writing that they’re in the Spotify catalog — a fact that escapes probably zero songwriters today. A paper notice requirement made sense in the age of player pianos when songwriters could hardly be expected to keep track of every player piano roll in the country. It makes no sense in the age of Spotify, Pandora, and Apple Music. The question of what would be fair to pay artists is a contentious one, but the story of Wixen v. Spotify is not so much about paying the artists. It’s really a story about how, in a time when services, labels, and artists have never been better poised to work under a centralized, automated system for licenses and royalties, everyone keeps punching themselves in the face instead.

What are mechanical licenses?

When player pianos became popular at the turn of the century, they posed a threat to the music industry. Before recorded music existed, songwriters made their money by selling sheet music, but the manufacturers of piano rolls that played their songs didn’t pay them a dime. The songwriters claimed that the piano rolls were equivalent to sheet music and that they were entitled to royalties. The manufacturers disagreed.

In 1908, the Supreme Court ruled that player piano companies were not required to pay royalties to songwriters, and piano rolls didn’t fit the definition of sheet music. Outraged, the songwriters went to Congress, and the next year, “mechanical licenses” were created as part of the Copyright Act of 1909 to provide royalties to songwriters. As music technology evolved, mechanical licenses did, too, following the shift from player pianos to physical records and finally to digital downloads and certain kinds of streaming (delightfully labeled “digital phonorecord deliveries” in legal jargon).

It’s not clear what the word “mechanical” actually means. It might refer to the player pianos that inspired the law, or to how the license works: it’s what the law calls a compulsory license. It works automatically: a player piano roll company or a streaming service doesn’t have to negotiate rates with individual songwriters; they just have to follow the rules set by the Copyright Royalty Board every five years.


Unfortunately, nothing is ever that simple in copyright law, and when it comes to music copyright, it’s especially convoluted. This is because as the technology around music has evolved over time, Congress and other legislative bodies around the world have chosen to tack on all kinds of little fixes to keep the whole thing going. There isn’t one copyright in one song — it’s four or five or six or really, a potentially unknowable number of rights scattered across the whole work.

Right off the bat, a song is split into two different kinds of copyright: the composition and the sound recording. Composers have been writing songs for centuries — that part is pretty straightforward and well-settled — but the technology of recording music is a pretty recent innovation. So, copyright for sound recordings was only added to US copyright law in 1972.

Sometimes rights in composition and rights in sound recording belong to the same person. If you write and record your own music, you own all the rights. But often in the world of commercial music, multiple people are co-authors for the composition and sound recording, with one or two overlapping creators. For the sake of simplicity, let’s assume that all of these people are adequately represented by various agents, have signed all the right contracts, and are actually on speaking terms with each other.

Now, we can move on to the part that will make you want to blow your brains out.

So there are the rights in composition and the rights in sound recording, but after that, each of these components is subdivided into even more rights.

Radio does it differently

In 2018, there are a bajillion ways to listen to a song. Maybe you’re still using FM radio, maybe you prefer SiriusXM, maybe you’re a Spotify adherent, maybe you buy things through the iTunes store, or maybe you’re a hipster who only listens to records. Every one of these things has to license all the rights to songs in a completely different way.

FM radio and SiriusXM are treated completely differently from streaming services like Spotify, and even more bizarrely, they’re treated differently from each other. FM radio is considered a “public performance,” and it doesn’t pay royalties to the artists who record the songs, just the songwriters. Royalties for public performances of compositions are compulsory and are collected by a handful of organizations called performance royalty organizations (PROs). There are basically three big players in this field — BMI, ASCAP, and SESAC — which then redistribute the royalties to their member artists according to some mysterious formula. Theoretically, this is efficient and good for artists who can’t be bothered to chase down royalties, but it’s a little cartel-like since the PROs are actually private entities. And, well, they’ve been under an antitrust consent decree for decades, so someone at some point agreed with me that it seemed pretty cartel-like.

Does it make any sense for FM radio to pay songwriters but not the recording artists listeners actually know? No, but let’s move on.

Similar to the BMI / ASCAP / SESAC arrangement for composition rights, internet and satellite radio pay royalties for sound recording rights to a government-endorsed nonprofit called SoundExchange that calculates how much every artist is owed and automatically sends it to them.

Remember Pandora as it used to be? How you couldn’t just play any song you wanted? All of those weird restrictions, like the limited number of skips? That had to do with the licensing regime Pandora fell under. Pandora threw in all of those restrictions so it could qualify as internet radio, and therefore only deal with a couple of middlemen that then dealt with individual artists.

Spotify is in a different place altogether because it allows you to play the exact song or album you want to, at any given time. Spotify can’t just use BMI or SoundExchange as an easy clearinghouse for rights. It has to get licenses for both rights in recording and rights in composition from everyone.

The extremely wild thing to take away here is that internet radio and satellite radio are treated completely differently from a streaming service like Spotify. They might all seem like streaming services to you, but legally they’re totally different.


When it comes to sound recordings, Spotify has to negotiate with individual labels and artists. But when it comes to the rights in composition, it pays mechanicals (the compulsory, automatically prenegotiated rates mentioned earlier). Rates are currently set at 9.1 cents per composition or 1.75 cents per minute, whichever is more.
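
The arithmetic behind “whichever is more” is straightforward. Here is a quick illustrative sketch in Python, using the per-minute-or-fraction rounding the statute applies to songs over five minutes (per the fuller statement of the formula quoted later in this piece):

```python
import math

def mechanical_royalty_cents(duration_seconds: float) -> float:
    """Statutory US mechanical rate for physical copies and downloads:
    9.1 cents per copy, or, for compositions over five minutes,
    1.75 cents per minute (or fraction thereof), whichever is greater."""
    minutes = math.ceil(duration_seconds / 60)  # a fraction counts as a full minute
    return max(9.1, 1.75 * minutes)

# A 3:30 pop single earns the flat 9.1 cents per copy pressed or downloaded:
print(mechanical_royalty_cents(210))  # 9.1
# An 8:02 album cut is billed per minute instead: 9 minutes x 1.75 = 15.75 cents.
print(mechanical_royalty_cents(482))  # 15.75
```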

Record companies pay mechanicals to songwriters. So every time a CD gets pressed with Cyndi Lauper’s classic “Girls Just Wanna Have Fun,” songwriter Robert Hazard receives that mechanical royalty. The recording industry has been dealing with mechanical licenses forever and is theoretically familiar with the ins and outs of locating composers and making sure they get their compulsory licensing fees.

Perhaps for that reason, the iTunes store doesn’t pay mechanicals directly: instead, Apple pays record companies, which are then supposed to pay the songwriters. You can think of iTunes as a sort of extension of the record industry — another layer of distribution that branches right off the labels.

But Spotify took a completely different route. Instead of foisting the work off onto the record labels, Spotify is on the hook for making sure songwriters get their mechanicals. There’s a good reason why, of course: the iTunes store and Spotify work in very different ways.

Consider this: once you buy a CD, you have the CD. Once you buy a track from iTunes, you have the file. The various licenses, including the mechanical license, are bought and paid for, and you own something.

When you listen to music through Spotify, you don’t own the song, even though you might be able to listen to it at any time. The moment Jay Z yanks Watch the Throne from Spotify, you just don’t have it anymore. That 9.1 cent fee per composition makes sense when you’re pressing a single CD, but it doesn’t have any meaningful application to on-demand streaming.

So when it comes to mechanical licenses, streaming services like Spotify fall under a completely different set of fees set by something called the Copyright Royalty Board, which is part of the Library of Congress.

The new rates for interactive streaming

As of this year, services like Spotify are facing a formula that is changing every year from now until 2022. In 2018, they’ll be paying 11.4 percent of revenue or 22.0 percent of total content cost, whichever is greater. This keeps ticking up over the course of five years, at which point the Copyright Royalty Board will issue a new ruling.

In 2019, 12.3 percent of revenue or 23.1 percent of total content cost
In 2020, 13.3 percent of revenue or 24.1 percent of total content cost
In 2021, 14.2 percent of revenue or 25.2 percent of total content cost
In 2022, 15.1 percent of revenue or 26.2 percent of total content cost
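
Reduced to code, the headline schedule is just a greater-of lookup. A minimal sketch (a simplification by assumption: the real CRB determination layers on per-subscriber floors and other conditions not modeled here):

```python
# Headline CRB rates for interactive streaming, 2018-2022, as percentages.
RATES = {
    2018: (11.4, 22.0),
    2019: (12.3, 23.1),
    2020: (13.3, 24.1),
    2021: (14.2, 25.2),
    2022: (15.1, 26.2),
}

def royalty_pool(year: int, revenue: float, total_content_cost: float) -> float:
    """Greater of X% of service revenue or Y% of total content cost
    (roughly, what the service pays labels for sound recordings)."""
    rev_pct, tcc_pct = RATES[year]
    return max(revenue * rev_pct / 100, total_content_cost * tcc_pct / 100)

# E.g., a service with $100M revenue that paid $60M to labels in 2018:
# max($11.4M, $13.2M) -> a $13.2M pool owed on the composition side.
print(royalty_pool(2018, 100e6, 60e6))  # 13200000.0
```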


Every five years, a bunch of judges decide the fair rate for all songwriters and set fees for various scenarios. It’s not just that streaming services have to follow a certain rate. If your service offers “conditional downloads,” you get another rate and you get treated differently based on whether you’re supported by subscriptions or ads. And if you thought “9.1 cents per composition or, if a composition is longer than 5 minutes, 1.75 cents per minute” sounded complicated, streaming services have to abide by a set of formulas often calculated as percentages of revenue. For the time period that Wixen is suing over, Spotify would have owed the songwriters something like “10.5% of revenue minus PRO payments,” depending on which formula got applied.

So what Spotify owes to songwriters is set by regulation that’s negotiated every five years in front of a panel of administrative judges. And that means Spotify knows exactly how much it’s supposed to pay to music publishers. And that money is being paid... somewhere. We’re not sure. The publishers aren’t sure. In fact, Spotify might not be sure.

That’s where the Wixen lawsuit comes in.

Just like BMI and ASCAP are more or less the only game in town for compulsory licenses for public performances of compositions (e.g., radio play), the Harry Fox Agency (HFA) is more or less the place you go to get mechanical licenses from songwriters. If there’s something like a phone book for all the songwriters in the country, it’s HFA. And if the composer isn’t represented by HFA, HFA is supposed to go out and find them so they can get their money.

This is the most baffling part of the Wixen lawsuit. Wixen claims that “Spotify knew that HFA did not possess the infrastructure to obtain the required mechanical licenses and Spotify knew it lacked these licenses.”

It’s ironic: HFA is pretty much the agency for the job, and on top of that, HFA was founded by the National Music Publishers Association (NMPA) — a trade organization that represents songwriters’ interests — back in 1927. But it’s not untrue that HFA’s efficiency is somewhat questionable. Every one of these big clearinghouses for music rights — like BMI and ASCAP — is like that. After Paul McCartney signed up with a company called Kobalt to administer his rights, his lawyer told The New York Times that McCartney had suddenly seen a 25 percent increase in how much money was collected.

Legally speaking, the lawsuit isn’t about whether Spotify is supposed to pay “10.5% of revenue minus PRO payments” and whether it was willing to do so. It’s about whether it sent along a piece of paper to a songwriter’s last known address letting them know that they were going to get paid. And because it supposedly didn’t, Wixen is asking for $150,000 in statutory damages per song. That’s an expensive piece of missing paper — totaled up, it’s why the lawsuit is for $1.6 billion.

The law does allow Spotify to file its notice of intent with the Copyright Office if it can’t find the rights holder, and it’s not clear from the lawsuit whether that happened and whether that was supposed to be HFA’s job. (Spotify did not return requests for comment.) It’s possible that something was filed at the Copyright Office and notice still never made its way to the songwriters. (About 45 million notices of intent have been filed at the Copyright Office since 2016 when the process first became available.)

It’s almost like this whole thing could be automated and it isn’t because we can’t have nice things.

The endless war over mechanicals

The Wixen lawsuit isn’t the first of its kind. It’s a breakaway lawsuit after a 2017 class action settlement with Spotify where $43.4 million was set aside to compensate songwriters who didn’t receive royalties. And that suit came in the aftermath of Spotify’s $30 million settlement with the National Music Publishers Association (NMPA) in 2016.


The $43.4 million fund from the 2017 settlement, Wixen claims, is simply not enough. A similar lawsuit filed in August 2017 also objected to the $43.4 million settlement, calling it an “empty gesture that encourages infringement and is entirely insufficient to remedy years of illegal activity.”

The headache around mechanical licensing didn’t start with tech companies. In 2009, the NMPA and RIAA reached a settlement over mechanical licenses that record companies had not paid to songwriters — likely for the same bureaucratic reasons that Spotify is currently struggling with. The irony there is that the three biggest members of both the NMPA and the RIAA are the same companies: Sony, Warner, and Universal. The fight over mechanical licensing is the left hand and the right hand slapping each other, forever.

Why are there three different kinds of clearinghouses while other rights are still negotiated on a case-by-case basis? And we’re just talking about music here — we’re not talking about books, movies, short video clips, or photography. Music is just one slice of copyright law, and that slice is an Escher-esque hellscape of percentages and if-then conditionals.

Centralized clearinghouses like SoundExchange, ASCAP, and HFA (to some extent) are what are known as “collection societies.” In other countries, particularly in Europe, collection societies are much more popular and cover many different kinds of industries. In general, the trend in other countries is to group music rights together into one collection society, rather than split them up into several different ones divided by type of copyright and type of distribution.

And yes, there are some horror stories out of these systems — waste, inefficiency, and bureaucratic corruption. But no one can look at the US hybrid free market / collective system and say, in good faith, that it’s all working out. In the end, artists just want to make music and get a check at the end of the quarter while someone else in a suit does the work of chasing down royalties from 20 different places.

In 2018, streaming companies know with precision how many people are listening to which songs. Databases of artists and how much they are owed are updated regularly. And yet, in this unprecedented age of information and automation, it’s only become more difficult and more complicated to get money to the people who are owed it. Everywhere else, the digital revolution is supposed to be streamlining old processes; when it comes to music, the logistics have only gotten more convoluted.

The paradox has to do with the unique position of music copyright. More than any other kind of copyright, music copyright law has suffered at the hands of technological change. With every new innovation — from player pianos to cassette players to internet radio — legislators have tacked on some new patch to “fix” music copyright, creating an increasingly untenable monstrosity of flapping bits held together with staples and Scotch tape. And while this mess is barely understandable by the average consumer, it pops out in one fairly obtrusive way: music streaming is dominated by a handful of giants because only a giant can deal with the legal mess. Anyone can open a record store (although good luck getting any foot traffic), but if you want to launch a streaming service, you’d need billions of dollars and lots of lawyers to fend off lawsuits like Wixen v. Spotify.

Part of the Wixen lawsuit has to do with the introduction of the Music Modernization Act by Rep. Doug Collins (R-GA) earlier this year. One of the things the MMA would do is create a Mechanical Licensing Collective, a collection society that acts as the official middleman for mechanical licenses for digital services — like SoundExchange, but for mechanicals. It would also allow the Copyright Royalty Board to set different mechanical rates for different songs based on market value: instead of the same flat fee for every song, more “valuable” songs could command higher mechanicals than others.

The MMA does something else: it prevents lawsuits like Wixen v. Spotify. If a streaming service sets aside the money it’s trying to allocate to a songwriter it can’t find, it can’t be sued later on for not finding the songwriter.

And for once in the history of the world, a proposed bill has met with the approval of the record labels and the tech companies. The MMA has the support of the RIAA, the National Music Publishers Association, the various performance royalties organizations, and the Digital Music Association, a trade organization that represents Spotify, YouTube, Amazon, Napster, and others. Both Spotify and Pandora have directly lauded the bill as well.

That’s how much this state of affairs sucks: the RIAA and Napster have managed to agree on something.

The bill has now been introduced in both the House and the Senate. The music industry — with all its various stakeholders, who are far more accustomed to suing each other than presenting a unified front — is hoping that Congress will push the button and turn the unholy disaster of music licensing into something slightly less unholy and somewhat less disastrous. But less controversial causes have failed to pass muster in the last year. Only time will tell.

In the meantime, we have Wixen vs. Spotify.
https://www.theverge.com/2018/3/14/1...ixen-explainer





CDs, Vinyl are Outselling Digital Downloads for the First Time Since 2011
Derek Hawkins

Digital downloads had a short run as the top-selling format in the music industry. It took until 2011, a decade after the original iPod came out, for their sales to surpass those of CDs and vinyl records, and they were overtaken by music streaming services just a few years later.

Now, digital downloads are once again being outsold by CDs and vinyl, according to the Recording Industry Association of America.

The RIAA released its 2017 year-end revenue report on Thursday, showing that revenue from digital downloads plummeted 25 percent to $1.3 billion over the previous year. Revenue from physical products, by contrast, fell just 4 percent to $1.5 billion.


Overall, the music industry grew for a second year straight. And with $8.7 billion in total revenue, it’s healthier than it has been since 2008, according to the report.

Nearly all of the growth was the result of the continued surge in paid music subscription services like Spotify and Apple Music. Those services grew by more than 50 percent to $5.7 billion last year and accounted for nearly two-thirds of the industry’s revenue. Physical media accounted for 17 percent, while digital downloads made up just 15 percent.

RIAA Chairman Cary Sherman called the industry’s recovery “fragile” in a Medium post Thursday.

“We’re delighted by the progress so far, but to put the numbers in context, these two years of growth only return the business to 60 percent of its peak size – about where it stood 10 years ago – and that’s ignoring inflation,” Sherman wrote. “And make no mistake, there’s still much work to be done to make this growth sustainable in the long term.”

The outlook for digital downloads is bleak. This is the third year in a row they’ve posted double-digit declines, according to the RIAA. And this is the first time since 2011 they’ve fallen behind physical music media. If the trend continues, they could wind up going the way of the eight-track tape, which was overtaken in the early 1980s by the cheaper and more compact cassette.

The situation isn’t very rosy for physical media either. CD shipments continued their years-long decline, falling 6 percent to $1.1 billion in 2017, according to the report.

But vinyl sales were up 10 percent to $395 million – a “bright spot among physical formats,” the RIAA noted. It’s a tiny fraction of the industry’s overall sales, but it was enough to convince Sony last year to start pressing LPs again after a 28-year hiatus.
https://www.mercurynews.com/2018/03/...me-since-2011/





Apple Is Secretly Developing Its Own Screens for the First Time
Mark Gurman

• The company has a secret manufacturing facility in California
• Apple Watch to be first Apple product with MicroLED technology

Apple Inc. is designing and producing its own device displays for the first time, using a secret manufacturing facility near its California headquarters to make small numbers of the screens for testing purposes, according to people familiar with the situation.

The technology giant is making a significant investment in the development of next-generation MicroLED screens, say the people, who requested anonymity to discuss internal planning. MicroLED screens use different light-emitting compounds than the current OLED displays and promise to make future gadgets slimmer, brighter and less power-hungry.

The screens are far more difficult to produce than OLED displays, and the company almost killed the project a year or so ago, the people say. Engineers have since been making progress and the technology is now at an advanced stage, they say, though consumers will probably have to wait a few years before seeing the results.

The ambitious undertaking is the latest example of Apple bringing the design of key components in-house. The company has designed chips powering its mobile devices for several years. Its move into displays has the long-term potential to hurt a range of suppliers, from screen makers like Samsung Electronics Co., Japan Display Inc., Sharp Corp. and LG Display Co. to companies like Synaptics Inc. that produce chip-screen interfaces. It may also hurt Universal Display Corp., a leading developer of OLED technology.

Display makers in Asia fell after Bloomberg News reported the plans. Japan Display dropped as much as 4.4 percent, Sharp tumbled as much as 3.3 percent and Samsung slid 1.4 percent. Shares in Apple were down 1.3 percent during early trading at 5:21 a.m. in New York.

Controlling MicroLED technology would help Apple stand out in a maturing smartphone market and outgun rivals like Samsung that have been able to tout superior screens. Ray Soneira, who runs screen tester DisplayMate Technologies, says bringing the design in-house is a “golden opportunity” for Apple. “Everyone can buy an OLED or LCD screen,” he says. “But Apple could own MicroLED.”

None of this will be easy. Mass producing the new screens will require new manufacturing equipment. By the time the technology is ready, something else might have supplanted it. Apple could run into insurmountable hurdles and abandon the project or push it back. It’s also an expensive endeavor.

Ultimately, Apple will likely outsource production of its new screen technology to minimize the risk of hurting its bottom line with manufacturing snafus. The California facility is too small for mass-production, but the company wants to keep the proprietary technology away from its partners as long as possible, one of the people says. “We put a lot of money into the facility,” this person says. “It’s big enough to get through the engineering builds [and] lets us keep everything in-house during the development stages.”

An Apple spokeswoman declined to comment.

Right now smartphones and other gadgets essentially use off-the-shelf display technology. The Apple Watch screen is made by LG Display. Ditto for Google’s larger Pixel phone. The iPhone X, Apple’s first OLED phone, uses Samsung technology. Phone manufacturers tweak screens to their specifications, and Apple has for years calibrated iPhone screens for color accuracy. But this marks the first time Apple is designing screens end-to-end itself.

The secret initiative, code-named T159, is overseen by executive Lynn Youngs, an Apple veteran who helped develop touch screens for the original iPhone and iPad and now oversees iPhone and Apple Watch screen technology.

The 62,000-square-foot manufacturing facility, the first of its kind for Apple, is located on an otherwise unremarkable street in Santa Clara, California, a 15-minute drive from the Apple Park campus in Cupertino and near a few other unmarked Apple offices. There, about 300 engineers are designing and producing MicroLED screens for use in future products. The facility also has a special area for the intricate process of “growing” LEDs.

Another facility nearby houses technology that handles so-called LED transfers: the process of placing individual pixels into a MicroLED screen. Apple inherited the intellectual property for that process when it purchased startup LuxVue in 2014.

About a year after that acquisition, Apple opened a display research lab (described internally as a “Technology Center”) in Taiwan. In a test to see if the company could pull off in-house display manufacturing, engineers in Taiwan first built a small number of LCD screens using Apple technology. They were assembled at the Santa Clara factory and retrofitted into iPhone 7 prototypes. Apple executives tested them, then gave the display team the go-ahead to move forward with the development of Apple-designed MicroLED screens.

The complexity of building a screen manufacturing facility meant it took Apple several months to get the California plant operational. Only in recent months have Apple engineers grown confident in their ability to eventually replace screens from Samsung and other suppliers.

In late 2017, for the first time, engineers managed to manufacture fully functional MicroLED screens for future Apple Watches; the company aims to make the new technology available first in its wearable computers. While still at least a couple of years away from reaching consumers -- assuming the company decides to proceed -- producing a functional MicroLED Apple Watch prototype is a significant milestone for a company that in the past designed hardware to be produced by others.

The latest MicroLED Apple Watch prototypes aren’t fully functioning wearables; instead the screen portion is connected to an external computer board. The screens are notably brighter than the current OLED Watch displays, and engineers have a finer level of control over individual colors, according to a person who has seen them. Executives recently approved continued development for the next two years, with the aim of shipping MicroLED screens in products.

It’s unlikely that the technology will reach an iPhone for at least three to five years, the people say. While the smartphone is Apple’s cash cow, there is precedent for new screen technologies showing up in the Apple Watch first. When it was introduced in 2014, the Apple Watch had an OLED screen. The technology finally migrated to the iPhone X last year.

Creating MicroLED screens is extraordinarily complex. Depending on screen size, they can contain millions of individual pixels. Each has three sub-pixels: red, green and blue LEDs. Each of these tiny LEDs must be individually created and calibrated. Each piece comes from what is known as a “donor wafer” and is then mass-transferred to the MicroLED screen. Early in the process, Apple bought these wafers from third-party manufacturers like Epistar Corp. and Osram Licht AG but has since begun “growing” its own LEDs to make in-house donor wafers. The growing process is done inside a clean room at the Santa Clara facility.
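
The scale of that transfer problem is easy to ballpark. A back-of-the-envelope sketch in Python, using an assumed watch-class resolution, since Apple has not published panel specs:

```python
# Rough count of how many individual LEDs must be grown, transferred from
# donor wafers, and calibrated for one watch-sized panel. The resolution
# below is a hypothetical placeholder, not an Apple specification.
width_px, height_px = 312, 390   # assumed watch-class resolution
subpixels_per_pixel = 3          # red, green, and blue LEDs per pixel

total_leds = width_px * height_px * subpixels_per_pixel
print(f"{total_leds:,} individual LEDs")  # 365,040 for this assumed panel
```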

Engineers at the facility are also assembling prototype MicroLED screens, right down to attaching the screen to the glass. The backplanes, an underlying component that electronically powers the displays, are developed at the Taiwan facility. Apple is also designing its own thin-film transistors and screen drivers, key components in display assemblies. Currently, the Santa Clara facility is capable of manufacturing a handful of fully operational Apple Watch-sized (under 2 inches diagonally) MicroLED screens at a time.

Until MicroLED is ready for the world to see, Apple will still -- at least publicly -- be all-in on OLED. The company plans to release a second OLED iPhone in the fall, a giant, 6.5-inch model, and is working to expand OLED production from Samsung to also include LG.

— With assistance by Debby Wu, and Ian King
https://www.bloomberg.com/news/artic...amsung-screens





'They'll Squash You Like a Bug': How Silicon Valley Keeps a Lid on Leakers

Working for a tech company may sound like all fun and ping-pong, but behind the facade is a ruthless code of secrecy – and retribution for those who break it
Olivia Solon

One day last year, John Evans (not his real name) received a message from his manager at Facebook telling him he was in line for a promotion. When they met the following day, she led him down a hallway praising his performance. However, when she opened the door to a meeting room, he came face to face with members of Facebook’s secretive “rat-catching” team, led by the company’s head of investigations, Sonya Ahuja.

The interrogation was a technicality; they already knew he was guilty of leaking some innocuous information to the press. They had records of a screenshot he’d taken, links he had clicked or hovered over, and they strongly indicated they had accessed chats between him and the journalist, dating back to before he joined the company.

“It’s horrifying how much they know,” he told the Guardian, on the condition of anonymity. “You go into Facebook and it has this warm, fuzzy feeling of ‘we’re changing the world’ and ‘we care about things’. But you get on their bad side and all of a sudden you are face to face with [Facebook CEO] Mark Zuckerberg’s secret police.”

The public image of Silicon Valley’s tech giants is all colourful bicycles, ping-pong tables, beanbags and free food, but behind the cartoonish facade is a ruthless code of secrecy. They rely on a combination of Kool-Aid, digital and physical surveillance, legal threats and restricted stock units to prevent and detect intellectual property theft and other criminal activity. However, those same tools are also used to catch employees and contractors who talk publicly, even if it’s about their working conditions, misconduct or cultural challenges within the company.

While Apple’s culture of secrecy, which includes making employees sign project-specific NDAs and covering unlaunched products with black cloths, has been widely reported, companies such as Google and Facebook have long put the emphasis on internal transparency.

Zuckerberg hosts weekly meetings where he shares details of unreleased new products and strategies in front of thousands of employees. Even junior staff members and contractors can see what other teams are working on by looking at one of many of the groups on the company’s internal version of Facebook.

“When you first get to Facebook you are shocked at the level of transparency. You are trusted with a lot of stuff you don’t need access to,” said Evans, adding that during his induction he was warned not to look at ex-partners’ Facebook accounts.

“The counterbalance to giving you this huge trusting environment is if anyone steps out of line, they’ll squash you like a bug.”

During one of Zuckerberg’s weekly meetings in 2015, after word of its new messaging assistant spread, the usually affable CEO warned employees: “We’re going to find the leaker, and we’re going to fire them.” A week later came the public shaming: Zuck revealed the culprit had been caught and fired. People at the meeting applauded.

“Companies routinely use business records in workplace investigations, and we are no exception,” said a Facebook spokeswoman, Bertie Thomson.

It’s a similar story at Google. Staff use an internal version of Google Plus and thousands of mailing lists to discuss everything from homeownership to items for sale, as well as social issues like neoconservatism and diversity. With the exception of James Damore’s explosive memo about gender and tech, most of it doesn’t leak.

By and large, staff buy into the corporate mission in a happy-clappy campus which helps foster a tribal mentality that discourages treachery. Employees are also rewarded with annual allocations of restricted stock that can buy silence for years after leaving.

“You would never do something that screws up the company’s chance of success because you are directly affected by it,” said former Googler Justin Maxwell, who noted the pressure to behave in a “Googley” way.

The search engine’s former head of investigations, Brian Katz, highlighted this in 2016 in a company-wide email titled: “Internal only. Really.”

“If you’re considering sharing confidential information to a reporter – or to anyone externally – for the love of all that’s Googley, please reconsider! Not only could it cost you your job, but it also betrays the values that makes [sic] us a community,” he wrote.

This email came to light after another former employee sued Google for its overzealous approach to preventing leaks using overly broad confidentiality agreements and getting employees to spy on and report each other. The legal complaint alleges that Google’s policies violate labour laws that allow employees to discuss workplace conditions, wages and potential legal violations inside the company. Both parties are scheduled to enter mediation later this year.

James Damore, the software engineer who was fired from Google after writing a controversial memo questioning diversity programmes, suspects he was being monitored by the company during his final days.

He also described “weird things” happening to his work phone and laptop after the memo went viral. “All the internal apps updated at the same time, which had never happened before. I had to re-sign in to my Google account on both devices and my Google Drive – where the document was – stopped working.”

Damore said that much of the spying capabilities were outlined in his contract and that it was mostly “necessary” for a company that gives “everyone access to secret things”.

After he was fired, Damore stopped using his personal Gmail account in favour of Yahoo email out of fear that Google might be spying on him. “My lawyer doesn’t think they are above doing that,” he said.

It’s not implausible: Microsoft read a French blogger’s Hotmail account in 2012 to identify a former employee who had leaked trade secrets.

However, a Google spokeswoman said the company never reads personal email accounts and denied spying on Damore’s devices.

“I wouldn’t expect them to admit to it,” Damore said.

Since Damore’s memo, Google has become much leakier, particularly around internal discussions of racial and gender diversity.

“It’s a cry for help internally,” said another former Googler, who now runs a startup.

He said people at Google had for years put up with covert sexism, internal biases or, in his case, a manager with anger management problems. “No one would do anything until one day a VP saw the guy yelling at me in the hallway.

“People have been dealing with this stuff for years and are finally thinking ‘if Google isn’t going to do something about it, we’re going to leak it’.”

For low-paid contractors who do the grunt work for big tech companies, the incentive to keep silent is more stick than carrot. What they lack in stock options and a sense of corporate tribalism, they make up for in fear of losing their jobs.

One European Facebook content moderator signed a contract, seen by the Guardian, which granted the company the right to monitor and record his social media activities, including his personal Facebook account, as well as emails, phone calls and internet use. He also agreed to random personal searches of his belongings including bags, briefcases and car while on company premises. Refusal to allow such searches would be treated as gross misconduct.

Following Guardian reporting into working conditions of community operations analysts at Facebook’s European headquarters in Dublin, the company clamped down further, he said.

Contractors would be questioned if they took photographs in the office or printed emails or documents. “On more than one occasion someone would print something and you’d find management going through the log to see what they had printed,” said one former worker.

Security teams would leave “mouse traps” – USB keys containing data that were left around the office to test staff loyalty. “If you find a USB or something you’d have to give it in straight away. If you plugged it into a computer it would throw up a flare and you’d be instantly escorted out of the building.”

“Everyone was paranoid. When we texted each other we’d use code if we needed to talk about work and meet up in person to talk about it in private,” he said.

Some employees switch their phones off or hide them out of fear that their location is being tracked. One current Facebook employee who recently spoke to Wired asked the reporter to turn off his phone so the company would have a harder time determining whether it had been near the phones of anyone from Facebook.

Two security researchers confirmed that this would be technically simple for Facebook to do if both people had the Facebook app on their phone and location services switched on. Even if location services aren’t switched on, Facebook can infer someone’s location from wifi access points.
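
The researchers’ point is that co-location is trivial to compute once a server holds two devices’ location streams. Here is a minimal sketch of the idea in Python (hypothetical data structures and thresholds, not anything from Facebook’s actual systems):

```python
import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def co_located(pings_a, pings_b, max_m=50, max_dt=timedelta(minutes=5)):
    """True if any pair of pings puts the two devices within max_m meters
    of each other at roughly the same time."""
    return any(
        abs(ta - tb) <= max_dt and haversine_m(la, lo_a, lb, lo_b) <= max_m
        for ta, la, lo_a in pings_a
        for tb, lb, lo_b in pings_b
    )

t = datetime(2018, 3, 20, 13, 0)
alice = [(t, 37.4847, -122.1477)]
bob = [(t + timedelta(minutes=2), 37.4848, -122.1478)]
print(co_located(alice, bob))  # True: pings ~14 m and 2 minutes apart
```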

“We do not use cellphones to track employee locations, nor do we track locations of people who do not work at Facebook, including reporters,” said Thomson.

Companies will also hire external agencies to surveil their staff. One such firm, Pinkerton, counts Google and Facebook among its clients.

Among other services, Pinkerton offers to send investigators to coffee shops or restaurants near a company’s campus to eavesdrop on employees’ conversations.

“If we hear anything about a new product coming, or new business ventures or something to do with stocks, we’ll feed that information back to corporate security,” said David Davari, a managing director at the firm, adding that the focus is usually IP theft or insider trading.

Facebook and Google both deny using this service.

Through LinkedIn searches, the Guardian found that several former Pinkerton investigators were subsequently hired by Facebook, Google and Apple.

“These tools are common, widespread, intrusive and legal,” said Al Gidari, consulting director of privacy at the Stanford Center for Internet and Society.

“Companies are required to take steps to detect and deter criminal misconduct, so it’s not surprising they are using the same tools to make sure employees are in compliance with their contractual obligations.”
https://www.theguardian.com/technolo...llance-leakers





Next Worry for Facebook: Disenchanted Users

Rising number of users claim to be abandoning social media giant, prompting a warning from some analysts that its growth could slow
Kirsten Grind

Facebook Inc.’s handling of user data has upset lawmakers and regulators in multiple countries. But the biggest risk to its business could come from angry users.

Throughout previous controversies in recent years, Facebook’s user population has climbed steadily, providing the critical base that draws an ever growing gusher of advertising revenue.

Now Facebook is contending with a groundswell of users—some of whom are tweeting under the hashtag #DeleteFacebook—who claim to be abandoning the social media giant, prompting some analysts to warn that its growth juggernaut could sputter.

“The biggest issue we see for Facebook is if #DeleteFacebook leads to user attrition and eventually ad dollars allocated elsewhere,” Barclays analysts said in a research note Tuesday. The public backlash also could impinge on Facebook’s ability to recruit talented engineers, they said.

Late Wednesday, financial services firm Stifel slashed its target price for Facebook shares to $168 from $195, saying, “Facebook’s current plight reminds us of eBay in 2004—an unstructured content business built on trust that lost that trust prior to implementing policies to add structure and process.”

The latest crisis began late Friday when Facebook said it was looking into reports that analytics firm Cambridge Analytica, which worked with the Donald Trump campaign in 2016, improperly accessed data from its platform on tens of millions of users, and retained the data even after it had agreed to delete it. Cambridge Analytica said it followed Facebook policies.

The controversy knocked a total of 9% off Facebook’s stock price Monday and Tuesday, erasing $50 billion in market value, before shares rebounded 0.7% on Wednesday. The stock fell again by more than 1% in early trading Thursday. Facebook is facing legislative inquiries on two continents and an investigation by the Federal Trade Commission.

Chief Executive Mark Zuckerberg broke his silence on the issue Wednesday, admitting mistakes and pledging an investigation and improvements to user-data policies. “We have a responsibility to protect your data, and if we can’t, then we don’t deserve to serve you,” Mr. Zuckerberg wrote in a Facebook post.

It’s possible the tensions will ebb. For now, though, those reassurances weren’t enough for some users who already were frustrated by Facebook’s handling of Russian interference on the platform around the 2016 U.S. election.

Sabine Stanley, a 42-year-old professor at Johns Hopkins University, says she had been thinking about deleting her Facebook account for months as the company battled one crisis after another, but the revelation about Cambridge Analytica and Facebook’s slow response pushed her over the edge.

“You combine that with the election scandal, and I decided I couldn’t support Facebook anymore,” says Ms. Stanley, who also deleted her account on Facebook’s Instagram app.

A Facebook spokeswoman declined to comment.

The number of people world-wide who use Facebook at least once a month has more than doubled since it went public in 2012, hitting 2.13 billion in the fourth quarter. Revenue and profit have grown even faster, thanks to Facebook’s use of its wealth of data to help advertisers target their messages to those users.

Warning signs began appearing last year, as anger rose in the U.S. over Facebook’s lax controls over misinformation and abuse on its platform. An earlier Pivotal Research Group analysis of Nielsen data found that Facebook’s U.S. users spent 7% less time on the site in August than a year earlier, and 4.7% less time in September.

Last month, Facebook said its users collectively spent 5% less time on the platform a day in the past three months of 2017, translating to a little more than two minutes per day, per user.

The company also said it experienced its first-ever quarter-to-quarter drop in the number of people who log in daily in its most lucrative market, the U.S. and Canada, where Facebook lost about 700,000 daily users out of 184 million overall. Facebook said the decline was a blip and that the figure was likely to fluctuate given Facebook’s broad reach in the region.

Analysts are also concerned that users will use Facebook less. Brian Wieser, an analyst with New York-based Pivotal Research, says he expects the Cambridge Analytica issue to reduce the amount of time users spend on Facebook by 10% to 15%.

“The biggest, most concerning thing here is the scale of this problem,” Mr. Wieser says. “All the operational failures indicate a real management problem.”

Gabrielle Estres, a 34-year-old industrial adviser in London, deleted her Facebook account this week after the recent data issues at the company, but said even before that she had been using it less.

“It was this super cool thing and now at this point it’s more the thing that always reminds me of birthdays,” she said. “The threshold of deleting your account is not that high anymore.”

Other users are more circumspect: “#DeleteFacebook isn’t the answer,” one Facebook user said on Twitter. “You just have to be smart when you use them.”

So far, there’s little indication that advertisers have changed their plans because of the latest furor—though some prominent executives have criticized the firm in recent months. Investors will get their next glimpse of Facebook’s performance with its first-quarter report more than a month from now.

“If time goes on and it appears they still seem disconnected from how users feel, then they might have a problem,” says Colin Sebastian, a senior research analyst at Robert W. Baird & Co. in San Francisco. “We can see now they’re in crisis management mode, which is a good thing.”
https://www.wsj.com/articles/next-wo...ers-1521717883





Schools Are Spending Millions on High-Tech Surveillance of Kids
Sidney Fussell

Advanced surveillance technologies once reserved for international airports and high-security prisons are coming to schools across America. From New York to Arkansas, schools are spending millions to outfit their campuses with some of the most advanced surveillance technology available: face recognition to deter predators, object recognition to detect weapons, and license plate tracking to deter criminals. Privacy experts are still debating the usefulness of these tools, whom they should be used on, and whom they should not, but school officials are embracing them as a way to save lives in times of crisis.

On Monday, the Magnolia School Board in Magnolia, Arkansas, approved $287,217 for over 200 cameras at two schools. According to the Magnolia Reporter, the camera system will be capable of “facial recognition and tracking, live coverage, the ability to let local law enforcement tap into the system in the event of a school situation, infrared capability and motion detection.”

And they aren’t the only ones. Earlier this month, the Lockport City School District announced it was installing new cameras outfitted with both face recognition and object recognition software. According to the software’s maker, faces can be matched against a database of gang members, fired employees, and sex offenders, while the object recognition tech can look for weapons and other prohibited objects.

“It is cutting edge. We’re hoping to be a model [for school security],” Dr. Robert LiPuma, the district’s director of technology, told the Niagara Gazette. The paper reports the school district plans to spend “nearly all” of a $4 million state grant on new high-tech security measures at eight schools.

Similarly, license plate reading (LPR) cameras are coming to the Randolph Central School in New York, which spent half of a $1.07 million state bond allocation on high-tech security upgrades.

LPR cameras match license plate numbers against national databases. They’re a quick way for law enforcement to know if a car has been stolen or if the owner is wanted for arrest, but they also provide a wealth of information on where people go. If you’re not a suspect in a crime, cops can’t follow you around in your car all day. But with a series of LPR cameras, officers could map where you’ve traveled all day, gaining essentially the same information without ever having to seek a warrant.
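
To see how little work that mapping takes, here is a toy sketch in Python with made-up camera reads (hypothetical data for illustration; nothing here reflects any specific vendor’s system):

```python
from collections import defaultdict
from datetime import datetime

# Each camera read: (plate, camera location, timestamp). With enough fixed
# cameras, sorting the reads per plate reconstructs a day's movements,
# with no warrant and no physical tail required.
reads = [
    ("ABC1234", "Main St & 3rd", datetime(2018, 3, 20, 8, 5)),
    ("XYZ9876", "Highway 9 on-ramp", datetime(2018, 3, 20, 8, 7)),
    ("ABC1234", "School Dr entrance", datetime(2018, 3, 20, 8, 22)),
    ("ABC1234", "Main St & 3rd", datetime(2018, 3, 20, 17, 40)),
]

trajectories = defaultdict(list)
for plate, location, ts in reads:
    trajectories[plate].append((ts, location))

for ts, location in sorted(trajectories["ABC1234"]):
    print(ts.strftime("%H:%M"), location)
# 08:05 Main St & 3rd
# 08:22 School Dr entrance
# 17:40 Main St & 3rd
```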

Privacy and civil liberties experts are concerned, however, that the push to include biometric and location tracking security will have unforeseen consequences.

“Schools are justified in thinking about safety, both in terms of gun violence and other possible hazards,” Rachel Levinson, senior counsel at the Brennan Center for Justice, told Gizmodo. “At the same time, these technologies do not exist in a vacuum; we know, for instance, that facial recognition is less accurate for women and people of color, and also that school discipline is imposed more harshly on children of color.”

The technology isn’t foolproof. A study in February found that several face recognition systems had significantly higher failure and misidentification rates when used on dark-skinned and female faces, echoing earlier studies about the accuracy of such software.

Similarly, the databases people would be matched against are unreliable. People are frequently added to gang databases based on suspicion, without any gang-related convictions or even arrests. A 2016 audit found California police had added dozens of toddlers less than a year old to its CalGang database.

“Any school or school district considering adopting these kinds of technologies must address these issues head on,” Levinson said, “involve parents and the school community at large in any decision-making, and be fully transparent about how information gathered is used, retained, or shared, particularly with law enforcement or school resource officers.”

Additionally, undocumented and immigrant parents may have reason to worry about LPR implementation in schools, as ICE was recently granted access to one nationwide LPR database. If LPR cameras come to schools across the country, these parents might have legitimate fears about being targeted by ICE when dropping off their kids.

Ultimately, when schools turn to surveillance as a public safety tool, they’re also taking on the muddled issues of privacy, race, and fairness that come with it. Technological proposals to protect students come with the same promises as those in the public sphere: faster, more accurate systems capable of larger-scale identification. But they come with the same problems of privacy and power.
https://gizmodo.com/schools-are-spen...nce-1823811050





A “Tamper-Proof” Currency Wallet just got Trivially Backdoored by a 15-Year-Old

Backdoor allows attacker to recover private keys stored on Ledger hardware wallets.
Dan Goodin

For years, executives at France-based Ledger have boasted that their specialized hardware for storing cryptocurrencies is so securely designed that resellers or others in the supply chain can't tamper with the devices without it being painfully obvious to end users. The reason: "cryptographic attestation" that uses unforgeable digital signatures to ensure that only authorized code runs on the hardware wallet.

"There is absolutely no way that an attacker could replace the firmware and make it pass attestation without knowing the Ledger private key," officials said in 2015. Earlier this year, Ledger's CTO said attestation was so foolproof that it was safe to buy his company's devices on eBay.

On Tuesday, a 15-year-old from the UK proved these claims wrong. In a post published to his personal blog, Saleem Rashid demonstrated proof-of-concept code that had allowed him to backdoor the Ledger Nano S, a $100 hardware wallet that company marketers say has sold in the millions. The stealth backdoor Rashid developed is a minuscule 300 bytes long and causes the device to generate pre-determined wallet addresses and recovery passwords known to the attacker. The attacker could then enter those passwords into a new Ledger hardware wallet to recover the private keys the old backdoored device stores for those addresses.

Using the same approach, attackers could perform a variety of other nefarious actions, including changing wallet destinations and amounts for payments so that, for instance, an intended $25 payment to an Ars Technica wallet would be changed to a $2,500 payment to a wallet belonging to the backdoor developer. The same undetectable backdoor works on the $200 Ledger Blue, which is billed as a higher-end device. Variations on the exploit might also allow so-called "evil maid attacks," in which people with brief access to the device could compromise it while they clean a user's hotel room.

Two weeks ago, Ledger officials updated the Nano S to mitigate the vulnerability Rashid privately reported to them in November. In the release notes for firmware version 1.4.1, however, Ledger Chief Security Officer Charles Guillemet stressed the vulnerability was "NOT critical." In a deeper-dive into the security fix published Tuesday, Guillemet said the "attack cannot extract the private keys or the seed," an assertion that Rashid has publicly challenged as incorrect.

Guillemet also said Ledger can detect backdoored wallets if they connect to the Ledger server using a device manager to load applications or update the firmware. He said he had no estimate for when the same vulnerability in the Ledger Blue would be patched. "As the Blue has been distributed almost exclusively through direct sales, the probability to run the 'shady reseller scam' is negligible," he said. Meanwhile, the company post saying there is "absolutely no way" firmware can be replaced on Ledger devices remains online.

A fundamentally hard problem

Rashid said he has yet to verify that this month's Nano S update fully neutralizes his proof-of-concept backdoor exploit as claimed by Ledger. But even if it does, he said he believes a key design weakness in Ledger hardware makes it likely his approach can be modified so that it will once again work. Specifically, the Ledger Blue and Nano S rely on the ST31H320 secure microcontroller from STMicroelectronics to provide the cryptographic attestation that the device is running authorized firmware. The secure microcontroller doesn't support displays, USB connections, or high-throughput communications, so Ledger engineers added a second general-purpose microcontroller, the STM32F042K6, to serve as a proxy.

The secure microcontroller, which Ledger calls the Secure Element, communicates directly with the general-purpose microcontroller, which Ledger calls the MCU. The MCU, in turn, communicates with the rest of the hardware wallet, including its USB host, built-in OLED display, and device buttons users press to control various wallet functions. In a nutshell, Rashid's exploit works by replacing the genuine firmware with unauthorized code while at the same time causing the MCU to send the Secure Element the official firmware image.

Matt Green, a Johns Hopkins University professor specializing in encryption security, has reviewed Rashid's research. Green told Ars the dual-chip design makes him skeptical that this month's update permanently fixes the weakness Rashid exploited.

"Ledger is trying to solve a fundamentally hard problem," he explained. "They need to check the firmware running on a processor. But their secure chip can't actually see the code running on that processor. So they have to ask the processor to supply its own code! Which is a catch-22, since that processor might not be running honest code, and so you can't trust what it gives you. It's like asking someone who may be a criminal to provide you with their full criminal record—on the honor system."

The difficulty of solving the problem is in stark contrast to the confidence Ledger marketers profess in guaranteeing the security of the devices. In addition to the tamper-proof assurances mentioned earlier, the company includes a leaflet with each device. It reads: "Did you notice? There is no anti-tampering sticker on this box. A cryptographic mechanism checks the integrity of your Ledger device's internal software each time it is powered on. The Secure Element chip prevents any interception or physical replacement attempt. Ledger devices are engineered to be tamper-proof."

Creative and devastating

To be fair, Ledger engineers took steps to prevent the MCU from being capable of misrepresenting to the Secure Element the code that's running on the device. The Secure Element requires the MCU to pass along the entire contents of its flash memory. At the same time, the MCU has a relatively limited amount of flash memory. To sneak malicious code onto a hardware wallet, the MCU would, in theory, have to store both the official Ledger firmware and the malicious code. Generally speaking, the storage capacity of the MCU should prevent this kind of hack from working.

Rashid got around this challenge after noticing that the MCU stores both a bootloader and firmware, and that certain software functions called "compiler intrinsics" were identical in these separate programs. He then removed the intrinsics in the firmware and replaced them with his ultra-small malicious payload. When the Secure Element asked the MCU for its flash contents—which, of course, included the unauthorized firmware—Rashid's hack pieced together a legitimate image by removing the malicious code and replacing it with the legitimate intrinsics from the bootloader. As a result, the Secure Element mistakenly verified the backdoored firmware as genuine.

The result was a device that generated wallet addresses and recovery passwords that weren't random but, rather, were entirely under the control of the backdoor developer. The 24 passwords, technically known as a recovery seed, are used in the event a hardware wallet is lost or broken. By entering the seed into a new device, the private keys for the wallet addresses stored in the old device are automatically restored.

A video accompanying Rashid's blog post shows a device displaying the word "abandon" for the first 23 recovery passwords and "art" for the remaining one. A malicious backdoor could provide a recovery seed that appeared random to the end user but was entirely known to the developer.

"He's carving up the firmware in a really efficient way to fit it into a tiny amount of space to pull off the attack here," said Kenn White, an independent researcher who reviewed Rashid's research before it was published. "It's well done, it's clever, it's creative, and it's devastating."

Rashid told Ars that it might have been possible for his backdoor to do a variety of other nefarious things. He also said the weaknesses could be exploited in evil-maid scenarios in which someone has brief access to the device and possibly by malware that infects the computer the device is plugged into. Researchers are usually quick to point out that physical access and malware-infected computers are, by definition, compromises on their own and hence shouldn't be considered a valid means for compromising the hardware wallets. The chief selling point of hardware wallets, however, is that they protect users against these fatal events.

Rashid declined to provide much personal information to Ars other than to say he's 15, lives in the south part of the UK, and is a self-taught programmer. Neither White nor Green said they verified Rashid's age, but they also said they had no reason to doubt it.

"I'd be heartbroken if he wasn't 15," Green said. "He's one of the most talented 15-year-olds I've ever talked to. Legitimate hacking genius. If he turns out to be some 35-year-old, he'll still be legit talented, but my faith in humanity will be shaken."
https://arstechnica.com/information-...a-15-year-old/





Firefox Master Password System Has Been Poorly Secured for the Past 9 Years
Catalin Cimpanu

For the past nine years, Mozilla has been using an insufficiently strong encryption mechanism for the "master password" feature.

Both Firefox and Thunderbird allow users to set up a "master password" through their settings panel. This master password plays the role of an encryption key used to encrypt each password string the user saves in their browser or email client.

Experts have lauded the feature because up until that point browsers would store passwords locally in cleartext, leaving them vulnerable to malware or attackers with physical access to a victim's computer.

But Wladimir Palant, the author of the AdBlock Plus extension, says the encryption scheme used by the master password feature is weak and can be easily brute-forced.

Master password encryption uses a low SHA-1 iteration count

"I looked into the source code," Palant says, "I eventually found the sftkdb_passwordToKey() function that converts a [website] password into an encryption key by means of applying SHA-1 hashing to a string consisting of a random salt and your actual master password."

"Anybody who ever designed a login function on a website will likely see the red flag here," Palant says.

The flag Palant is referring to is the fact that the SHA-1 function has an iteration count of 1, meaning it's applied just once. Industry practice regards 10,000 iterations as a solid minimum for this value, and applications like LastPass use values of 100,000.

This low iteration count makes it incredibly easy for an attacker to brute-force the master password and later decrypt the encrypted passwords stored inside the Firefox or Thunderbird databases.

Palant points to recent advances in GPU technology that now allow attackers to brute-force simplistic master passwords in under a minute.

Issue first reported nine years ago

But Palant wasn't the first to notice this weakness. A Mozilla bug tracker entry filed by Justin Dolske nine years ago, soon after the master password feature's launch, reported the same issue.

Dolske also pointed to the low iteration count of 1 as the master password's main problem. But despite the report, Mozilla did not take any official action for years.

It was only this past week, after Palant revived the original bug report, that Mozilla finally provided an official answer, suggesting the issue would be fixed with the launch of Firefox's new password manager component —currently codenamed Lockbox and available as an extension.

Using a master password is much better than the alternative of not using one. For the time being, choosing longer and more complex master passwords mitigates the feature's inherent weak encryption scheme. Users who want to be sure nobody can touch their web passwords should use a third-party password manager application.

The optimal solution, according to Palant, would be for Mozilla engineers to employ the Argon2 password-hashing function instead of SHA-1.
https://www.bleepingcomputer.com/new...-past-9-years/





Telegram Loses Bid to Block Russia From Encryption Keys
Ilya Khrennikov

• Messaging service plans to appeal Russian court’s decision
• Regulators could block Telegram service if it fails to comply

Telegram, the encrypted messaging app that’s prized by those seeking privacy, lost a bid before Russia’s Supreme Court to block security services from getting access to users’ data, giving President Vladimir Putin a victory in his effort to keep tabs on electronic communications.

Supreme Court Judge Alla Nazarova on Tuesday rejected Telegram’s appeal against the Federal Security Service, the successor to the KGB spy agency, which last year asked the company to share its encryption keys. Telegram declined to comply and was hit with a fine of $14,000. Communications regulator Roskomnadzor said Telegram now has 15 days to provide the encryption keys.

Telegram, which is in the middle of an initial coin offering of as much as $2.55 billion, plans to appeal the ruling in a process that may last into the summer, according to the company’s lawyer, Ramil Akhmetgaliev. Any decision to block the service would require a separate court ruling, the lawyer said.

“Threats to block Telegram unless it gives up private data of its users won’t bear fruit. Telegram will stand for freedom and privacy,” Pavel Durov, the company’s founder, said on his Twitter page.

Putin signed laws in 2016 on fighting terrorism, which included a requirement for messaging services to provide the authorities with means to decrypt user correspondence. Telegram challenged an auxiliary order by the Federal Security Service, claiming that the procedure doesn’t involve a court order and breaches constitutional rights to privacy, according to documents.

The security agency, known as the FSB, argued in court that obtaining the encryption keys doesn’t violate users’ privacy because the keys by themselves aren’t considered information of restricted access. Collecting data on particular suspects using the encryption would still require a court order, the agency said.

“The FSB’s argument that encryption keys can’t be considered private information defended by the Constitution is cunning,” Akhmetgaliev, Telegram’s lawyer, told reporters after the hearing. “It’s like saying, ‘I’ve got a password from your email, but I don’t control your email, I just have the possibility to control.’”

The court decision is intended to make one of the last holdouts among communications companies bow to Putin’s efforts to track electronic messaging. Durov in June registered the service with the state communications watchdog after it was threatened with a ban over allegations that terrorists used it to plot a suicide-bomb attack.

Telegram has more than 9.5 million users in Russia, according to researcher Mediascope. It raised $850 million from investors in February in a so-called initial coin offering and is trying to raise another $1.7 billion, according to company documents seen by Bloomberg News. Telegram plans to use the proceeds to build a blockchain network with built-in cryptocurrency Gram that could enable faster transactions than bitcoin.
https://www.bloomberg.com/news/artic...ncryption-keys





The NSA Worked to “Track Down” Bitcoin Users, Snowden Documents Reveal
Sam Biddle

Internet paranoiacs drawn to bitcoin have long indulged fantasies of American spies subverting the booming, controversial digital currency. Increasingly popular among get-rich-quick speculators, bitcoin started out as a high-minded project to make financial transactions public and mathematically verifiable — while also offering discretion. Governments, with a vested interest in controlling how money moves, would, some of bitcoin’s fierce advocates believed, naturally try to thwart the coming techno-libertarian financial order.

It turns out the conspiracy theorists were onto something. Classified documents provided by whistleblower Edward Snowden show that the National Security Agency indeed worked urgently to target bitcoin users around the world — and wielded at least one mysterious source of information to “help track down senders and receivers of Bitcoins,” according to a top-secret passage in an internal NSA report dating to March 2013. The data source appears to have leveraged the NSA’s ability to harvest and analyze raw, global internet traffic while also exploiting an unnamed software program that purported to offer anonymity to users, according to other documents.

Although the agency was interested in surveilling some competing cryptocurrencies, “Bitcoin is #1 priority,” a March 15, 2013 internal NSA report stated.

The documents indicate that “tracking down” bitcoin users went well beyond closely examining bitcoin’s public transaction ledger, known as the Blockchain, where users are typically referred to through anonymous identifiers; the tracking may also have involved gathering intimate details of these users’ computers. The NSA collected some bitcoin users’ password information, internet activity, and a type of unique device identification number known as a MAC address, a March 29, 2013 NSA memo suggested. In the same document, analysts also discussed tracking internet users’ internet addresses, network ports, and timestamps to identify “BITCOIN Targets.”

The agency appears to have wanted even more data: The March 29 memo raised the question of whether the data source validated its users, and suggested that the agency retained bitcoin information in a file named “Provider user full.csv.” It also suggested powerful search capabilities against bitcoin targets, hinting that the NSA may have been using its XKeyScore searching system, where the bitcoin information and wide range of other NSA data was cataloged, to enhance its information on bitcoin users. An NSA reference document indicated that the data source provided “user data such as billing information and Internet Protocol addresses.” With this sort of information in hand, putting a name to a given bitcoin user would be easy.

The NSA’s budding bitcoin spy operation looks to have been enabled by its unparalleled ability to siphon traffic from the physical cable connections that form the internet and ferry its traffic around the planet. As of 2013, the NSA’s bitcoin tracking was achieved through a program code-named OAKSTAR, a collection of covert corporate partnerships enabling the agency to monitor communications, including by harvesting internet data as it traveled along the fiber optic cables that undergird the internet.

Specifically, the NSA targeted bitcoin through MONKEYROCKET, a sub-program of OAKSTAR, which tapped network equipment to gather data from the Middle East, Europe, South America, and Asia, according to classified descriptions. As of spring 2013, MONKEYROCKET was “the sole source of SIGDEV for the BITCOIN Targets,” the March 29, 2013 NSA report stated, using the term for signals intelligence development, “SIGDEV,” to indicate the agency had no other way to surveil bitcoin users. The data obtained through MONKEYROCKET is described in the documents as “full take” surveillance, meaning the entirety of data passing through a network was examined and at least some entire data sessions were stored for later analysis.

At the same time, MONKEYROCKET is also described in the documents as a “non-Western Internet anonymization service” with a “significant user base” in Iran and China, with the program brought online in summer 2012. It is unclear what exactly this product was, but it would appear that it was promoted on the internet under false pretenses: The NSA notes that part of its “long-term strategy” for MONKEYROCKET was to “attract targets engaged in terrorism, [including] Al Qaida” toward using this “browsing product,” which “the NSA can then exploit.” The scope of the targeting would then expand beyond terrorists. Whatever this piece of software was, it functioned as a privacy bait and switch, tricking bitcoin users into using a tool they thought would provide anonymity online but that actually funneled data directly to the NSA.

The hypothesis that the NSA would “launch an entire operation overseas under false pretenses” just to track targets is “pernicious,” said Matthew Green, assistant professor at the Johns Hopkins University Information Security Institute. Such a practice could spread distrust of privacy software in general, particularly in areas like Iran where such tools are desperately needed by dissidents. This “feeds a narrative that the U.S. is untrustworthy,” said Green. “That worries me.”

The NSA declined to comment for this article. The Bitcoin Foundation, a nonprofit advocacy organization, could not immediately comment.

Although it offers many practical benefits and advantages over traditional currency, a crucial part of bitcoin’s promise is its decentralization. There is no Bank of Bitcoin, no single entity that keeps track of the currency or its spenders. Bitcoin is often misunderstood as being completely anonymous; in fact, each transaction is tied to publicly accessible ID codes included in the Blockchain, and bitcoin “exchange” companies typically require banking or credit card information to convert Bitcoin to dollars or euros. But bitcoin does offer far greater privacy than traditional payment methods, which require personal information up to and including a Social Security number, or must be linked to a payment method that does require such information.

Furthermore, it is possible to conduct private bitcoin transactions that do not require exchange brokers or personal information. As explained in the 2009 white paper launching bitcoin, “the public can see that someone is sending an amount to someone else, but without information linking the transaction to anyone.” For bitcoin adherents around the world, this ability to transact secretly is part of what makes the currency so special, and such a threat to the global financial status quo. But the relative privacy of bitcoin transactions has naturally frustrated governments around the world and law enforcement in particular — it’s hard to “follow the money” to criminals when the money is designed to be more difficult to follow. In a November 2013 letter to Congress, one Homeland Security official wrote that “with the advent of virtual currencies and the ease with which financial transactions can be exploited by criminal organizations, DHS has recognized the need for an aggressive posture toward this evolving trend.”

Green told The Intercept he believes the “browsing product” component of MONKEYROCKET sounds a lot like a virtual private network, or VPN. VPNs encrypt and reroute your internet traffic to mask what you’re doing on the internet. But there’s a catch: You have to trust the company that provides you a VPN, because they provide both software and an ongoing networking service that potentially allows them to see where you’re going online and even intercept some of your traffic. An unscrupulous VPN would have complete access to everything you do online.

Emin Gun Sirer, associate professor and co-director of the Initiative for Cryptocurrencies and Contracts at Cornell University, told The Intercept that financial privacy “is something that matters incredibly” to the bitcoin community, and expects that “people who are privacy conscious will switch to privacy-oriented coins” after learning of the NSA’s work here. Despite bitcoin’s reputation for privacy, Sirer added, “when the adversary model involves the NSA, the pseudonymity disappears. … You should really lower your expectations of privacy on this network.”

Green, who co-founded and currently advises a privacy-focused bitcoin competitor named Zcash, echoed those sentiments, saying that the NSA’s techniques make privacy features in any digital currencies like Ethereum or Ripple “totally worthless” for those targeted.

The NSA’s interest in cryptocurrency is “bad news for privacy, because it means that in addition to the really hard problem of making the actual transactions private … you also have to make sure all the network connections [are secure],” Green added. Green said he is “pretty skeptical” that using Tor, the popular anonymizing browser, could thwart the NSA in the long term. In other words, even if you trust bitcoin’s underlying tech (or that of another coin), you’ll still need to be able to trust your connection to the internet — and if you’re being targeted by the NSA, that’s going to be a problem.

NSA documents note that although MONKEYROCKET works by tapping an unspecified “foreign” fiber cable site, and that data is then forwarded to the agency’s European Technical Center in Wiesbaden, Germany, meetings with the corporate partner that made MONKEYROCKET possible sometimes took place in Virginia. Northern Virginia has for decades been a boomtown for both the expansive national security state and American internet behemoths — telecoms, internet companies, and spy agencies call the area’s suburbs and office parks home.

Bitcoin may have been the NSA’s top cryptocurrency target, but it wasn’t the only one. The March 15, 2013 NSA report detailed progress on MONKEYROCKET’s bitcoin surveillance and noted that American spies were also working to crack Liberty Reserve, a far seedier predecessor. Unlike bitcoin, for which facilitating drug deals and money laundering was incidental to bigger goals, Liberty Reserve was more or less designed with criminality in mind. Despite being headquartered in Costa Rica, the site was charged with running a $6 billion “laundering scheme” and triple-teamed by the U.S. Department of Justice, Homeland Security, and the IRS, resulting in a 20-year conviction for its Ukrainian founder. As of March 2013 — just two months before the Liberty Reserve takedown and indictment — the NSA considered the currency exchange its No. 2 target, second only to bitcoin. The indictment and prosecution of Liberty Reserve and its staff made no mention of help from the NSA.

Just five months after Liberty Reserve was shuttered, the feds turned their attention to Ross Ulbricht, who would go on to be convicted as the mastermind behind notorious darkweb narcotics market Silk Road, where transactions were conducted in bitcoin, with a cut going to the site’s owner. Ulbricht reportedly held bitcoins worth $28.5 million at the time of his arrest. Part of his unsuccessful defense was the insistence that the FBI’s story of how it found him did not add up, and that the government may have discovered and penetrated the Silk Road’s servers with the help of the NSA — possibly illegally. The prosecution dismissed this theory in no uncertain terms:

“Having failed in his prior motion to dismiss all of the Government’s charges, Ulbricht now moves this Court to suppress virtually all of the Government’s evidence, on the ground that it was supposedly obtained in violation of the Fourth Amendment. Ulbricht offers no evidence of any governmental misconduct to support this sweeping claim. Instead, Ulbricht conjures up a bogeyman – the National Security Agency (“NSA”) – which Ulbricht suspects, without any proof whatsoever, was responsible for locating the Silk Road server, in a manner that he simply assumes somehow violated the Fourth Amendment.”

Though the documents leaked by Snowden do not address whether the NSA aided the FBI’s Silk Road investigation, they show the agency working to unmask bitcoin users about six months before Ulbricht was arrested, and that it had worked to monitor Liberty Reserve around the same time. The source of the bitcoin and Liberty Reserve monitoring, MONKEYROCKET, is governed by an overseas surveillance authority known as Executive Order 12333, the language of which is believed to give U.S. law enforcement agencies wide latitude to use the intelligence when investigating U.S. citizens.

Civil libertarians and security researchers have long been concerned that otherwise inadmissible intelligence from the agency is used to build cases against Americans through a process known as “parallel construction”: building a criminal case using admissible evidence obtained by first consulting other evidence, which is kept secret, out of courtrooms and the public eye. An earlier investigation by The Intercept, drawing on court records and documents from Snowden, found evidence that the NSA’s most controversial forms of surveillance, which involve warrantless bulk monitoring of emails and fiber optic cables, may have been used in court via parallel construction.

Patrick Toomey, an attorney with the ACLU’s National Security Project, said the NSA bitcoin documents, although circumstantial, underscore a serious and ongoing question in American law enforcement:

“If the government’s criminal investigations secretly relied on NSA spying, that would be a serious concern. Individuals facing criminal prosecution have a right to know how the government came by its evidence, so that they can challenge whether the government’s methods were lawful. That is a basic principle of due process. The government should not be hiding the true sources for its evidence in court by inventing a different trail.”

Although an NSA document about MONKEYROCKET stated the program’s “initial” concern was counterterrorism, it also said that “other targeted users will include those sought by NSA offices such as Int’l Crime & Narcotics, Follow-The-Money and Iran.” A March 8, 2013 NSA memo said agency staff were “hoping to use [MONKEYROCKET] for their mission of looking at organized crime and cyber targets that utilize online e-currency services to move and launder money.” There’s no elaboration on who is considered a “cyber target.”
https://theintercept.com/2018/03/20/...uments-reveal/





It’s Not Just Cambridge Analytica, Facebook, & Trump—The Whole Web Is Stalking You

The internet was never designed for privacy, and between consumer profiling and government surveillance, there’s been little incentive to make it any more private.
David Auerbach

After the news broke that voter-profiling company Cambridge Analytica harvested the personal data of 50 million Facebook users to help market Donald Trump to them, Facebook has insisted it did nothing wrong and has pushed back on news reports claiming its data was breached.

Facebook is right: There was no breach. That’s the problem.

Cambridge Analytica is no outlier. The horror is not that Cambridge Analytica demographically and psychologically profiled 50 million Facebook users, but that everyone is doing it. Cambridge Analytica did not exploit a loophole. Rather, they used Facebook’s big data the way it was intended. They just marketed a candidate instead of a product. (These days, there’s not that much of a difference.)

That’s the problem with big data: It doesn’t come with signs, let alone rules, declaring CAN ONLY BE USED FOR GOOD. Once the data is out there, people will find a way to use it for whatever they want: marketing, surveillance, or propaganda.

Or, as the managing director of Cambridge Analytica Political Global put it: “We just put information into the bloodstream of the internet, and then, and then watch it grow, give it a little push every now and again. Like a remote control. It has to happen without anyone thinking, ‘that’s propaganda’, because the moment you think ‘that’s propaganda’, the next question is, ‘who’s put that out?’”

Facebook admits that the data collection itself, obtained in part through a personality profile app called “thisisyourdigitallife” created by Cambridge psychology professor Aleksandr Kogan, was not a violation of its terms. Used to harvest names, locations, friends lists, Likes, and more, the data was well-suited for helping to identify and motivate potential voters. Last Friday, Facebook wrote:

“Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked, as well as more limited information about friends who had their privacy settings set to allow it.

Although Kogan gained access to this information in a legitimate way and through the proper channels that governed all developers on Facebook at that time, he did not subsequently abide by our rules. By passing information on to a third party, including SCL/Cambridge Analytica and Christopher Wylie of Eunoia Technologies, he violated our platform policies.

— Facebook”

There is an absurdity here: Facebook admits it allows third-party apps to collect intimate data on its users. The only restriction is that the app creators can’t then reshare the data. In other words, the restrictions are on things that Facebook cannot possibly police proactively.

This, too, has long been Facebook’s strategy. In 2007, Facebook rolled out Facebook Beacon, a program that put invisible “web bugs” on pages across the internet. Beacon tracked Facebook users across the internet and posted their activity to their Facebook walls. Buy some baby clothes on Amazon, and it would announce the fact on Facebook, without your permission. There was an outcry, and eventually Facebook shut Beacon down.

But it didn’t give up. By splashing its Like button across the internet, by placing third-party cookies and Facebook logins on other sites, and by acquiring user data directly from third parties, Facebook continued devouring personal information. That personal information is what made Facebook a treasure trove for advertisers and marketers—and, as we now know, for Cambridge Analytica, the political microtargeting firm that helped get Donald Trump elected. An enormous industry of hundreds of firms sprung up around demographic profiling, marketing, and real-time bidding in order to sell online advertising to the right user at the right time. Facebook is the highest-profile broker for this kind of advertising, but far from the only one.

That’s why questions of illegality and terms-of-service violations are misleading. Even in the absence of violations, this sort of data collection is ubiquitous and omnivorous. Facebook would likely prefer the discussion to focus on questions of law, since United States data-sharing laws are quite lax, relying on “self-policing” by industry groups like the Network Advertising Initiative as a way to fend off actual legislation or regulation. European strategies like the “right to be forgotten” focus only on the publication of personal data rather than its initial collection—the very essence of closing the stable door after the horse has bolted.

The problem, as law professor Frank Pasquale puts it in The Black Box Society: The Secret Algorithms That Control Money and Information, is “runaway data.” We are accustomed to thinking that once we share data with a particular entity—be it Google, Facebook, Amazon, or any one of thousands of shadowy data marketing firms—our data is somehow siloed there. Nothing could be further from the truth. In fact, firms are constantly sharing, selling, and coalescing data piecemeal in order to construct increasingly elaborate profiles of customers and citizens, and this data is frequently available in one form or another to anyone with the money to buy it. Entire companies like Interclick have been bought specifically for the hundreds of millions of demographic profiles they had accumulated.

If Cambridge Analytica hadn’t been able to get what it needed through Facebook, it could have gone to any number of other data brokers to get it, and then cross-referenced it to target Facebook users. If you’re a likely Trump voter, Facebook is not the only company with the evidence to prove it. Your credit card receipts, website habits, and demographic profile are frequently just as good, and there is no shortage of companies offering your data, without your permission.

Take Acxiom, a company which offers “Identity Resolution & People-Based Marketing.” In a series of articles in The New York Times, Natasha Singer explored how this veteran marketing technology company (founded in 1969) has profiled 500 million users, 10 times the 50 million that Facebook offered to Cambridge Analytica, and sells these “data products” in order to help marketers target customers based on interest, race, gender, political alignment, and more. WPP and GroupM’s “digital media platform” Xaxis has also claimed 500 million consumer profiles. Other marketing companies, like Qualia, track users across platforms and devices as they browse the web. There’s no sign-up or opt-in involved. These companies simply cyberstalk users en masse.

Even if Facebook were to seal off its data from the rest of the world, Cambridge Analytica could go to Acxiom, or any other company like it, to find the right voters and then locate them by name alone on Facebook (or elsewhere). How many of these companies would be able to tell the difference between an ordinary client using its data versus Cambridge Analytica, or even Russia? Once you’ve got a user’s data without their permission, most everything is on the table.

The only surprise is in how long it took for this data hydra to create havoc. Given the close margin of the election, it’s quite possible that Cambridge Analytica’s work (like so many other small factors) was enough to shift the election in Trump’s favor. The keys have been hanging from the door lock for over a decade, and few cared until now.

Facebook’s decision to suspend Cambridge Analytica now is little comfort. Notably, Facebook said nothing about preventing new companies from doing the same sorts of data harvesting and marketing. And even if Facebook did decide to sacrifice the revenue and somehow cut off all such activity, a peek inside Amazon, Google, Oracle, or Acxiom will reveal petabytes of similar personal data, ready to be shared, studied, and sold.

The internet was never designed for privacy, and between consumer profiling and government surveillance, there’s been little incentive to make it any more private. Cambridge Analytica’s work is not an aberration; it is an inevitability.

In 2013, I wrote of the growing threat not of Big Brother, but Big Salesman: For internet marketing companies, you are what you click. Big Salesman appears more innocuous when he’s marketing shoes and cars. Now that he’s marketing Donald Trump, we are beginning to wake up to his danger. With all our data out there, being gathered and sold by hundreds of companies, there will always be a Cambridge Analytica, a Steve Bannon, or a Vladimir Putin ready to make use of it.
https://www.thedailybeast.com/its-no...s-stalking-you





Hollywood's Behind-The-Scenes Support For SESTA Is All About Filtering The Internet
Mike Masnick

Over at the EFF blog, Joe Mullin has an excellent discussion of why Hollywood is such a vocal supporter of SESTA, despite the bill having nothing to do with Hollywood. It's because the bill actually accomplishes a goal Hollywood has dreamed about for years: mandatory filtering of all content on the internet.

For legacy software and entertainment companies, breaking down the safe harbors is another road to a controlled, filtered Internet—one that looks a lot like cable television. Without safe harbors, the Internet will be a poorer place—less free for new ideas and new business models. That suits some of the gatekeepers of the pre-Internet era just fine.

The not-so-secret goal of SESTA and FOSTA is made even more clear in a letter from Oracle. “Any start-up has access to low cost and virtually unlimited computing power and to advanced analytics, artificial intelligence and filtering software,” wrote Oracle Senior VP Kenneth Glueck. In his view, Internet companies shouldn’t “blindly run platforms with no control of the content.”

That comment helps explain why we’re seeing support for FOSTA and SESTA from odd corners of the economy: some companies will prosper if online speech is subject to tight control. An Internet that’s policed by “copyright bots” is what major film studios and record labels have advocated for more than a decade now. Algorithms and artificial intelligence have made major advances in recent years, and some content companies have used those advances as part of a push for mandatory, proactive filters. That’s what they mean by phrases like “notice-and-stay-down,” and that’s what messages like the Oracle letter are really all about.


There's a lot more in Mullin's post, but it actually goes much beyond that. Under every rock you lift in looking at where SESTA's support has come from, you magically find Hollywood people scurrying quietly around. We've already noted that much of the initial support for SESTA came from a group whose then board chair was a top lobbyist for News Corp. And, as we reported last month, after a whole bunch of people we spoke to suggested that much of the support for SESTA was being driven by former top News Corp. lobbyist Rick Lane, we noticed that a group of people who went around Capitol Hill telling Congress to support SESTA publicly thanked their "partner" Rick Lane for showing them around.

In other words, it's not just Hollywood seeing a bill that gets it what it wants and suddenly speaking up in favor of it... this is Hollywood helping to make this bill happen in the first place, as part of its ongoing effort to remake the internet away from being a communications medium for everyone and into a broadcast, gatekeeper-dominated medium where it gets to act as the gatekeeper.

And if you think that Hollywood big shots are above pumping up a bogus moral panic to get their way, you haven't been paying attention. Remember, for years Hollywood has also pushed the idea that the internet requires filters and censorship for basically any possible reason. Back during the SOPA days, it focused on "counterfeit pharmaceuticals." Again, not an issue that Hollywood is actually concerned with, but if it helped force filters and stopped user-generated content online, Hollywood was quick to embrace it.

Remember, after all, that the MPAA set up Project Goliath to attack Google, and a big part of that was paying its own lawyers at the law firm of Jenner & Block to write demand letters for state Attorneys General, like Mississippi Attorney General Jim Hood, who sent a bogus subpoena and demand letter to Google (written by the MPAA's lawyers and on the MPAA's bill). And what did Hood complain about to Google in that letter written by the MPAA's lawyers? You guessed it:

Hood accused Google of being “unwilling to take basic actions to make the Internet safe from unlawful and predatory conduct, and it has refused to modify its own behavior that facilitates and profits from unlawful conduct.” His letter cites not just piracy of movies, TV shows and music but the sale of counterfeit pharmaceuticals and sex trafficking.

The MPAA has cynically been using the fact that there are fake drugs and sex trafficking on the internet for nearly a decade to push for undermining the core aspects of the internet. They don't give a shit that none of this will stop sex trafficking (or that it will actually make life more difficult for victims of sex trafficking). The goal, from the beginning, was to hamstring the internet and return Hollywood to what it feels is its rightful place as the gatekeeper for all culture.

Indeed, our post earlier about Senator Blumenthal's bizarre email against a basic SESTA amendment from Senator Wyden to fix the "moderator's dilemma" aspect was quite telling. He falsely claimed that adding in that amendment -- which merely states that the act of doing some moderation or filtering doesn't attach liability to the site for content it fails to filter or moderate (the crux of CDA 230's "Good Samaritan" language) -- would create problems for Hollywood. Indeed, a key part of Blumenthal's letter is that this amendment "has the potential to disrupt other areas of the law, such as copyright protections."

But that makes zero sense at all. CDA 230 does not apply to copyright. It doesn't apply to any intellectual property law, as intellectual property is explicitly exempted from all of CDA 230 and has been from the beginning. Nothing in the Wyden amendment changes that. And... it does seem quite odd for Blumenthal to suddenly be bringing up copyright in a discussion about CDA 230, unless it's really been Hollywood pushing these bills all along, and thus in Blumenthal's mind, SESTA and copyright are closely associated. As Prof. Eric Goldman notes, talking nonsensically about copyright in this context appears to be quite a tell by Senator Blumenthal.
https://www.techdirt.com/articles/20...internet.shtml





Senate Passes Controversial Online Sex Trafficking Bill
Harper Neidig

The Senate on Wednesday passed a controversial online sex trafficking bill, sending it to President Trump’s desk and capping off a months-long legislative fight over concerns from the tech industry.

The bill was approved overwhelmingly in a 97-2 vote. Sens. Ron Wyden (D-Ore.) and Rand Paul (R-Ky.) were the only votes against the bill.

The legislation, called the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), but also referred to as SESTA after the original Senate bill, would cut into the broad protections websites have from legal liability for content posted by their users.

"We now have the ability to go after these websites who are exploiting women and children online," Sen. Rob Portman (R-Ohio), one of the original authors of the bill, said at a press conference after the vote.

The House overwhelmingly passed the bill last month, and President Trump is expected to sign it.

The legal liability protections are codified in Section 230 of the Communications Decency Act from 1996, a law that many internet companies see as vital to protecting their platforms. SESTA would amend that law to create an exception for sex trafficking, making it easier to target websites with legal action for enabling such crimes.

Wyden, the most outspoken critic of SESTA and one of the authors of the Communications Decency Act, said that making exceptions to Section 230 will lead to small internet companies having to face an onslaught of frivolous lawsuits.

"In the absence of Section 230, the internet as we know it would shrivel," Wyden said on the Senate floor ahead of the vote Wednesday. "Only the platforms run by those with deep pockets, and an even deeper bench of lawyers, would be able to make it."

The Oregon Democrat also noted opposition from groups as varied as the Cato Institute, the Human Rights Campaign and the ACLU.

But some lawmakers and anti-sex trafficking advocates think the law has gotten in the way of efforts to go after online trafficking suspects like Backpage.com.

Sen. Richard Blumenthal (D-Conn.), a co-author of SESTA with Portman and a former prosecutor, called Section 230 "outdated and obsolete" during Wednesday's press conference.

Most major internet giants have gone quiet in the fight over the controversial bill. Facebook endorsed SESTA as the company faces scrutiny on other fronts, in particular alleged Russian efforts to use its platform to conduct a disinformation campaign targeting U.S. voters.

But the bill was also championed by technology companies, such as IBM, Oracle and Hewlett Packard, that have been at odds with Silicon Valley. They argued that online companies enjoy overly broad legal protections while being subject to very little regulation, leading to pervasive problems like online sex trafficking.

The passage of the bill is widely seen as a major legislative loss for Silicon Valley, and perhaps the first in an era where the industry is being viewed much more critically by lawmakers.

During Wednesday's press conference, Sen. John Thune (R-S.D.), the chairman of the Senate Commerce Committee, said he believes the bill sends a message to tech giants.

"I think that in the future tech companies have to understand that it’s not the Wild West and they have to exercise responsibility," Thune said.
http://thehill.com/policy/technology...afficking-bill

















Until next week,

- js.



















Current Week In Review


Recent WiRs -

March 17th, March 10th, March 3rd, February 24th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black