P2P-Zone  

Old 13-03-19, 08:37 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Default Peer-To-Peer News - The Week In Review - March 2nd, ’19

Since 2002

March 2nd, 2019




Europe Against the Net
Jeff Jarvis

I’ve spent a worrisome weekend reading three documents from Europe about regulating the net:

• The revived, revised, and worsened Articles 11 and 13 of the European Copyright Directive and Julia Reda’s devastating review of the impact.
• The Cairncross Review of the state of journalism and the net in the UK.
• The House of Commons Digital, Culture, Media, and Sport Committee Disinformation and ‘Fake News’ report.

In all this, I see danger for the net and its freedoms posed by corporate protectionism and a rising moral panic about technology. One at a time:
Articles 11 & 13: Protectionism gone mad

Article 11 is the so-called link tax, the bastard son of the German Leistungsschutzrecht, or ancillary copyright, that publishers tried to use to force Google to pay for snippets. They failed. They’re trying again. Reda, a member of the European Parliament, details the dangers:

• Reproducing more than “single words or very short extracts” of news stories will require a licence. That will likely cover many of the snippets commonly shown alongside links today in order to give you an idea of what they lead to….

• No exceptions are made even for services run by individuals, small companies or non-profits, which probably includes any monetised blogs or websites.

European journalists protest that this will serve media corporations, not journalists. Absolutely.

But the dangers to free speech, to the public conversation, and to facts and evidence are greater. Journalism and the academe have long depended on the ability to quote — at length — source material to then challenge or expand upon or explain it. This legislation begins to make versions of that act illegal. You’d have to pay a news property for a license to quote it. Never mind that 99.9 percent of journalism quotes others. The results: Links become blind alleys sending you to god-knows-what dark holes exploited by spammers and conspiracy theorists. News sites lose audience and impact (witness how a link tax forced Google News out of Spain). Even bloggers like me could be restricted from quoting others as I did above, killing the web’s magnificent ability to foster conversation with substance.

Why do this? Because publishers think they can use their clout to get legislators to bully the platforms into paying them for their “content,” refusing to come to grips with the fact that the real value now is in the audience the platforms send to the publishers. It is corporate protectionism born of political capital. It is corrupt and corrupting of the net. It is a crime.

Article 13 is roughly Europe’s version of the SOPA/PIPA fight in the U.S.: protectionism on behalf of entertainment media companies. It requires sites where users might post material — isn’t that every interactive site on the net? — to “preemptively buy licenses for anything that users may possibly upload,” in Reda’s explanation. They will also have to deploy upload filters — which are expensive to operate and notoriously full of false positives — to detect anything that is not licensed. The net result: Sites will not allow anyone to post any media that could possibly come from anywhere.

So we won’t be able to quote or adapt. Death to the meme. Yes, there are exceptions for criticism, but as Lawrence Lessig famously said, “fair use is the right to hire a lawyer.” This legislation attempts to kill what the net finally brought to society: diverse and open conversation.

Cairncross Review: Protecting journalism as it was

The UK dispatched Dame Frances Cairncross, a former journalist and economist, to review the imperiled state of news and she returned with a long and well-intentioned but out-of-date document. A number of observations:

• She fails — along with many others — to define quality journalism. “Ultimately, ‘high quality journalism’ is a subjective concept that depends neither solely on the audience nor the news provider. It must be truthful and comprehensive and should ideally — but not necessarily — be edited. You know it when you see it….” (Just like porn, but porn’s easier.) Thus she cannot define the very thing her report strives to defend. A related frustration: She doesn’t very much criticize the state of journalism or the reasons why trust in it is foundering, only noting its fall.
• I worry greatly about her conclusion that “intervention may be needed to determine what, and how, news is presented online.” So you can’t define quality but you’re going to regulate how platforms present it? Oh, the platforms are trying to understand quality in news. (Disclosure: I’m working on just such a project, funded by but independent of Facebook.) But the solutions are not obvious. Cairncross wants the platforms to have an obligation “to nudge people towards reading news of high quality” and even to impose quotas for quality news on the platforms. Doesn’t that make the platforms the editors? Is that what editors really want? Elsewhere in the report, she argues that “this task is too important to leave entirely to the judgment of commercial entities.” But BBC aside, that is where the task of news lies today: in commercial entities. Bottom line: I worry about *any* government intervention in speech and especially in journalism.
• She rightly focuses less on national publications and more on the loss of what she calls “public interest news,” which really means local reporting on government. Agreed. She also glances by the paradox that public-interest news “is often of limited interest to the public.” Well, then, I wish she had looked at the problem and opportunity from the perspective of what the net makes possible. Why not start with new standards to require radical transparency of government, making every piece of legislation, every report, every budget public? There have been pioneering projects in the UK to do just that. That would make the task of any journalist more efficient and it would enable collaborative effort by the community: citizens, librarians, teachers, classes…. She wants a government fund to pay for innovations in this arena. Fine, then be truly innovative. She further calls for the creation of an Institute for Public Interest News. Do we need another such organization? Journalism has so many.
• She explores a VAT tax break for subscriptions to online publications. Sounds OK, but I worry that this would motivate more publications to put up paywalls, which will further redline quality journalism for those who can afford it.
• She often talked about “the unbalanced relationship between publishers and online platforms.” This assumes that there is some natural balance, some stasis that can be reestablished, as if history should be our only guide. No, life changed with the internet.
• She recommends that the platforms be required to set out codes of conduct that would be overseen by a regulator “with powers to insist on compliance.” She wants the platforms to commit “not to index more than a certain amount of a publisher’s content without an explicit agreement.” First, robots.txt and such already put that in publishers’ control. Second, Cairncross acknowledges that links from platforms are beneficial. She worries about — but does not define — too much linking. I see a slippery slope to Article 11 (above) and, really, so does Cairncross: “There are grounds for worrying that the implementation of Article 11 in the EU may backfire and restrict access to news.” In her code of conduct, platforms should not impose their ad platforms on publishers — but if publishers want revenue from the platforms they pretty much have to. She wants platforms to give early warnings of changes in algorithms but that will be spammed. She wants transparency of advertising terms (what other industries negotiate in public?).
• Cairncross complains that “most newspapers have lacked the skills and resources to make good use of data on their readers” and she wants the platforms to share user data with publishers. I agree heartily. This is why I worry that another European regulatory regime — GDPR — makes that nigh unto impossible.
• She wants a study of the competitive landscape around advertising. Yes, fine. Note, though, that advertising is becoming less of a force in publishers’ business plans by the day.
• Good news: She rejects direct state support for journalism because “the effect may be to undermine trust in the press still further, at a time when it needs rebuilding.” She won’t endorse throttling the BBC’s digital efforts just because commercial publishers resent the competition. She sees danger in giving the publishing industry an antitrust exception to negotiate with the platforms (as is also being proposed in the U.S.) because that likely could lead to higher prices. And she thinks government should help publishers adapt by “encouraging the development and distribution of new technologies and business models.” OK, but what publishers and which technologies and models? If we knew which ones would work, we’d already be using them.
• Finally, I note a subtle paternalism in the report. “The stories people want to read may not always be the ones they ought to read in order to ensure that a democracy can hold its public servants properly to account.” Or the news people need in their lives might not be the news that news organizations are reporting. Also: Poor people — who would be cut off by paywalls — “are not just more likely to have lower levels of literacy than the better-off; their digital skills also tend to be lower.” Class distinctions never end.
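On the indexing point above: publishers can already limit what crawlers take via the Robots Exclusion Protocol, no regulator required. A minimal illustrative robots.txt (the rules here are examples, not any actual publisher's policy) might exclude a news-specific crawler entirely while leaving ordinary search indexing alone:

```text
# Keep Google News from indexing anything on the site
User-agent: Googlebot-News
Disallow: /

# Allow ordinary web search to index everything
User-agent: Googlebot
Allow: /
```

Compliant crawlers fetch this file from the site root and honor it before indexing a single page, which is why "don't index more than a certain amount" is already in publishers' hands.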

It’s not a bad report. It is cautious. But it’s also not visionary, not daring to imagine a new journalism for a new society. That is what is really needed.

The Commons report: Finding fault

The Digital, Culture, Media and Sport Committee is famously the body Mark Zuckerberg refused to testify before. And, boy, are they pissed. Most of this report is an indictment of Facebook on many sins, most notably Cambridge Analytica. For the purposes of this post, about possible regulation, I won’t indulge in further prosecuting or defending the case against Facebook (see my broader critique of the company’s culture here). What interests me in this case is the set of committee recommendations that could have an impact on the net, including our net outside of the UK.

The committee frets — properly — over the malicious disinformation that fueled Brexit. And where did much of the disinformation that led to that disaster come from? From politicians: Nigel Farage, Boris Johnson, et al. This committee, headed by a Conservative, makes no mention of its colleagues. As with the Cairncross report, why not start at home and ask what government needs to do to improve the state of its contribution to the information ecosystem? A few more notes:

• Just as Cairncross has trouble defining quality journalism, the Commons committee has trouble defining the harm it sees everywhere on the internet. It puts off that critical and specific task to an upcoming Online Harms white paper from the government. (Will there also be an Online Benefits white paper?) The committee calls for holding social media companies — “which is not necessarily either a ‘platform’ or a ‘publisher’,” the report cryptically says — liable for “content identified as harmful after it has been posted by users.” The committee then goes much farther, threatening not just tech companies but technologists. My emphasis: “If tech companies (including technological engineers involved in creating the software for the companies) are found to have failed to meet their obligations under such a Code [of Ethics], and not acted against the distribution of harmful and illegal content, the independent regulator should have the ability to launch legal proceedings against them, with the prospect of large fines being administered….” Them’s fightin’ words, demonizing not just the technology and the technology company but the technologist.
• Again and again in reading the committee’s report, I wrote in the margin “China” or “Iran,” wondering how the precedents and tools wished for here could be used by authoritarian regimes to control speech on the net. For example: “There is now an urgent need to establish independent regulation. We believe that a compulsory Code of Ethics should be established, overseen by an independent regulator, setting out what constitutes harmful content.” How — except in the details — does that differ from China deciding what is harmful to the minds of the masses? Do we really believe that a piece of “harmful content” can change the behavior of a citizen for the worse without many other underlying causes? Who knows best for those citizens? The state? Editors? Technologists? Or citizens themselves? The committee notes — with apparent approval — a new French law that “allows judges to order the immediate removal of online articles that they decide constitute disinformation.” All this sounds authoritarian to me and antithetical to the respect and freedom the net gives people.
• The committee expands the definition of personal data — which, under GDPR, is already ludicrously broad, including, for example, your IP address — to cover “inferred data.” I hate to think what that could do to the discipline of machine learning and artificial intelligence — to the patterns discerned and knowledge produced by machines.
• The committee wants to impose a 2% “digital services tax on UK revenues of big technology companies.” On what basis, besides vendetta against big (American) companies?
• The Information Commissioner told the committee that “Facebook needs to significantly change its business model and its practices to maintain trust.” How often does government get into the nitty-gritty of companies’ business models? And let’s be clear: The problem with Facebook’s business model — click-based, volume-based, attention-based advertising — is precisely what drove media into the abyss of mistrust. So should the government tell media to change its business model? They wouldn’t dare.
• The report worries about the “pernicious nature of micro-targeted political adverts” and quotes the Coalition for Reform in Political Advertising recommending that “all factual claims used in political ads be pre-cleared; an existing or new body should have the power to regulate political advertising content.” So government in power would clear the content of ads of challengers? What could possibly go wrong? And micro-targeting of one sort or another is also what enables small communities with specific interests to find each other and organize. Give up your presumptions of the mass.
• The report argues “there needs to be absolute transparency of online political campaigning.” I agree. Facebook, under pressure, created a searchable database of political ads. I think Facebook should do more and make targeting data public. And I think every — every — other sector of media should match Facebook. Having said that, I still think we need to be careful about setting precedents that might not work so well in countries like, say, Hungary or Turkey, where complete transparency in political advertising and activism could lead to danger for opponents of authoritarian regimes.
• The committee, like Cairncross, expresses affection for eliminating VAT taxes on digital subscriptions. “This would eliminate the false incentive for news companies against developing more paid-for digital services.” Who said what is the true or false business model? I repeat my concern that government meddling in subscription models could have a deleterious impact on news for the public at large, especially the poor. It would also put more news behind paywalls, with less audience, resulting in less impact from it. (A hidden agenda, perhaps?)
• “The Government should put pressure on social media companies to publicize any instances of disinformation,” the committee urges. OK. But define “disinformation.” You’ll find it just as challenging as defining “quality news” and “harm.”
• The committee, like Cairncross, salutes the flag of media literacy. I remain dubious.
• And the committee, like Cairncross, sometimes reveals its condescension. “Some believe that friction should be reintroduced into the online experience, by both tech companies and by individual users themselves, in order to recognize the need to pause and think before generating or consuming content.” They go so far as to propose that this friction could include “the ability to share a post or a comment, only if the sharer writes about the post; the option to share a post only when it has been read in its entirety.” Oh, for God’s sake: How about politicians pausing and thinking before they speak, creating the hell that is Brexit or Trump?

In the end, I fear all this is hubris: to think that we know what the internet is and what its impact will be before we dare to define and limit the opportunities it presents. I fear the paternalistic unto authoritarian worldview that those with power know better than those without. I fear the unintended — and intended — consequences of all this regulation and protectionism. I trust the public to figure it out eventually. We figured out printing and steam power and the telegraph and radio and television. We will figure out the internet if given half a chance.

And I didn’t even begin to examine what they’re up to in Australia…
https://buzzmachine.com/2019/02/24/e...ainst-the-net/





Netflix is Killing Content Piracy
Tuesday, 26 February 2019, 4:14 pm
Press Release: The Mail Room

Legitimate streaming content providers are achieving what Hollywood never could: they are stamping out piracy by making the shows people want available at reasonable cost and with maximum convenience.

That’s borne out in independent research commissioned by Vocus Group NZ which confirms piracy is dying a natural death as more New Zealanders choose to access their content legitimately. “In short, the reason people are moving away from piracy is that it’s simply more hassle than it’s worth,” says Taryn Hamilton, Consumer General Manager at Vocus Group New Zealand which operates brands including Slingshot, Orcon and Flip.

“The research confirms something many internet pundits have long instinctively believed to be true: piracy isn’t driven by law-breakers, it’s driven by people who can’t easily or affordably get the content they want.”

Conducted by Perceptive in December and polling more than a thousand New Zealanders from all walks of life, the study confirms that when content is made available at a fair price, people pay for it instead of pirating. Moreover, the study doesn’t just show that most of us can’t be bothered with piracy; it also confirms that fewer people who once pirated regularly are doing it now.

“Around half of all respondents have watched something at some point in their lives that may have been pirated – however, the majority rarely or never do that nowadays,” says Hamilton, noting only around 10 percent of respondents admit to viewing pirate content in the present day.

This emerges in the preferred way that people like to take in their shows or sports. While free-to-air TV rates highly, at 22 percent, paid streaming is the standout figure, at 29 percent. Free streaming services add a further 6 percent, making streaming by far the most popular way to watch for New Zealanders. By contrast, paid satellite TV is the choice for a little over 14 percent of respondents – and only 3 percent prefer to watch pirated content.

“People are watching less pirated material now than they used to, and they assume they'll continue to watch less in the future. This is largely because of the cheap, easy access to free and paid material on the likes of Netflix and YouTube,” Hamilton adds. “Compare that with pirating a show: piracy requires some technical ability and it is risky.”

Hamilton says Kiwi consumers are a savvy lot, too. While the research shows that in general people don't have much appetite for pirating, there is much higher agreement that ‘It would be almost impossible to stop people doing this’. “The simple fact for those who know anything about the internet, is that censoring the internet doesn’t work. People know there are multiple sites where it is possible to download illegal material. They also know that blocking the most popular ones simply means you’ll get pirated material elsewhere.”

But the really interesting thing, says Hamilton, is a question around what would stop those who still occasionally view pirate content from doing so. “Overwhelmingly, New Zealanders said ‘cheaper streaming services’ and ‘more content available on existing streaming services’. These two options were by far ahead of other options, at 57 and 48 percent respectively. Punitive measures, such as prosecution for pirates and censorship of pirate sites, were only thought likely to be effective by 33 and 22 percent of people, respectively.”

Internet NZ’s Andrew Cushen says he’s not surprised by the findings: “Rights holders have done well by innovating and building great ways of sharing content at fair prices. Piracy isn’t the big challenge it once was because of this innovation, which consumers are using in droves.

“The upcoming copyright review is an opportunity to enable greater collaboration and creativity through harnessing the power of new tech,” Cushen says.

Historically, industry organisations like the Recording Industry Association and the Motion Picture Association of America have invested millions of dollars in pursuit of pirates, with no apparent abatement in illegal content distribution. By contrast, the simple process of introducing paid streaming services such as Netflix, YouTube, Google Play Movies, and any number of local and international contenders, has achieved far more. A Bloomberg article puts it succinctly, noting that ‘Subscription-based business models in content distribution is making piracy pointless’.

It’s the classic carrot and stick approach, which further highlights the difference between the efforts of the MPAA and other industry bodies, versus the ‘organic’ action of Netflix, Google Movies and other streaming content providers. “Piracy is finally dying. The reason for that requires an understanding of why people pirated in the first place. They didn’t do it because of inherent criminality, but rather because they couldn’t get the shows they wanted at a price they were prepared to pay,” Hamilton concludes.
http://www.scoop.co.nz/stories/BU190...ent-piracy.htm





Netflix May be Losing $192M Per Month from Piracy, Cord Cutting Study Claims
Sarah Perez

As many as 1 in 5 people today are mooching off of someone else’s account when streaming video from Netflix, Hulu or Amazon Video, according to a new study from CordCutting.com. Of these, Netflix tends to be pirated for the longest period — 26 months, compared with 16 months for Amazon Prime Video or 11 months for Hulu. That could be because Netflix freeloaders often mooch off their family instead of a friend — 48 percent use their parents’ login, while another 14 percent use their sister or brother’s credentials, the firm found.

At a base price of $7.99 per month (the study was performed before Netflix’s January 2019 price increase), freeloading users could save $207.74 over a 26-month period. At scale, these losses can add up, the study claims.

The report estimates Netflix could be losing $192 million in monthly revenue from piracy — more than either Amazon or Hulu, at $45 million per month and $40 million per month, respectively.

Millennials, not surprisingly, account for much of the freeloading. They’re the largest demographic pirating Netflix (18 percent) and Hulu’s service (20 percent). But oddly, it was Baby Boomers who were more likely to borrow someone else’s account to access Amazon Prime Video.

There’s an argument that those who pirate would never be paying customers, so these aren’t true losses. It’s the same sort of thing that was said about Napster mp3 downloads back in the day, or about those pirating movies through The Pirate Bay. But there is some portion of the freeloading population that claims they would pay, if they lost access.

According to the study, 59.3 percent said they would pay for Netflix (or around 14 million people), contributing at least $112 million in monthly revenue, if they lost access. And 37.8 percent, or 2 million, said they’d pay for Hulu; 27.6 percent, or 1 million people, said they’d pay for Prime Video.
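The study's headline numbers are simple multiplication, and they check out. A quick sketch of the arithmetic, using only figures quoted in this article (the study used Netflix's pre-January-2019 base price):

```python
# Back-of-envelope check of the CordCutting.com study's figures,
# all taken from the article above.
base_price = 7.99        # USD/month, Netflix base plan before the 2019 increase
mooch_months = 26        # average freeloading duration for Netflix

# Savings for a typical Netflix freeloader over the mooching period
savings = round(base_price * mooch_months, 2)
print(savings)           # 207.74, matching the quoted figure

# Monthly revenue if the ~14 million "would pay" freeloaders subscribed
would_pay = 14_000_000
monthly_revenue_millions = round(base_price * would_pay / 1e6, 1)
print(monthly_revenue_millions)   # 111.9, i.e. "at least $112 million"
```

The same multiplication with Hulu's and Prime Video's quoted prices and "would pay" counts yields their smaller monthly-loss estimates.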

Of course, there can be discrepancies between what consumers say they will do versus what they actually end up doing. So such claims that “I’d definitely pay,” have to be taken with the proverbial grain of salt.

It’s worth noting, too, this study calculated figures by looking at Netflix’s single-screen-at-a-time account — in theory, the one meant to be used by a single individual and not shared as a family plan, in order to keep the estimates conservative. The consumer survey defined mooching by asking users if they use a service they don’t pay for, then asked what they would or would not pay for themselves, if that access fell through.

Hulu, at least, has more recently tried to make its service more appealing to penny-pinchers. At its new price — $5.99 per month, rolled out this week — it’s making it harder to justify freeloading.

Netflix, on the other hand, seems to know its value, and raised prices this year so its base plan is a dollar more at $8.99 per month, and its most popular plan has climbed to $12.99 per month.

The full study offers other details on cord-cutting trends, including breakdowns by gender and details on who accounts are mooched from, among other things.
https://techcrunch.com/2019/02/27/ne...-study-claims/





FCC Says Gutting ISP Oversight Was Great For Broadband

Ajit Pai's FCC insists that ignoring consumers and gutting oversight of major ISPs dramatically boosted network investment. Reality suggests something else entirely.
Karl Bode

The FCC this week proclaimed that broadband connectivity saw unprecedented growth last year thanks to agency policies like killing net neutrality. The problem? That doesn’t appear to be true.

By law, the FCC is required to submit a periodic report on the state of U.S. broadband, noting whether or not affordable internet is being deployed on a “reasonable and timely basis.” While the FCC didn’t release its full data to the public, it did issue a press release citing some very specific statistics agency boss Ajit Pai claimed proved his agency was curing the “digital divide.”

Among the claims the FCC uses to support its position is that the availability of 100 Mbps broadband connections grew by nearly 20 percent in 2018, from 244.3 million to 290.9 million.

But the lion’s share of these improvements are courtesy of DOCSIS 3.1 cable upgrades, most of which began before Pai even took office and have nothing to do with FCC policy. Others are likely courtesy of build-out conditions affixed to AT&T’s merger with DirecTV, again the result of policies enacted before Pai was appointed head of the current FCC.

Meanwhile, last year’s FCC report (showcasing data up to late 2016) showed equal and in some instances faster growth in rural broadband deployment—despite Pai having not been appointed yet.

The broadband industry’s biggest issue remains a lack of competition. That lack of competition results in Americans paying some of the highest prices for broadband in the developed world, something the agency routinely fails to mention and does so again here.

With many of the nation’s phone companies refusing to upgrade or even repair their aging DSL lines, cable giants like Comcast are securing a greater monopoly over faster broadband across huge swaths of the country. That in turn is resulting in higher rates and little incentive to improve terrible customer service. The telecom lobby works tirelessly to keep this status quo intact.

Still, Pai was quick to take a victory lap in the agency release.

"For the past two years, closing the digital divide has been the FCC's top priority," Pai said yesterday. "We've been tackling this problem by removing barriers to infrastructure investment, promoting competition, and providing efficient, effective support for rural broadband expansion through our Connect America Fund. This report shows that our approach is working.”

One of those supposed “barriers to broadband investment” was the former FCC’s net neutrality rules, designed to keep natural monopolies like Comcast from behaving anti-competitively. Polls repeatedly indicate those rules had the overwhelming bipartisan support of the public.

The idea that net neutrality somehow stifled sector investment has been a common refrain for the Pai FCC. As has the claim that eliminating the rules boosted said investment. That same claim is also frequently mirrored by telecom lobbying organizations like US Telecom, which routinely insist the U.S. broadband market is fiercely competitive.

“Overall, capital expenditures by broadband providers increased in 2017, reversing declines that occurred in both 2015 and 2016,” the FCC claimed, again hinting that the repeal of net neutrality directly impacted CAPEX and broadband investment.

A problem with that claim: the FCC’s latest report only includes data up to June 2018, the same month net neutrality was formally repealed. As such the data couldn’t possibly support the idea that the elimination of net neutrality was responsible for this otherwise modest growth.

Another problem: that claim isn’t supported by ISP earnings reports or the public statements of numerous telecom CEOs, who say net neutrality didn’t meaningfully impact their investment decisions one way or another. Telecom experts tell Motherboard that’s largely because such decisions are driven by a universe of other factors, including the level of competition (or lack thereof) in many markets.

“The unsubstantiated allegation that Title II in particular had a negative effect on broadband investment was wrong when embraced by the Pai FCC in 2017 and it’s still wrong—investment decisions are based on factors like competition, the economy and changes in technology,” former FCC lawyer Gigi Sohn told Motherboard via email.

The FCC did not respond to a request for comment seeking clarification on its claims.

Consumer groups like Fight For the Future were unsurprisingly unimpressed by the FCC’s victory lap.

“From what we can see, this report looks like it was written by a telecom lobbyist and bears no resemblance to what Internet users are experiencing in their everyday lives,” said the group in a statement. “U.S. residents are already paying more money for less Internet than nearly anywhere in the world, so it’s awfully strange that the FCC’s media sheet said nothing about price and competition.”
https://motherboard.vice.com/en_us/a...-youre-welcome





Ubiquitilink Advance Means Every Phone is Now a Satellite Phone
Devin Coldewey

Last month I wrote about Ubiquitilink, which promised, through undisclosed means, it was on the verge of providing a sort of global satellite-based roaming service. But how, I asked? (Wait, they told me.) Turns out our phones are capable of a lot more than we think: they can reach satellites acting as cell towers in orbit just fine, and the company just proved it.

Utilizing a constellation of satellites in low Earth orbit, Ubiquitilink claimed during a briefing at Mobile World Congress in Barcelona that pretty much any phone from the last decade should be able to text and do other low-bandwidth tasks from anywhere, even in the middle of the ocean or deep in the Himalayas. Literally (though eventually) anywhere and any time.

Surely not, I hear you saying. My phone, that can barely get a signal on some blocks of my neighborhood, or in that one corner of the living room, can’t possibly send and receive data from space… can it?

“That’s the great thing — everybody’s instinct indicates that’s the case,” said Ubiquitilink founder Charles Miller. “But if you look at the fundamentals of the RF [radio frequency] link, it’s easier than you think.”

The issue, he explained, isn’t really that the phone lacks power. The limits of reception and wireless networks are defined much more by architecture and geology than plain physics. When an RF transmitter, even a small one, has a clear shot straight up, it can travel very far indeed.

Space towers

It’s not quite as easy as that, however; there are changes that need to be made, just not anything complex or expensive like special satellite antennas or base stations. If you know that modifying the phone is a non-starter, you have to work with the hardware you’ve got. But everything else can be shaped accordingly, Miller said — three things in particular.

• Lower the orbit. There are limits to what’s practical as far as the distance involved and the complications it brings. The orbit needs to be under 500 kilometers, or about 310 miles. That’s definitely low — geosynchronous orbit, at roughly 36,000 kilometers, is some 70 times higher — but it’s not crazy either. Some of SpaceX’s Starlink communications satellites are aiming for a similar orbit.
• Narrow the beam. The low orbit and other limitations mean that a given satellite can only cover a small area at a time. This isn’t just blasting out data like a GPS satellite, or communicating with a specialized ground system like a dish that can reorient itself. On the ground, you’ll only be able to reach a satellite while it sits within a roughly 45-degree-wide cone overhead.
• Lengthen the wavelength. Here simple physics come into play: generally, the shorter the wavelength, the less transparent the atmosphere is to it. So you want to use bands on the long (lower Hz) side of the radio spectrum to make sure you maximize propagation.

Having adjusted for these things, an ordinary phone can contact and trade information with a satellite with its standard wireless chip and power budget. But there’s one more obstacle, one Ubiquitilink spent a great deal of time figuring out.

Although a phone and satellite can reach one another reliably, a delay and Doppler shift in the signal due to the speeds and distances involved are inescapable. Turns out the software that runs towers and wireless chips isn’t suited for this; the timings built into the code assume the distance will be less than 30 km, since the curvature of the Earth generally prevents transmitting farther than that.
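The scale of that mismatch is easy to check with a back-of-the-envelope calculation. Here is a minimal Python sketch using a circular 500 km orbit; the 900 MHz GSM-band carrier frequency is my assumption for illustration, not a figure from the article:

```python
import math

C = 299_792_458.0          # speed of light, m/s
MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def orbital_velocity(alt_m):
    """Speed of a satellite in a circular orbit at the given altitude."""
    return math.sqrt(MU / (R_EARTH + alt_m))

def one_way_delay_ms(dist_m):
    """Signal propagation delay over a straight-line distance."""
    return dist_m / C * 1e3

def max_doppler_khz(alt_m, carrier_hz):
    """Worst-case Doppler shift, satellite moving directly along the line of sight."""
    return orbital_velocity(alt_m) / C * carrier_hz / 1e3

alt = 500_000.0            # the 500 km orbit from the article
carrier = 900e6            # assumed GSM-band carrier frequency

print(f"orbital speed: {orbital_velocity(alt) / 1e3:.2f} km/s")
print(f"one-way delay, satellite overhead at 500 km: {one_way_delay_ms(alt):.2f} ms")
print(f"one-way delay at the ~30 km cell design limit: {one_way_delay_ms(30_000):.2f} ms")
print(f"max Doppler shift at 900 MHz: {max_doppler_khz(alt, carrier):.1f} kHz")
```

The satellite moves at roughly 7.6 km/s, so the worst-case carrier shift is on the order of ±20 kHz, and even with the satellite directly overhead the delay is more than fifteen times what a 30 km cell design assumes — which is why the stock timing loops break down.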

So Ubiquitilink modified the standard wireless stacks to account for this, something Miller said no one else had done.

“After my guys came back and told me they’d done this, I said, ‘well let’s go validate it,’ ” he told me. “We went to NASA and JPL and asked what they thought. Everybody’s gut reaction was ‘well, this won’t work,’ but then afterwards they just said ‘well, it works.’ ”

The theory became a reality earlier this year after Ubiquitilink launched their prototype satellites. They successfully made a two-way 2G connection between an ordinary ground device and the satellite, proving that the signal not only gets there and back, but that its Doppler and delay distortions can be rectified on the fly.

“Our first tests demonstrated that we offset the Doppler shift and time delay. Everything else is leveraging commercial software,” Miller said, though he quickly added: “To be clear, there’s plenty more work to be done, but it isn’t anything that’s new technology. It’s good solid hardcore engineering, building nanosats and that sort of thing.”

Since his previous company was Nanoracks and he’s been in the business for decades, he’s qualified to be confident on this part. It’ll be a lot of work and a lot of money, but they should be launching their first real satellites this summer. (And it’s all patented, he noted.)

Global roaming

The way the business will work is remarkably simple given the complexity of the product. Because the satellites operate on modified but mostly ordinary off-the-shelf software and connect to phones with no modifications necessary, Ubiquitilink will essentially work as a worldwide roaming operator that mobile networks will pay to access. (Disclosure: Verizon, obviously a mobile network, owns TechCrunch, and for all I know will use this tech eventually. It’s not involved with any editorial decisions.)

Normally, if you’re a subscriber of network X, and you’re visiting a country where X has no coverage, X will have an agreement with network Y, which connects you for a fee. There are hundreds of these deals in play at any given time, and Ubiquitilink would just be one more — except its coverage will eventually be global. Maybe you can’t reach X or Y; you’ll always be able to reach U.

The speeds and services available will depend on what mobile networks want. Not everyone wants or needs the same thing, of course, and a 3G fallback might be practical where an LTE connection is less so. But the common denominator will be data enough to send and receive text at the least.

It’s worth noting also that this connection will be in some crucial ways indistinguishable from other connections: it won’t affect encryption, for instance.

This will of course necessitate at least a thousand satellites, by Miller’s count. But in the meantime, limited service will also be available in the form of timed passes — you’ll have no signal for 55 minutes, then signal for five, during which you can send and receive what may be a critical text or location. This is envisioned as a specialty service at first, then as more satellites join the constellation, that window expands until it’s 24/7 and across the whole face of the planet, and it becomes a normal consumer good.

Emergency fallback

While your network provider will probably charge you the usual arm and leg for global roaming on demand (it’s their prerogative), there are some services Ubiquitilink will provide for free; the value of a global communication system is not lost on Miller.

“Nobody should ever die because the phone in their pocket doesn’t have signal,” he said. “If you break down in the middle of Death Valley you should be able to text 911. Our vision is this is a universal service for emergency responders and global E-911 texting. We’re not going to charge for that.”

An emergency broadcast system when networks are down is also being planned — power outages following disasters are times when people are likely to panic or be struck by a follow-up disaster like a tsunami or flooding, and reliable communications at those times could save thousands and vastly improve recovery efforts.

“We don’t want to make money off saving people’s lives, that’s just a benefit of implementing this system, and the way it should be,” Miller said.

It’s a whole lot of promises, but the team and the tech seem capable of backing them up. Initial testing is complete and birds are in the air — now it’s a matter of launching the next thousand or so.
https://techcrunch.com/2019/02/25/ub...tellite-phone/





Google is Bringing the Assistant to its Messages App
Jimmy Westenberg

The Google Assistant is everywhere. To access it on your Android phone, you can either long-press your phone’s home button, use the “Okay, Google” hotword, or, if you happen to own a Pixel 2 or 3, you can squeeze the sides of your phone to activate your Assistant.

Now, the Assistant is coming to Google’s Messages app.

In the coming months for English users, Messages (formerly Android Messages) will begin showing you suggestions relating to your ongoing conversations. If you and a friend are talking about a specific restaurant, a movie, or the weather conditions for instance, the Messages app will display Google Assistant links below the conversation so you can quickly access more information about that topic.

Google says Messages will utilize on-device AI for this feature, so no content from your conversations will be sent to Google — the only information transmitted is what appears in the Google Assistant link below your conversation.

At CES 2019, Google brought the Assistant to Google Maps, allowing users to ask for directions, make calls, and play music hands-free from within the app. Google says it’s seen a 15x increase in the number of queries asking to send messages and read incoming texts since the Assistant’s rollout in Google Maps. In the coming weeks, Google Assistant in Google Maps will roll out to all Assistant phone languages. If the Assistant is available in your phone’s language, stay tuned for that!
https://www.androidauthority.com/goo...ssages-958063/





Gab Wants to Add a Comments Section to Everything on the Internet

A tool called Dissenter lets you comment on tweets, websites and anything else with a URL.
Erin Carson

A new tool from fringe social network Gab aims to add a comments section to anything and everything on the internet.

Dissenter is a browser extension that lets users make comments on anything from Facebook pages to specific tweets and local news sites. Users can also up- and down-vote other comments. The comments are visible to anyone, but commenting requires a Gab account.

"This is basically adding a public square to every URL," Gab CEO Andrew Torba said in a 20-minute Periscope video.

Gab didn't immediately respond to a request for comment.

In the video, Torba walked through Dissenter and gave examples of places to leave comments. He also ran through the platform's other functions, including a trending news ticker, which is a discovery feature he compared to Pinterest, and a random button that will surface different pages.

Dissenter comes at a time when platforms are struggling with how to manage what gets posted on their services. In February, YouTube disabled comments for tens of millions of videos and also booted more than 400 channels for the comments that had been left. YouTube also said it reported illegal comments to law enforcement.

Also in February, Rotten Tomatoes changed its policy so users can't leave reviews on a movie that hasn't come out yet. The page for the upcoming Captain Marvel movie was already racking up negative reviews, despite not having hit theaters yet. Paul Yanover, the president of Fandango owner Rotten Tomatoes, said the change wasn't a direct result of the Captain Marvel issues.

Meanwhile, Gab has had its share of controversy. Built as an alternative to social media platforms like Twitter, the site lost its GoDaddy domain after reports surfaced that the suspect in the Pittsburgh synagogue shooting had made anti-Semitic posts there. Other services like Stripe and PayPal also dropped support for Gab. Gab markets itself as a bastion of free speech compared with Facebook and Twitter, and has become a popular place to express white nationalist, anti-Semitic and Islamophobic views.

In the video, Torba said he expected Dissenter would get banned from extension stores, but also said it's possible to install it without getting it from Google. Torba also mentioned Gab might build its own browser in the future that has Dissenter built in.

https://www.cnet.com/news/gab-wants-...-the-internet/





1TB microSD Cards are Now a Thing

That’s a lot of Switch games
Sam Byford

The inexorable march of increasing storage capacities continues today with the announcement of the world’s first 1-terabyte microSD cards. Micron and Western Digital’s SanDisk brand have both launched UHS-I microSDXC products today at Mobile World Congress, which will be good news for anyone looking at Samsung’s new 1TB Galaxy S10 Plus and thinking “what if that, but doubled?”

Of the two cards, Western Digital is claiming a performance advantage by citing up to 160MB/s read speed versus 100MB/s for Micron’s. The Micron card’s max write performance is 5MB/s faster at up to 95MB/s, however.

The SanDisk card will be available from April for $449.99, which is a pretty high convenience premium considering the new 512GB card in the same line will sell for $199.99. Pricing for the Micron card hasn’t yet been announced, but it’ll be out in the second quarter of this year.
https://www.theverge.com/circuitbrea...-price-release





microSD Express Unlocks Hyper-Fast Data Speeds for Mobile Devices

It opens up new possibilities for mobile gaming, video and virtual reality.
Steve Dent

The SD Association has unveiled microSD Express, a new format that will bring speeds of up to 985 MB/s to the tiny memory cards used in smartphones and other devices. Like SD Express, it exploits the NVMe 1.3 and PCIe 3.1 interfaces used in PCs to power high-speed SSDs. The tech is incorporated onto the second row of microSD pins, so the cards will work faster in next-gen devices while maintaining backward compatibility with current microSD tech.

PCIe 3.1 allows for low power sub-states, so the cards will not only offer much (much) higher transfer speeds, but consume less power than regular microSD cards. It'll also open up features like bus mastering, which lets memory cards and other components communicate without going through the CPU first.
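To put the headline figure in perspective, a quick back-of-the-envelope comparison: the SD Express spec rates its single PCIe 3.1 lane at 985 megabytes per second, while roughly 104 MB/s is my assumed bus ceiling for a typical current UHS-I card (real cards fall short of both limits).

```python
CARD_BYTES = 1e12  # a 1 TB card, using the decimal terabyte

# Theoretical bus speeds in MB/s; the UHS-I figure is an assumed typical ceiling.
speeds_mb_s = {
    "microSD Express (PCIe 3.1 x1)": 985,
    "UHS-I": 104,
}

for name, mb_s in speeds_mb_s.items():
    minutes = CARD_BYTES / (mb_s * 1e6) / 60
    print(f"{name}: about {minutes:.0f} minutes to fill a 1 TB card")
```

That works out to roughly 17 minutes versus well over two and a half hours to fill a 1 TB card — the difference between a coffee break and an afternoon.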

The prospect of having tiny, extremely fast 1TB memory cards has tantalizing potential for other gadgets, too. It could enable a new generation of smaller cameras or even smartphones that can capture RAW video, for instance. It would also be particularly handy for drones, letting them capture high-resolution video while carrying less weight.

Now, all we have to do is wait for memory companies to build microSD Express cards and device manufacturers to support them. Don't hold your breath, though. There are still few, if any, smartphones that support microSD UHS II cards, and that standard was first introduced in 2011.
https://www.engadget.com/2019/02/25/...e-data-speeds/





It’s Cool to Spool Again as the Cassette Returns on a Wave of Nostalgia

Sales are soaring and current stars are releasing tracks on the format… but is anyone actually listening to them?
Nosheen Iqbal

Pause. Stop. Rewind! The cassette, long consigned to the bargain bin of musical history, is staging a humble comeback. Sales have soared in the last year – up 125% in 2018 on the year before – amounting to more than 50,000 cassette albums bought in the UK, the highest volume in 15 years.

It’s quite a fall from the format’s peak in 1989 when 83 million cassettes were bought by British music fans, but when everyone from pop superstar Ariana Grande to punk duo Sleaford Mods are taking to tape, a mini revival seems afoot. But why?

“It’s the tangibility of having this collectible format and a way to play music that isn’t just a stream or download,” says techno DJ Phin, who has just released her first EP on cassette as label boss of Theory of Yesterday.

“I find them much more attractive than CDs. Tapes have a lifespan, and unlike digital music, there is decay and death. It’s like a living thing and that appeals to me.” Phin left the bulk of her own 100-strong cassette collection in Turkey, carefully stored at her parents’ home, but bought “20 or 25 really special ones” when she moved to London. “I’m from that generation,” she says. “It’s a nostalgia thing – I like the hiss.”

That familiar thunk, click and whir of a cassette being played in a stereo makes up the opening note of Calvin Harris and Dua Lipa’s global floorfiller One Kiss; the track was the UK’s biggest single of 2018 and spent eight weeks at No 1. Its digital “sleeve” was designed especially as a tape cover. Fast forward to last week, and British songwriter Jade Bird announced her new release as a limited edition cassette, Ariana Grande’s Thank U, Next is top of the tape chart (with 540 copies sold on cassette last week) and Urban Outfitters is selling four different kinds of cassette players to its primarily twentysomething audience. Hi-fi store Richer Sounds is selling two.

At the independent record store Rough Trade, marketing manager Emily Waller can’t say whether customers are actually playing the cassettes they buy. “It’s still a nice put-me-in-your-pocket keepsake or collectible for a fan. Old stuff is hip, right? We’ve seen through vinyl sales the increase in demand for these ‘retro’ formats, particularly among young people.”

Cassette culture is thriving in the electronic music, DIY and avant-garde scenes where labels such as Manchester-based Sacred Tapes and Ireland’s Fort Evil Fruit are pushing tapes to fans via online music platform Bandcamp. The cheap cost and fast turnaround of manufacturing cassettes is a key part of the appeal for those committed to the cause.

“Vinyl has got so expensive to manufacture these days, especially if it’s only a seven-inch you’re putting out. You’ll only lose money on a seven-inch release,” says Tallulah Webb, who runs cassette-only label Sad Club Records. “Cassettes are an exciting way to put music out, in the same way that seven-inch singles were exciting for punk. They have always been a crucial part of the DIY scene.”

Retromania is nothing new. The fondness for recycling pop culture’s past has become a defining marker of millennial culture: the industry for “nostalgia marketing” has boomed for brands selling to the under-35s. And so Instagram is awash with #vintage #cassette posts, and over on Etsy, a replica plastic Sony Walkman (one that can’t actually play any music) is being sold for £79.

But not everyone is keen. Peter Robinson, founder and editor of Popjustice, believes the trend for tapes is a gimmick too far. “Cassettes are the worst-ever music format, and I say that as someone who owns a Keane single on a USB stick,” he says. “I can understand the romance and the tactile appeal of the vinyl revival, but I’m actually quite amused by the audacity of anyone attempting to drum up some sense of nostalgia for a format that was barely tolerated in its supposed heyday. It’s like someone looked at the vinyl revival and said: what this needs is lower sound quality and even less convenience.”

Streaming now accounts for nearly two-thirds of music consumption in the UK and while demand for vinyl is up by 2,000% since the format’s low point in 2007, the market share for cassettes is tiny. Robinson is cynical about cassette sales whirring back to life.

“I think labels know full well that almost every cassette they sell is going straight on a shelf as some sort of dreadful plastic ornament,” he says. “I don’t think it’s much different to the recent trend for pop stars adding pairs of socks to their merchandise lines, the crucial difference being that, for better or worse, socks don’t count towards the album chart.”
https://www.theguardian.com/music/20...ro-chic-rewind





U.S. Music Industry Posts Third Straight Year of Double-Digit Growth as Streaming Soars 30%
Jem Aswad

The U.S. music industry posted its third consecutive year of double-digit growth, according to the RIAA’s year-end revenue report issued today.

The report notes that in 2018 U.S. recorded-music revenues rose 12% to their highest level in 10 years — $9.8 billion, up from $8.8 billion the previous year but still below 2007’s $10.7 billion. This was largely due to the boost in paid music subscriptions, which rose 42% to 50.2 million from 35.3 million the previous year (and 10.8 million in 2015), while streaming revenues soared 30% to $7.4 billion from $5.7 billion in 2017 (and $2.3 billion in 2015).

Total subscription revenues increased 32% to $5.4 billion, the report says. That figure includes $747 million in revenues from “limited tier” paid subscriptions (i.e. ones without full mobile or on-demand access, such as Amazon Prime and Pandora Plus).

Streaming revenues accounted for 75% of the total U.S. industry revenue, with physical accounting for 12%, digital downloads for 11% and synch for 3%.
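The percentages above can be sanity-checked against the dollar figures in the report. A quick sketch, using the rounded figures as printed in the article (so results can differ from the RIAA's own by a point):

```python
# Figures as reported in the article: revenues in $bn, subscriptions in millions.
total_2018, total_2017 = 9.8, 8.8
streaming_2018, streaming_2017 = 7.4, 5.7
subs_2018, subs_2017 = 50.2, 35.3

print(f"total revenue growth: {total_2018 / total_2017 - 1:.1%}")   # ~11% from rounded inputs; RIAA reports 12%
print(f"streaming growth:     {streaming_2018 / streaming_2017 - 1:.1%}")
print(f"subscription growth:  {subs_2018 / subs_2017 - 1:.1%}")
print(f"streaming share:      {streaming_2018 / total_2018:.1%}")
```

The streaming and subscription growth rates come out at about 30% and 42% respectively, and streaming's share of revenue at about 75%, matching the report.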

“Fifty million subscriptions illustrate fans’ unrivaled love for music and the way it shapes our identities and culture — and showcases an industry that has embraced the future and found a healthy path forward in the digital economy,” said Mitch Glazier, the RIAA’s new chairman/CEO, in a blog post. But he also notes, “Make no mistake, many challenges continue to confront our community. As noteworthy as it is for the business to approach $10 billion in revenues again, that only returns U.S. music to its 2007 levels. Stream-ripping, and a lack of accountability for many Big Tech companies that drive down the value of music, remain serious threats as the industry strives for additional growth.”

Indeed, how long this double-digit growth will continue remains to be seen. But as Glazier notes, “As our report illustrates, there are reasons to be excited for today and eager for tomorrow.”

Read the full report here.
https://variety.com/2019/biz/news/u-...30-1203152036/





“Hearing” the Hammond Organ
Kelly Hiser

The Hammond Organ was the first electronic musical instrument to become commercially successful. Just two years after it went on sale in 1935, major radio stations and Hollywood studios, hundreds of individuals, and over 2,500 churches had purchased a Hammond. The instrument had a major impact on the soundscape of both popular and religious musical life in the U.S., but it has been largely ignored by electronic music historians. Like the Telharmonium and theremin, whose own popular pasts are not widely known, the Hammond’s early history has much to teach us about how American audiences first encountered and understood electronic musical sound.

In fact, the Federal Trade Commission held an entire hearing in 1937 to evaluate the Hammond’s sonority. The Commission sought to determine whether a series of advertising claims about the Hammond’s timbre were “deceptive, misleading and false.” Though many of the hearing’s participants believed their testimony would go down in history as an important reckoning of what constituted “real” and “good” musical sound, the affair is largely forgotten today. What the hearing does offer is an unusually detailed record of contemporaneous arguments over the quality and value of a new electronic sound.

The Sacred Hammond Organ

The Hammond Organ became an immediate success when it hit the market in April 1935. By the end of the year, the Hammond Clock Company (soon to change its name to the Hammond Instrument Company) had sold over 800 units; the following year, Hammond added 300 new manufacturing jobs. Profits topped $100k by the end of 1936 and more than tripled the following year, during the height of the Great Depression.

Religious institutions made up the largest share of Hammond buyers, accounting for half of all sales in the 1930s. Advertisements targeted churches on a budget, highlighting practical concerns like the Hammond’s low cost and ease of installation. Ads claimed that all could now own an instrument that produced “fine” organ music, with tones comparable to that of a “concert” or “cathedral” organ. At $1,250, a standard Hammond installation was roughly double the price of a new Chevrolet sedan in 1935, but markedly less than all but the most modest pipe organ installations.

Hammond ad that ran in Christian Herald and Extension Magazine, early 1936

Religious organizations of all types bought Hammonds, from mainstream American Protestants to Catholics, as well as newer denominations like Evangelical churches. In general, though, congregations owning a Hammond in the 1930s tended to be smaller, poorer, and more rural than those that possessed a pipe organ. Many Hammond buyers had apparently not been in the market for an organ at all until the Hammond became available.

Advertising that pitted the Hammond against pipe organs stoked outrage among a small but well-organized community of pipe organ performers, designers, and manufacturers. Many of the community’s most prominent members registered their displeasure in trade journals like The American Organist. In a January 1936 survey printed in the journal, renowned organ designer Emerson Richards derided the Hammond’s sound as “hollow, dry, and dead.” In his response to the same survey, Chicago pipe organist and historian William Barnes declared that the Hammond would “not bear direct comparison with the tone of any natural musical instrument.”

Barnes and Richards spoke on behalf of an industry that had entered the Great Depression already in turmoil. Organ business boomed during the silent film era, when nearly every existing pipe organ manufacturer and many new enterprises rushed to meet demand for theater organs, but talkies decimated that market by the early 1930s. When the Hammond arrived on the scene, the pipe organ industry was reeling. Although no surviving evidence suggests that the Hammond made a measurable impact on pipe organ sales, men like Barnes and Richards quickly adopted the new electronic instrument as a convenient scapegoat for the industry’s ills.

The Federal Trade Commission Gets Involved

Eight months after the Hammond went on sale, Emerson Richards petitioned the Federal Trade Commission to act against the Hammond Clock Company. In 1936 the Commission issued a formal complaint accusing the company of advertising claims that “unfairly diverted” trade away from its pipe organ competitors. The claims at issue ranged from the concrete (the Hammond’s price point) to the nebulous (its suitability for the performance of “great works”), but the complaint and resultant hearing centered around one question: did this new electronic instrument sound “as good as” a pipe organ?

To find an answer, defense and prosecution attorneys assembled a gallery of witnesses—pipe organ experts, performers, Hammond employees, even a physicist who traveled from the University of Texas to Chicago for the hearing. Both Richards and Barnes (quoted above) acted as consultants for the Commission’s prosecutor and testified as expert witnesses. Together with the prosecuting attorney, the men performed multiple tests and demonstrations. In one challenge, “expert” human listeners tried to differentiate between a Hammond and a $75,000 pipe organ in the University of Chicago’s Rockefeller Memorial Chapel. For another series of “machine” tests, physicist Dr. Charles Boner took electrical measurements of Hammond and pipe organ tones using an instrument of his own design that identified and measured the intensity of each of the various harmonics present in a single pitch. Boner then plotted the results in a series of 36 charts that visually depicted the amplitude of each harmonic present in a given tone.
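Boner's instrument was, in effect, a hardware harmonic analyzer; the modern software equivalent of his measurement is a discrete Fourier transform evaluated at each harmonic of a tone's fundamental. A minimal sketch of the idea — the 220 Hz test tone with a quieter second harmonic is an invented example, not a tone from the hearing:

```python
import math

def harmonic_magnitudes(samples, sample_rate, f0, n_harmonics):
    """Single-bin DFT at each harmonic of f0; returns the peak amplitude of each."""
    n = len(samples)
    mags = []
    for h in range(1, n_harmonics + 1):
        f = f0 * h
        re = sum(s * math.cos(2 * math.pi * f * i / sample_rate) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * i / sample_rate) for i, s in enumerate(samples))
        mags.append(2 * math.hypot(re, im) / n)
    return mags

# Synthesize one second of a 220 Hz tone plus a second harmonic at 0.3 amplitude.
sr, f0 = 8000, 220.0
tone = [math.sin(2 * math.pi * f0 * i / sr) + 0.3 * math.sin(2 * math.pi * 2 * f0 * i / sr)
        for i in range(sr)]

for h, amp in enumerate(harmonic_magnitudes(tone, sr, f0, 3), start=1):
    print(f"harmonic {h}: amplitude ~ {amp:.2f}")
```

Run on the synthetic tone, the analysis recovers the 1.0 and 0.3 amplitudes of the first two harmonics and near-zero for the third — precisely the kind of per-harmonic intensity profile Boner plotted in his 36 charts.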

Both sides attempted to spin the battery of test results in their favor. Their testimony was acrimonious and lengthy, and filled nearly three thousand pages, preserved in the Chicago History Museum and the National Archives. Evidence, correspondence, attorney’s briefs, and the like add another thousand pages to the record. Those on the side of the pipe organ interpreted the visual differences between pipe and Hammond organ tones in Boner’s charts as scientific proof of the superiority of pipe organ tones. Meanwhile, Laurens Hammond maintained the human ear could not detect the differences registered by Boner’s instrument. When Hammond held up the lackluster listening test results of expert listeners for a series of excerpts played by a Hammond employee, the prosecution accused them of deliberately registering the pipe organ used for the test to imitate the Hammond, rather than the other way around.

Other Hearings

In hundreds of hours of testimony, not once did the Commission consider the experiences of performers and congregations who used the Hammond Organ for worship. When the Hammond defense attempted to submit statements of support from church organists across the country, the Commission rejected them on the grounds that “testimonials are not good evidence.”

The hearing’s final set of demonstrations starkly illustrate this exclusivity. The Commission traveled to Atlantic City, New Jersey, to compare two organs: a famous pipe organ at Boardwalk Hall (designed by Emerson Richards himself) and the only nearby Hammond Organ, located at the African-American St. Augustine’s Episcopal Church. At St. Augustine’s, Richards played a series of scales and chords using the preset sounds of the Hammond Organ; he repeated this process, playing registrations that corresponded to the Hammond’s preset timbres, on the organ at Boardwalk Hall. Later, Richards and the famous organist Charles Courboin, who was present for the demonstrations, testified on the differences they perceived between the organs.

An unnamed figure appears in the testimony, whom Richards describes only as “a young colored man who sometimes plays the organ” at the church. For a brief moment, then, the hearing’s participants encountered a musician who actually played the Hammond for church services. Here was someone who might have provided testimony grounded in lived experience with the new instrument. But the lived musical life and opinions of this unidentified organist were irrelevant to the men running the hearing. He is mentioned just once in the recorded testimony. He does not speak or act. His body registered, briefly, in the official record, but his opinions—and his name—did not.

Though this organist and his musical practices failed to register as matters of concern, his presence in the record is silent testimony to the Hammond’s existence and impact outside the context of the Commission’s hearing. Historian Ashon Crawley, who is currently at work on a book on the role of the Hammond in Black Pentecostal churches, notes that the instrument’s sound was taken up as the sound of the Black Pentecostal movement itself where it was heard as a “human-like” voice. Crawley sketches an understanding of the Hammond’s sound in the Black Pentecostal church that is rich with religious and social meaning, an understanding far more complex and profound than the white men involved in the Commission’s Hammond hearing were able to imagine.

A Hollow Victory?

In July of 1938, when five Commissioners met to rule on the hearing, they agreed with the prosecution and its witnesses on nearly all points. The Commissioners ordered Hammond to cease making a number of claims, including that the Hammond could produce sonorities equivalent to a pipe organ’s, that it could “properly” render “the great works of classical organ literature,” and that it was comparable to a “$10,000 pipe organ.”

And yet, after hundreds of hours of testimony, tests, and demonstrations, the decision mattered little. Both sides claimed victory, but the hearing appeared to have no appreciable impact on either. Indeed, the Hammond seemed to expand rather than invade the pipe organ market. Until World War II, when both the Hammond Company and the pipe organ industry converted to war-time operations, Hammonds continued to sell at a brisk pace. And after a low point in 1935, a slow recovery began in the pipe organ industry, with sales and the number of pipe organ firms on the rise by the end of the decade.

Defenders of the pipe organ failed to grasp that not every church wanted or needed a pipe organ, and that the Hammond was actually well suited to many religious musical practices. We can only imagine what other kinds of testimony would exist had the Commission considered the experiences and opinions of the congregations who worshiped with a Hammond. The exclusivity was the point, though. It was the Hammond’s marketing intrusion into elite pipe organ spaces and practices that precipitated the complaint, and that elite ground was the only one the pipe organ experts believed was worth defending.

The dismissal of supposed non-expert Hammond players and listeners in the Commission’s hearing mirrors electronic music history’s failure to register the histories of early electronic instruments. Because instruments like the theremin and the Hammond first circulated in spaces outside of elite compositional spheres, their performers, practices, and receptions seem to exist outside of history altogether. In both the Hammond’s hearing and mainstream electronic music history, the cordoning off of everyday and popular music experiences drastically narrows our understanding of how electronic musical sound became meaningful and why it matters.
https://nmbx.newmusicusa.org/hearing-the-hammond-organ/





Starbucks' Music is Driving Employees Nuts. A Writer Says It's a Workers' Rights Issue

Adam Johnson compared use of music at businesses to tactics used at Guantanamo
Kirsten Fenn

You may not give a second thought to the tunes spinning on a constant loop at your favourite café or coffee shop, but one writer and podcaster who had to listen to repetitive music for years while working in bars and restaurants argues it's a serious workers' rights issue.

"[It's] the same system that's used to … flood people out of, you know, the Branch Davidian in Waco or was used on terror suspects in Guantanamo — they use the repetition of music," Adam Johnson told The Current's Anna Maria Tremonti.

"I'm not suggesting that working at Applebee's is the same as being at Guantanamo, but the principle's the same."

Earlier this year, irritated Starbucks employees took to Reddit to rage about how they had to listen to the same songs from the Broadway hit musical Hamilton on repeat while on the job. One user wrote that if they heard a Hamilton song one more time, "I'm getting a ladder and ripping out all of our speakers from the ceiling."

Johnson argues it wouldn't take years of research to understand that "yes, playing the same music over and over again has a deleterious effect on one's mental well-being."

Snitch line for complaints?

As a solution, he suggested health inspectors could enforce better working conditions, or a tip line could be created for people to report poor working conditions, like repetitive music.

Another solution? Communication, says neuroscientist Jessica Grahn.

Grahn studies music, which research has shown to be one of the strongest influencers of mood. It can calm dementia patients struggling with depression or anger, or increase our endurance when we're working out.

However, there are downsides to the power of music. Unlike how we can close our eyes to things we don't want to see, we can't close our ears to sound.

Control makes a difference

"So it can be a very effective way of the external environment impinging, without our control, on our sensory processing," Grahn said.

"Because we can't close our ears, it's very effective if somebody else has control of our sonic environment. We can do nothing about that, and that can be pretty debilitating."

Having control over one's environment can make a big difference, said Grahn, which is why she recommends employers and employees talk about why certain music is being played, or what they can do to switch things up.

"I think when people have input or a sense of being listened to, that control actually makes a very big difference in their response to what they're listening to."
https://www.cbc.ca/radio/thecurrent/...ssue-1.5028163





The Trauma Floor

The secret lives of Facebook moderators in America
Casey Newton

Content warning: This story contains discussion of serious mental health issues and racism.

The panic attacks started after Chloe watched a man die.

She spent the past three and a half weeks in training, trying to harden herself against the daily onslaught of disturbing posts: the hate speech, the violent attacks, the graphic pornography. In a few more days, she will become a full-time Facebook content moderator, or what the company she works for, a professional services vendor named Cognizant, opaquely calls a “process executive.”

For this portion of her education, Chloe will have to moderate a Facebook post in front of her fellow trainees. When it’s her turn, she walks to the front of the room, where a monitor displays a video that has been posted to the world’s largest social network. None of the trainees have seen it before, Chloe included. She presses play.

The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed. She knows that section 13 of the Facebook community standards prohibits videos that depict the murder of one or more people. When Chloe explains this to the class, she hears her voice shaking.

Returning to her seat, Chloe feels an overpowering urge to sob. Another trainee has gone up to review the next post, but Chloe cannot concentrate. She leaves the room, and begins to cry so hard that she has trouble breathing.

No one tries to comfort her. This is the job she was hired to do. And for the 1,000 people like Chloe moderating content for Facebook at the Phoenix site, and for 15,000 content reviewers around the world, today is just another day at the office.

Over the past three months, I interviewed a dozen current and former employees of Cognizant in Phoenix. All had signed non-disclosure agreements with Cognizant in which they pledged not to discuss their work for Facebook — or even acknowledge that Facebook is Cognizant’s client. The shroud of secrecy is meant to protect employees from users who may be angry about a content moderation decision and seek to resolve it with a known Facebook contractor. The NDAs are also meant to prevent contractors from sharing Facebook users’ personal information with the outside world, at a time of intense scrutiny over data privacy issues.

But the secrecy also insulates Cognizant and Facebook from criticism about their working conditions, moderators told me. They are pressured not to discuss the emotional toll that their job takes on them, even with loved ones, leading to increased feelings of isolation and anxiety. To protect them from potential retaliation, both from their employers and from Facebook users, I agreed to use pseudonyms for everyone named in this story except Cognizant’s vice president of operations for business process services, Bob Duncan, and Facebook’s director of global partner vendor management, Mark Davidson.

Collectively, the employees described a workplace that is perpetually teetering on the brink of chaos. It is an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions. It’s a place where employees can be fired for making just a few errors a week — and where those who remain live in fear of the former colleagues who return seeking vengeance.

It’s a place where, in stark contrast to the perks lavished on Facebook employees, team leaders micromanage content moderators’ every bathroom and prayer break; where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers; where people develop severe anxiety while still in training, and continue to struggle with trauma symptoms long after they leave; and where the counseling that Cognizant offers them ends the moment they quit — or are simply let go.

KEY FINDINGS

• Moderators in Phoenix will make just $28,800 per year — while the average Facebook employee has a total compensation of $240,000.
• In stark contrast to the perks lavished on Facebook employees, team leaders micro-manage content moderators’ every bathroom break. Two Muslim employees were ordered to stop praying during their nine minutes per day of allotted “wellness time.”
• Employees can be fired after making just a handful of errors a week, and those who remain live in fear of former colleagues returning to seek vengeance. One man we spoke with started bringing a gun to work to protect himself.
• Employees have been found having sex inside stairwells and a room reserved for lactating mothers, in what one employee describes as “trauma bonding.”
• Moderators cope with seeing traumatic images and videos by telling dark jokes about committing suicide, then smoking weed during breaks to numb their emotions. Moderators are routinely high at work.
• Employees are developing PTSD-like symptoms after they leave the company, but are no longer eligible for any support from Facebook or Cognizant.
• Employees have begun to embrace the fringe viewpoints of the videos and memes that they are supposed to moderate. The Phoenix site is home to a flat Earther and a Holocaust denier. A former employee tells us he no longer believes 9/11 was a terrorist attack.

The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

Chloe cries for a while in the break room, and then in the bathroom, but begins to worry that she is missing too much training. She had been frantic for a job when she applied, as a recent college graduate with no other immediate prospects. When she becomes a full-time moderator, Chloe will make $15 an hour — $4 more than the minimum wage in Arizona, where she lives, and better than she can expect from most retail jobs.

The tears eventually stop coming, and her breathing returns to normal. When she goes back to the training room, one of her peers is discussing another violent video. She sees that a drone is shooting people from the air. Chloe watches the bodies go limp as they die.

She leaves the room again.

Eventually a supervisor finds her in the bathroom, and offers a weak hug. Cognizant makes a counselor available to employees, but only for part of the day, and he has yet to get to work. Chloe waits for him for the better part of an hour.

When the counselor sees her, he explains that she has had a panic attack. He tells her that, when she graduates, she will have more control over the Facebook videos than she had in the training room. You will be able to pause the video, he tells her, or watch it without audio. Focus on your breathing, he says. Make sure you don’t get too caught up in what you’re watching.

”He said not to worry — that I could probably still do the job,” Chloe says. Then she catches herself: “His concern was: don’t worry, you can do the job.”

On May 3, 2017, Mark Zuckerberg announced the expansion of Facebook’s “community operations” team. The new employees, who would be added to 4,500 existing moderators, would be responsible for reviewing every piece of content reported for violating the company’s community standards. By the end of 2018, in response to criticism of the prevalence of violent and exploitative content on the social network, Facebook had more than 30,000 employees working on safety and security — about half of whom were content moderators.

The moderators include some full-time employees, but Facebook relies heavily on contract labor to do the job. Ellen Silver, Facebook’s vice president of operations, said in a blog post last year that the use of contract labor allowed Facebook to “scale globally” — to have content moderators working around the clock, evaluating posts in more than 50 languages, at more than 20 sites around the world.

The use of contract labor also has a practical benefit for Facebook: it is radically cheaper. The median Facebook employee earns $240,000 annually in salary, bonuses, and stock options. A content moderator working for Cognizant in Arizona, on the other hand, will earn just $28,800 per year. The arrangement helps Facebook maintain a high profit margin. In its most recent quarter, the company earned $6.9 billion in profits, on $16.9 billion in revenue. And while Zuckerberg had warned investors that Facebook’s investment in security would reduce the company’s profitability, profits were up 61 percent over the previous year.

Since 2014, when Adrian Chen detailed the harsh working conditions for content moderators at social networks for Wired, Facebook has been sensitive to the criticism that it is traumatizing some of its lowest-paid workers. In her blog post, Silver said that Facebook assesses potential moderators’ “ability to deal with violent imagery,” screening them for their coping skills.

Bob Duncan, who oversees Cognizant’s content moderation operations in North America, says recruiters carefully explain the graphic nature of the job to applicants. “We share examples of the kinds of things you can see … so that they have an understanding,” he says. “The intention of all that is to ensure people understand it. And if they don’t feel that work is potentially suited for them based on their situation, they can make those decisions as appropriate.”

Until recently, most Facebook content moderation has been done outside the United States. But as Facebook’s demand for labor has grown, it has expanded its domestic operations to include sites in California, Arizona, Texas, and Florida.

The United States is the company’s home and one of the countries in which it is most popular, says Facebook’s Davidson. American moderators are more likely to have the cultural context necessary to evaluate U.S. content that may involve bullying and hate speech, which often involve country-specific slang, he says.

Facebook also worked to build what Davidson calls “state-of-the-art facilities, so they replicated a Facebook office and had that Facebook look and feel to them. That was important because there’s also a perception out there in the market sometimes … that our people sit in very dark, dingy basements, lit only by a green screen. That’s really not the case.”

It is true that Cognizant’s Phoenix location is neither dark nor dingy. And to the extent that it offers employees desks with computers on them, it may faintly resemble other Facebook offices. But while employees at Facebook’s Menlo Park headquarters work in an airy, sunlit complex designed by Frank Gehry, its contractors in Arizona labor in an often cramped space where long lines for the few available bathroom stalls can take up most of employees’ limited break time. And while Facebook employees enjoy a wide degree of freedom in how they manage their days, Cognizant workers’ time is managed down to the second.

A content moderator named Miguel arrives for the day shift just before it begins, at 7 a.m. He’s one of about 300 workers who will eventually filter into the workplace, which occupies two floors in a Phoenix office park.

Security personnel keep watch over the entrance, on the lookout for disgruntled ex-employees and Facebook users who might confront moderators over removed posts. Miguel badges in to the office and heads to the lockers. There are barely enough lockers to go around, so some employees have taken to keeping items in them overnight to ensure they will have one the next day.

The lockers occupy a narrow hallway that, during breaks, becomes choked with people. To protect the privacy of the Facebook users whose posts they review, workers are required to store their phones in lockers while they work.

Writing utensils and paper are also not allowed, in case Miguel might be tempted to write down a Facebook user’s personal information. This policy extends to small paper scraps, such as gum wrappers. Smaller items, like hand lotion, are required to be placed in clear plastic bags so they are always visible to managers.

To accommodate four daily shifts — and high employee turnover — most people will not be assigned a permanent desk on what Cognizant calls “the production floor.” Instead, Miguel finds an open workstation and logs in to a piece of software known as the Single Review Tool, or SRT. When he is ready to work, he clicks a button labeled “resume reviewing,” and dives into the queue of posts.

Last April, a year after many of the documents had been published in the Guardian, Facebook made public the community standards by which it attempts to govern its 2.3 billion monthly users. In the months afterward, Motherboard and Radiolab published detailed investigations into the challenges of moderating such a vast amount of speech.

Those challenges include the sheer volume of posts; the need to train a global army of low-paid workers to consistently apply a single set of rules; near-daily changes and clarifications to those rules; a lack of cultural or political context on the part of the moderators; missing context in posts that makes their meaning ambiguous; and frequent disagreements among moderators about whether the rules should apply in individual cases.

Despite the high degree of difficulty in applying such a policy, Facebook has instructed Cognizant and its other contractors to emphasize a metric called “accuracy” over all else. Accuracy, in this case, means that when Facebook audits a subset of contractors’ decisions, its full-time employees agree with the contractors. The company has set an accuracy target of 95 percent, a number that always seems just out of reach. Cognizant has never hit the target for a sustained period of time — it usually floats in the high 80s or low 90s, and was hovering around 92 at press time.

Miguel diligently applies the policy — even though, he tells me, it often makes no sense to him.

A post calling someone “my favorite n-----” is allowed to stay up, because under the policy it is considered “explicitly positive content.”

“Autistic people should be sterilized” seems offensive to him, but it stays up as well. Autism is not a “protected characteristic” the way race and gender are, and so it doesn’t violate the policy. (“Men should be sterilized” would be taken down.)

In January, Facebook distributes a policy update stating that moderators should take into account recent romantic upheaval when evaluating posts that express hatred toward a gender. “I hate all men” has always violated the policy. But “I just broke up with my boyfriend, and I hate all men” no longer does.

Miguel works the posts in his queue. They arrive in no particular order at all.

Here is a racist joke. Here is a man having sex with a farm animal. Here is a graphic video of murder recorded by a drug cartel. Some of the posts Miguel reviews are on Facebook, where he says bullying and hate speech are more common; others are on Instagram, where users can post under pseudonyms, and tend to share more violence, nudity, and sexual activity.

Each post presents Miguel with two separate but related tests. First, he must determine whether a post violates the community standards. Then, he must select the correct reason why it violates the standards. If he accurately recognizes that a post should be removed, but selects the “wrong” reason, this will count against his accuracy score.

Miguel is very good at his job. He will take the correct action on each of these posts, striving to purge Facebook of its worst content while protecting the maximum amount of legitimate (if uncomfortable) speech. He will spend less than 30 seconds on each item, and he will do this up to 400 times a day.

When Miguel has a question, he raises his hand, and a “subject matter expert” (SME) — a contractor expected to have more comprehensive knowledge of Facebook’s policies, who makes $1 more per hour than Miguel does — will walk over and assist him. This will cost Miguel time, though, and while he does not have a quota of posts to review, managers monitor his productivity, and ask him to explain himself when the number slips into the 200s.

From Miguel’s 1,500 or so weekly decisions, Facebook will randomly select 50 or 60 to audit. These posts will be reviewed by a second Cognizant employee — a quality assurance worker, known internally as a QA, who also makes $1 per hour more than Miguel. Full-time Facebook employees then audit a subset of QA decisions, and from these collective deliberations, an accuracy score is generated.

Miguel takes a dim view of the accuracy figure.

“Accuracy is only judged by agreement. If me and the auditor both allow the obvious sale of heroin, Cognizant was ‘correct,’ because we both agreed,” he says. “This number is fake.”

Facebook’s single-minded focus on accuracy developed after sustaining years of criticism over its handling of moderation issues. With billions of new posts arriving each day, Facebook feels pressure on all sides. In some cases, the company has been criticized for not doing enough — as when United Nations investigators found that it had been complicit in spreading hate speech during the genocide of the Rohingya community in Myanmar. In others, it has been criticized for overreach — as when a moderator removed a post that excerpted the Declaration of Independence. (Thomas Jefferson was ultimately granted a posthumous exemption to Facebook’s speech guidelines, which prohibit the use of the phrase “Indian savages.”)

One reason moderators struggle to hit their accuracy target is that for any given policy enforcement decision, they have several sources of truth to consider.

The canonical source for enforcement is Facebook’s public community guidelines — which consist of two sets of documents: the publicly posted ones, and the longer internal guidelines, which offer more granular detail on complex issues. These documents are further augmented by a 15,000-word secondary document, called “Known Questions,” which offers additional commentary and guidance on thorny questions of moderation — a kind of Talmud to the community guidelines’ Torah. Known Questions used to occupy a single lengthy document that moderators had to cross-reference daily; last year it was incorporated into the internal community guidelines for easier searching.

A third major source of truth is the discussions moderators have among themselves. During breaking news events, such as a mass shooting, moderators will try to reach a consensus on whether a graphic image meets the criteria to be deleted or marked as disturbing. But sometimes they reach the wrong consensus, moderators said, and managers have to walk the floor explaining the correct decision.

The fourth source is perhaps the most problematic: Facebook’s own internal tools for distributing information. While official policy changes typically arrive every other Wednesday, incremental guidance about developing issues is distributed on a near-daily basis. Often, this guidance is posted to Workplace, the enterprise version of Facebook that the company introduced in 2016. Like Facebook itself, Workplace has an algorithmic News Feed that displays posts based on engagement. During a breaking news event, such as a mass shooting, managers will often post conflicting information about how to moderate individual pieces of content, which then appear out of chronological order on Workplace. Six current and former employees told me that they had made moderation mistakes based on seeing an outdated post at the top of their feed. At times, it feels as if Facebook’s own product is working against them. The irony is not lost on the moderators.

“It happened all the time,” says Diana, a former moderator. “It was horrible — one of the worst things I had to personally deal with, to do my job properly.” During times of national tragedy, such as the 2017 Las Vegas shooting, managers would tell moderators to remove a video — and then, in a separate post a few hours later, to leave it up. The moderators would make a decision based on whichever post Workplace served up.

“It was such a big mess,” Diana says. “We’re supposed to be up to par with our decision making, and it was messing up our numbers.”

Workplace posts about policy changes are supplemented by occasional slide decks that are shared with Cognizant workers about special topics in moderation — often tied to grim anniversaries, such as the Parkland shooting. But these presentations and other supplementary materials often contain embarrassing errors, moderators told me. Over the past year, communications from Facebook incorrectly identified certain U.S. representatives as senators; misstated the date of an election; and gave the wrong name for the high school at which the Parkland shooting took place. (It is Marjory Stoneman Douglas High School, not “Stoneham Douglas High School.”)

Even with an ever-changing rulebook, moderators are granted only the slimmest margins of error. The job resembles a high-stakes video game in which you start out with 100 points — a perfect accuracy score — and then scratch and claw to keep as many of those points as you can. Because once you fall below 95, your job is at risk.

If a quality assurance manager marks Miguel’s decision wrong, he can appeal the decision. Getting the QA to agree with you is known as “getting the point back.” In the short term, an “error” is whatever a QA says it is, and so moderators have good reason to appeal every time they are marked wrong. (Recently, Cognizant made it even harder to get a point back, by requiring moderators to first get a SME to approve their appeal before it would be forwarded to the QA.)

Sometimes, questions about confusing subjects are escalated to Facebook. But every moderator I asked about this said that Cognizant managers discourage employees from raising issues to the client, apparently out of fear that too many questions would annoy Facebook.

This has resulted in Cognizant inventing policy on the fly. When the community standards did not explicitly prohibit erotic asphyxiation, three former moderators told me, a team leader declared that images depicting choking would be permitted unless the fingers depressed the skin of the person being choked.

Before workers are fired, they are offered coaching and placed into a remedial program designed to make sure they master the policy. But often this serves as a pretext for managing workers out of the job, three former moderators told me. Other times, contractors who have missed too many points will escalate their appeals to Facebook for a final decision. But the company does not always get through the backlog of requests before the employee in question is fired, I was told.

Officially, moderators are prohibited from approaching QAs and lobbying them to reverse a decision. But it is still a regular occurrence, two former QAs told me.

One, named Randy, would sometimes return to his car at the end of a work day to find moderators waiting for him. Five or six times over the course of a year, someone would attempt to intimidate him into changing his ruling. “They would confront me in the parking lot and tell me they were going to beat the shit out of me,” he says. “There wasn’t even a single instance where it was respectful or nice. It was just, You audited me wrong! That was a boob! That was full areola, come on man!”

Fearing for his safety, Randy began bringing a concealed gun to work. Fired employees regularly threatened to return to work and harm their old colleagues, and Randy believed that some of them were serious. A former coworker told me she was aware that Randy brought a gun to work, and approved of it, fearing on-site security would not be sufficient in the case of an attack.

Cognizant’s Duncan told me the company would investigate the various safety and management issues that moderators had disclosed to me. He said bringing a gun to work was a violation of policy and that, had management been aware of it, they would have intervened and taken action against the employee.

Randy quit after a year. He never had occasion to fire the gun, but his anxiety lingers.

“Part of the reason I left was how unsafe I felt in my own home and my own skin,” he says.

Before Miguel can take a break, he clicks a browser extension to let Cognizant know he is leaving his desk. (“That’s a standard thing in this type of industry,” Facebook’s Davidson tells me. “To be able to track, so you know where your workforce is.”)

Miguel is allowed two 15-minute breaks, and one 30-minute lunch. During breaks, he often finds long lines for the restrooms. Hundreds of employees share just one urinal and two stalls in the men’s room, and three stalls in the women’s. Cognizant eventually allowed employees to use a restroom on another floor, but getting there and back will take Miguel precious minutes. By the time he has used the restroom and fought the crowd to his locker, he might have five minutes to look at his phone before returning to his desk.

Miguel is also allotted nine minutes per day of “wellness time,” which he is supposed to use if he feels traumatized and needs to step away from his desk. Several moderators told me that they routinely used their wellness time to go to the restroom when lines were shorter. But management eventually realized what they were doing, and ordered employees not to use wellness time to relieve themselves. (Recently a group of Facebook moderators hired through Accenture in Austin complained about “inhumane” conditions related to break periods; Facebook attributed the issue to a misunderstanding of its policies.)

At the Phoenix site, Muslim workers who used wellness time to perform one of their five daily prayers were told to stop the practice and do it on their other break time instead, current and former employees told me. It was unclear to the employees I spoke with why their managers did not consider prayer to be a valid use of the wellness program. (Cognizant did not offer a comment about these incidents, although a person familiar with one case told me a worker requested more than 40 minutes for daily prayer, which the company considered excessive.)

Cognizant employees are told to cope with the stress of the jobs by visiting counselors, when they are available; by calling a hotline; and by using an employee assistance program, which offers a handful of therapy sessions. More recently, yoga and other therapeutic activities have been added to the work week. But aside from occasional visits to the counselor, six employees I spoke with told me they found these resources inadequate. They told me they coped with the stress of the job in other ways: with sex, drugs, and offensive jokes.

Among the places that Cognizant employees have been found having sex at work: the bathroom stalls, the stairwells, the parking garage, and the room reserved for lactating mothers. In early 2018, the security team sent out a memo to managers alerting them to the behavior, a person familiar with the matter told me. The solution: management removed door locks from the mother’s room and from a handful of other private rooms. (The mother’s room now locks again, but would-be users must first check out a key from an administrator.)

A former moderator named Sara said that the secrecy around their work, coupled with the difficulty of the job, forged strong bonds between employees. “You get really close to your coworkers really quickly,” she says. “If you’re not allowed to talk to your friends or family about your job, that’s going to create some distance. You might feel closer to these people. It feels like an emotional connection, when in reality you’re just trauma bonding.”

Employees also cope using drugs and alcohol, both on and off campus. One former moderator, Li, told me he used marijuana on the job almost daily, through a vaporizer. During breaks, he says, small groups of employees often head outside and smoke. (Medical marijuana use is legal in Arizona.)

“I can’t even tell you how many people I’ve smoked with,” Li says. “It’s so sad, when I think back about it — it really does hurt my heart. We’d go down and get stoned and go back to work. That’s not professional. Knowing that the content moderators for the world’s biggest social media platform are doing this on the job, while they are moderating content …”

He trailed off.

Li, who worked as a moderator for about a year, was one of several employees who said the workplace was rife with pitch-black humor. Employees would compete to send each other the most racist or offensive memes, he said, in an effort to lighten the mood. As an ethnic minority, Li was a frequent target of his coworkers, and he embraced what he saw as good-natured racist jokes at his expense, he says.

But over time, he grew concerned for his mental health.

“We were doing something that was darkening our soul — or whatever you call it,” he says. “What else do you do at that point? The one thing that makes us laugh is actually damaging us. I had to watch myself when I was joking around in public. I would accidentally say [offensive] things all the time — and then be like, Oh shit, I’m at the grocery store. I cannot be talking like this.”

Jokes about self-harm were also common. “Drinking to forget,” Sara heard a coworker once say, when the counselor asked him how he was doing. (The counselor did not invite the employee in for further discussion.) On bad days, Sara says, people would talk about it being “time to go hang out on the roof” — the joke being that Cognizant employees might one day throw themselves off it.

One day, Sara said, moderators looked up from their computers to see a man standing on top of the office building next door. Most of them had watched hundreds of suicides that began just this way. The moderators got up and hurried to the windows.

The man didn’t jump, though. Eventually everyone realized that he was a fellow employee, taking a break.

Like most of the former moderators I spoke with, Chloe quit after about a year.

Among other things, she had grown concerned about the spread of conspiracy theories among her colleagues. One QA often discussed with colleagues his belief that the Earth is flat, and “was actively trying to recruit other people” into believing, another moderator told me. One of Miguel’s colleagues once referred casually to “the Holohoax,” in what Miguel took as a signal that the man was a Holocaust denier.

Conspiracy theories were often well received on the production floor, six moderators told me. After the Parkland shooting last year, moderators were initially horrified by the attacks. But as more conspiracy content was posted to Facebook and Instagram, some of Chloe’s colleagues began expressing doubts.

“People really started to believe these posts they were supposed to be moderating,” she says. “They were saying, ‘Oh gosh, they weren’t really there. Look at this CNN video of David Hogg — he’s too old to be in school.’ People started Googling things instead of doing their jobs and looking into conspiracy theories about them. We were like, ‘Guys, no, this is the crazy stuff we’re supposed to be moderating. What are you doing?’”

Most of all, though, Chloe worried about the long-term impacts of the job on her mental health. Several moderators told me they experienced symptoms of secondary traumatic stress — a disorder that can result from observing firsthand trauma experienced by others. The disorder, whose symptoms can be identical to post-traumatic stress disorder, is often seen in physicians, psychotherapists, and social workers. People experiencing secondary traumatic stress report feelings of anxiety, sleep loss, loneliness, and dissociation, among other ailments.

Last year, a former Facebook moderator in California sued the company, saying her job as a contractor with the firm Pro Unlimited had left her with PTSD. In the complaint, her lawyers said she “seeks to protect herself from the dangers of psychological trauma resulting from Facebook’s failure to provide a safe workplace for the thousands of contractors who are entrusted to provide the safest possible environment for Facebook users.” (The suit is still unresolved.)

Chloe has experienced trauma symptoms in the months since leaving her job. She started to have a panic attack in a movie theater during the film Mother!, when a violent stabbing spree triggered a memory of that first video she moderated in front of her fellow trainees. Another time, she was sleeping on the couch when she heard machine gun fire, and had a panic attack. Someone in her house had turned on a violent TV show. She “started freaking out,” she says. “I was begging them to shut it off.”

The attacks make her think of her fellow trainees, especially the ones who fail out of the program before they can start. “A lot of people don’t actually make it through the training,” she says. “They go through those four weeks and then they get fired. They could have had that same experience that I did, and had absolutely no access to counselors after that.”

Last week, Davidson told me, Facebook began surveying a test group of moderators to measure what the company calls their “resiliency” — their ability to bounce back from seeing traumatic content and continue doing their jobs. The company hopes to expand the test to all of its moderators globally, he said.

Randy also left after about a year. Like Chloe, he had been traumatized by a video of a stabbing. The victim had been about his age, and he remembers hearing the man crying for his mother as he died.

“Every day I see that,” Randy says, “I have a genuine fear over knives. I like cooking — getting back into the kitchen and being around the knives is really hard for me.”

The job also changed the way he saw the world. After he saw so many videos saying that 9/11 was not a terrorist attack, he came to believe them. Conspiracy videos about the Las Vegas massacre were also very persuasive, he says, and he now believes that multiple shooters were responsible for the attack. (The FBI found that the massacre was the work of a single gunman.)

Randy now sleeps with a gun at his side. He runs mental drills about how he would escape his home in the event that it were attacked. When he wakes up in the morning, he sweeps the house with his gun raised, looking for invaders.

He has recently begun seeing a new therapist, after being diagnosed with PTSD and generalized anxiety disorder.

“I’m fucked up, man,” Randy says. “My mental health — it’s just so up and down. One day I can be really happy, and doing really good. The next day, I’m more or less of a zombie. It’s not that I’m depressed. I’m just stuck.”

He adds: “I don’t think it’s possible to do the job and not come out of it with some acute stress disorder or PTSD.”

A common complaint of the moderators I spoke with was that the on-site counselors were largely passive, relying on workers to recognize the signs of anxiety and depression and seek help.

“There was nothing that they were doing for us,” Li says, “other than expecting us to be able to identify when we’re broken. Most of the people there that are deteriorating — they don’t even see it. And that’s what kills me.”

Last week, after I told Facebook about my conversations with moderators, the company invited me to Phoenix to see the site for myself. It is the first time Facebook has allowed a reporter to visit an American content moderation site since the company began building dedicated facilities here two years ago. A spokeswoman who met me at the site says that the stories I have been told do not reflect the day-to-day experiences of most of its contractors, either at Phoenix or at its other sites around the world.

The day before I arrived at the office park where Cognizant resides, one source tells me, new motivational posters were hung up on the walls. On the whole, the space is much more colorful than I expect. A neon wall chart outlines the month’s activities, which read like a cross between the activities at summer camp and a senior center: yoga, pet therapy, meditation, and a Mean Girls-inspired event called On Wednesdays We Wear Pink. The day I was there marked the end of Random Acts of Kindness Week, in which employees were encouraged to write inspirational messages on colorful cards, and attach them to a wall with a piece of candy.

After meetings with executives from Cognizant and Facebook, I interview five workers who had volunteered to speak with me. They stream into a conference room, along with the man who is responsible for running the site. With their boss sitting at their side, employees acknowledge the challenges of the job but tell me they feel safe, supported, and believe the job will lead to better-paying opportunities — within Cognizant, if not Facebook.

Brad, who holds the title of policy manager, tells me that the majority of content that he and his colleagues review is essentially benign, and warns me against overstating the mental health risks of doing the job.

“There’s this perception that we’re bombarded by these graphic images and content all the time, when in fact the opposite is the truth,” says Brad, who has worked on the site for nearly two years. “Most of the stuff we see is mild, very mild. It’s people going on rants. It’s people reporting photos or videos simply because they don’t want to see it — not because there’s any issue with the content. That’s really the majority of the stuff that we see.”

When I ask about the high difficulty of applying the policy, a reviewer named Michael says that he regularly finds himself stumped by tricky decisions. “There is an infinite possibility of what’s gonna be the next job, and that does create an essence of chaos,” he says. “But it also keeps it interesting. You’re never going to go an entire shift already knowing the answer to every question.”

In any case, Michael says, he enjoys the work better than he did at his last job, at Walmart, where he was often berated by customers. “I do not have people yelling in my face,” he says.

The moderators stream out, and I’m introduced to two counselors on the site, including the doctor who started the on-site counseling program here. Both ask me not to use their real names. They tell me that they check in with every employee every day. They say that the combination of on-site services, a hotline, and an employee assistance program are sufficient to protect workers’ well-being.

When I ask about the risks of contractors developing PTSD, a counselor I’ll call Logan tells me about a different psychological phenomenon: “post-traumatic growth,” an effect whereby some trauma victims emerge from the experience feeling stronger than before. The example he gives me is that of Malala Yousafzai, the women’s education activist, who was shot in the head as a teenager by the Taliban.

“That’s an extremely traumatic event that she experienced in her life,” Logan says. “It seems like she came back extremely resilient and strong. She won a Nobel Peace Prize... So there are many examples of people that experience difficult times and come back stronger than before.”

The day ends with a tour, in which I walk the production floor and talk with other employees. I am struck by how young they are: almost everyone seems to be in their twenties or early thirties. All work stops while I’m on the floor, to ensure I do not see any Facebook user’s private information, and so employees chat amiably with their deskmates as I walk by. I take note of the posters. One, from Cognizant, bears the enigmatic slogan “empathy at scale.” Another, made famous by Facebook COO Sheryl Sandberg, reads “What would you do if you weren’t afraid?”

It makes me think of Randy and his gun.

Everyone I meet at the site expresses great care for the employees, and appears to be doing their best for them, within the context of the system they have all been plugged into. Facebook takes pride in the fact that it pays contractors at least 20 percent above minimum wage at all of its content review sites, provides full healthcare benefits, and offers mental health resources that far exceed those of the larger call center industry.

And yet the more moderators I spoke with, the more I came to doubt the use of the call center model for content moderation. This model has long been standard across big tech companies — it’s also used by Twitter and Google, and therefore YouTube. Beyond cost savings, the benefit of outsourcing is that it allows tech companies to rapidly expand their services into new markets and languages. But it also entrusts essential questions of speech and safety to people who are paid as if they were handling customer service calls for Best Buy.

Every moderator I spoke with took great pride in their work, and talked about the job with profound seriousness. They wished only that Facebook employees would think of them as peers, and to treat them with something resembling equality.

“If we weren’t there doing that job, Facebook would be so ugly,” Li says. “We’re seeing all that stuff on their behalf. And hell yeah, we make some wrong calls. But people don’t know that there’s actually human beings behind those seats.”

That people don’t know there are human beings doing this work is, of course, by design. Facebook would rather talk about its advancements in artificial intelligence, and dangle the prospect that its reliance on human moderators will decline over time.

But given the limits of the technology, and the infinite varieties of human speech, such a day appears to be very far away. In the meantime, the call center model of content moderation is taking an ugly toll on many of its workers. As first responders on platforms with billions of users, they are performing a critical function of modern civil society, while being paid less than half as much as many others who work on the front lines. They do the work as long as they can — and when they leave, an NDA ensures that they retreat even further into the shadows.

To Facebook, it will seem as if they never worked there at all. Technically, they never did.
https://www.theverge.com/2019/2/25/1...itions-arizona





Judge Says Washington State Cyberstalking Law Violates Free Speech

Its definitions were reportedly too vague.
Jon Fingas

Washington was one of the first states to fight cyberstalking through legislation, but it may have to rethink its approach. A federal judge has blocked the state's 2004 law after ruling that a key provision violated First Amendment protections for free speech due to vague terms. Its prohibitions against speech meant to "harass, intimidate, torment or embarrass" weren't clearly defined, according to the judge, and effectively criminalized a "large range" of language guarded under the Constitution. You could theoretically face legal action just by criticizing a public figure.

The ruling came after a retired Air Force Major, Richard Rynearson III, sued to have the law overturned. He claimed that Kitsap County threatened to prosecute him under the cyberstalking law for criticizing an activist involved with a memorial to Japanese victims of US internment camps during World War II. While Rynearson would use "invective, ridicule, and harsh language," the judge said, his language was neither threatening nor obscene.

Officials had contended that the law held up because it targeted conduct, not the speech itself. They also maintained that Rynearson hadn't shown evidence of a serious threat -- just that the prosecutor's office would see how Rynearson behaved and take action if necessary. A county court had already tossed out the activist's restraining order against Rynearson over free speech.

It's not clear whether Washington will appeal the decision. If the ruling stands, though, it could force legislators to significantly narrow the law's scope if the state wants a cyberstalking statute to remain in place. This might also set a precedent that could affect legislation elsewhere in the country.
https://www.engadget.com/2019/02/23/...rstalking-law/





New Flaws in 4G, 5G Allow Attackers to Intercept Calls and Track Phone Locations
Zack Whittaker

A group of academics have found three new security flaws in 4G and 5G, which they say can be used to intercept phone calls and track the locations of cell phone users.

The findings are said to be the first time vulnerabilities have affected both 4G and the incoming 5G standard, which promises faster speeds and better security, particularly against law enforcement use of cell site simulators, known as “stingrays.” But the researchers say that their new attacks can defeat newer protections that were believed to make it more difficult to snoop on phone users.

“Any person with a little knowledge of cellular paging protocols can carry out this attack,” Syed Rafiul Hussain, one of the co-authors of the paper, told TechCrunch in an email.

Hussain, along with Ninghui Li and Elisa Bertino at Purdue University, and Mitziu Echeverria and Omar Chowdhury at the University of Iowa are set to reveal their findings at the Network and Distributed System Security Symposium in San Diego on Tuesday.

“Any person with a little knowledge of cellular paging protocols can carry out this attack… such as phone call interception, location tracking, or targeted phishing attacks.”
Syed Rafiul Hussain, Purdue University

The paper, seen by TechCrunch prior to the talk, details the attacks: the first is Torpedo, which exploits a weakness in the paging protocol that carriers use to notify a phone before a call or text message comes through. The researchers found that several phone calls placed and cancelled in a short period can trigger a paging message without alerting the target device to an incoming call, which an attacker can use to track a victim’s location. Knowing the victim’s paging occasion also lets an attacker hijack the paging channel and inject or deny paging messages, by spoofing messages such as Amber alerts or blocking messages altogether, the researchers say.
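To make the paging side channel concrete: in LTE, a phone's paging occasion is a fixed function of its IMSI (per the paging-frame formula in 3GPP TS 36.304). The sketch below is a simplified model, not code from the paper; the DRX parameters and the IMSI are illustrative.

```python
# Simplified model of the LTE paging-occasion computation (3GPP TS
# 36.304): the paging frame a phone listens on is a fixed function of
# its IMSI. Parameter values (T, nB) and the IMSI are illustrative.
def paging_frame(imsi: int, T: int = 128, nB: int = 128) -> int:
    """Return SFN mod T of the paging frame for this subscriber."""
    ue_id = imsi % 1024        # UE identity used for paging in LTE
    N = min(T, nB)             # number of paging frames per DRX cycle
    return (T // N) * (ue_id % N) % T

# Torpedo's starting point: because this mapping is fixed, an attacker
# who places and quickly cancels calls can watch which paging occasion
# lights up, confirming the victim's presence and leaking IMSI bits.
victim_imsi = 310_150_123_456_789      # made-up IMSI
print(paging_frame(victim_imsi))       # -> 21
```

Any two IMSIs that agree in their low bits share a paging occasion, which is exactly the structure the attack observes over the air.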

Torpedo opens the door to two other attacks: Piercer, which the researchers say allows an attacker to determine an international mobile subscriber identity (IMSI) on the 4G network; and the aptly named IMSI-Cracking attack, which can brute force an IMSI number in both 4G and 5G networks, where IMSI numbers are encrypted.
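A back-of-envelope sketch of why the paging side channel makes the brute force tractable (the numbers here are illustrative, not taken from the paper):

```python
# Back-of-envelope sketch (illustrative numbers, not from the paper).
# An IMSI is 15 digits: a known 3-digit country code (MCC), a 2- or
# 3-digit network code (MNC), and a subscriber number (MSIN) to guess.
MSIN_DIGITS = 9
full_space = 10 ** MSIN_DIGITS            # candidates with no side channel

# In LTE the paging occasion is derived from IMSI mod 1024, so
# observing which occasion fires partitions the candidates into 1024
# buckets, and only the matching bucket needs to be searched.
reduced_space = full_space // 1024

print(f"{full_space:,} -> {reduced_space:,}")   # 1,000,000,000 -> 976,562
```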

That puts even the newest 5G-capable devices at risk from stingrays, said Hussain, which law enforcement use to identify someone’s real-time location and log all the phones within its range. Some of the more advanced devices are believed to be able to intercept calls and text messages, he said.

According to Hussain, all four major U.S. operators — AT&T, Verizon (which owns TechCrunch), Sprint and T-Mobile — are affected by Torpedo, and the attacks can be carried out with radio equipment costing as little as $200. One U.S. network, which he would not name, was also vulnerable to the Piercer attack.

We contacted the big four cell giants, but none provided comment by the time of writing. If that changes, we’ll update.

Given two of the attacks exploit flaws in the 4G and 5G standards, almost all the cell networks outside the U.S. are vulnerable to these attacks, said Hussain. Several networks in Europe and Asia are also vulnerable.

Given the nature of the attacks, he said, the researchers are not releasing the proof-of-concept code to exploit the flaws.

It’s the latest blow to cellular network security, which has faced intense scrutiny, never more so than in the last year, for flaws that have allowed the interception of calls and text messages. Vulnerabilities in Signaling System 7, used by cell networks to route calls and messages across networks, are under active exploitation by hackers. While 4G was meant to be more secure, research shows that it’s just as vulnerable as its 3G predecessor. And 5G was meant to close off many of these interception capabilities, but European data security authorities have warned of similar flaws.

Hussain said the flaws were reported to the GSMA, an industry body that represents mobile operators. GSMA recognized the flaws, but a spokesperson was unable to provide comment when reached. It isn’t known when the flaws will be fixed.

Hussain said the Torpedo and IMSI-Cracking flaws would have to be fixed first by the GSMA, whereas a fix for Piercer depends solely on the carriers. Torpedo remains the priority, as it is a precursor to the other flaws, said Hussain.

The paper comes almost exactly a year after Hussain et al revealed ten separate weaknesses in 4G LTE that allowed eavesdropping on phone calls and text messages, and spoofing emergency alerts.
https://techcrunch.com/2019/02/24/ne...ecurity-flaws/





ICANN Warns of “Ongoing and Significant” Attacks Against Internet’s DNS Infrastructure
Zack Whittaker

The internet’s address book keeper has warned of an “ongoing and significant risk” to key parts of the domain name system infrastructure, following months of increased attacks.

The Internet Corporation for Assigned Names and Numbers, or ICANN, issued the notice late Friday, saying DNS, which converts numerical internet addresses to domain names, has been the victim of “multifaceted attacks utilizing different methodologies.”

It follows similar warnings from security companies and the federal government in the wake of attacks believed to be orchestrated by nation-state hackers.

In January, security company FireEye revealed that hackers likely associated with Iran were hijacking DNS records on a massive scale, rerouting users from legitimate web addresses to malicious servers to steal passwords. This so-called “DNSpionage” campaign, as Cisco’s Talos intelligence team dubbed it, targeted governments in Lebanon and the United Arab Emirates. Homeland Security’s newly founded Cybersecurity and Infrastructure Security Agency later warned that U.S. agencies were also under attack. In its first emergency order, issued amid a government shutdown, the agency directed federal agencies to take action against DNS tampering.

ICANN’s chief technology officer David Conrad told the AFP news agency that the hackers are “going after the Internet infrastructure itself.”

The internet organization’s solution is calling on domain owners to deploy DNSSEC, a more secure version of DNS that’s more difficult to manipulate. DNSSEC cryptographically signs data to make it more difficult — though not impossible — to spoof.
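As a rough illustration of that signing check: real DNSSEC uses public-key signatures (RSA or ECDSA) over DNSKEY, DS, and RRSIG records, chained from the root down. The toy sketch below substitutes an HMAC so it runs with only the standard library, but shows the same two-step check a validating resolver performs: the key must match the parent's published digest, and the signature must match the record data.

```python
import hashlib
import hmac

# Toy model of DNSSEC's chain of trust. Real DNSSEC uses public-key
# signatures (RSA/ECDSA); HMAC stands in here only to keep the sketch
# dependency-free. Keys and record data are invented for illustration.
zone_key = b"example-com-key"                     # the zone's signing secret
ds_digest = hashlib.sha256(zone_key).hexdigest()  # digest published by parent (.com)

def sign(rrdata: bytes) -> bytes:
    """Zone operator signs record data (stand-in for an RRSIG)."""
    return hmac.new(zone_key, rrdata, hashlib.sha256).digest()

def verify(rrdata: bytes, sig: bytes, claimed_key: bytes, parent_ds: str) -> bool:
    """Resolver checks the key against the parent's digest, then the signature."""
    expected = hmac.new(claimed_key, rrdata, hashlib.sha256).digest()
    return (hashlib.sha256(claimed_key).hexdigest() == parent_ds
            and hmac.compare_digest(expected, sig))

record = b"example.com. A 93.184.216.34"
sig = sign(record)
print(verify(record, sig, zone_key, ds_digest))                      # True
print(verify(b"example.com. A 6.6.6.6", sig, zone_key, ds_digest))   # False
```

The second call models the hijacking scenario: a tampered answer arrives with a signature that no longer matches, so a validating resolver rejects it.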

But adoption has been glacial. Only three percent of the Fortune 1,000 are using DNSSEC, according to statistics by Cloudflare released in September. Internet companies like Cloudflare and Google have pushed for greater adoption by rolling out one-click enabling of DNSSEC to domain name owners.

Globally, DNSSEC adoption currently stands at about 20 percent.
https://techcrunch.com/2019/02/23/ic...g-attacks-dns/





Huawei Frightens Europe's Data Protectors. America Does, Too
Helene Fouquet and Marie Mawad

• U.S. Cloud Act is raising concern about extraterritoriality
• Act allows authorities to get data overseas, EU to negotiate

A foreign power with possible unbridled access to Europe’s data is causing alarm in the region. No, it’s not China. It’s the U.S.

As the U.S. pushes ahead with the “Cloud Act” it enacted about a year ago, Europe is scrambling to curb its reach. Under the act, all U.S. cloud service providers from Microsoft and IBM to Amazon -- when ordered -- have to provide American authorities data stored on their servers regardless of where it’s housed. With those providers controlling much of the cloud market in Europe, the act could potentially give the U.S. the right to access information on large swaths of the region’s people and companies.

The U.S. says the act is aimed at aiding investigations. Some people are drawing parallels between the legislation and the National Intelligence Law that China put in place in 2017 requiring all its organizations and citizens to assist authorities with access to information. The Chinese law, which the U.S. says is a tool for espionage, is cited by President Donald Trump’s administration as a reason to avoid doing business with companies like Huawei Technologies Co.

“I don’t mean to compare U.S. and Chinese laws, because obviously they aren’t the same, but what we see is that on both sides, Chinese and American, there is clearly a push to have extraterritorial access to data,” Laure de la Raudiere, a French lawmaker who co-heads a parliamentary cyber-security and sovereignty group, said in an interview. “This must be a wake up call for Europe to accelerate its own, sovereign offer in the data sector.”

Matters of espionage and foreign interference will be at the center of talks at Europe’s biggest telecoms and technology conference, the MWC Barcelona, that started Monday.

Irish Case

The Cloud Act (or the “Clarifying Lawful Overseas Use of Data Act”) addresses an issue that came up when Microsoft in 2013 refused to provide the FBI access to a server in Ireland in a drug-trafficking investigation, saying it couldn’t be compelled to produce data stored outside the U.S.

The act’s extraterritoriality spooks the European Union -- an issue that’s become more acute as trans-Atlantic relations fray and the bloc sees the U.S. under Trump as an increasingly unreliable ally.

Europe may seek to mitigate the impact of the law by drawing on a provision in the act that allows the U.S. to reach “executive agreements” with countries allowing a mutual exchange of information and data. The European Commission wants the EU to enter into talks with the U.S., and negotiations may start this spring.

EU Action

France and other EU countries like The Netherlands and Belgium are pushing for the bloc to present a common front as they struggle to come up with regulations to protect privacy, avert cyber attacks and secure critical networks in the increasingly amorphous world of information in the cloud.

A Dutch lawmaker at the European Parliament, Sophie in ’t Veld, recently expressed frustration at what she called the EU’s “enormous weakness” in the face of the U.S.’s “unlimited data hunger.”

“Because of the Cloud Act, the long arm of the American authorities reaches European citizens, contradicting all EU law,” she said. “Would the Americans accept it if the EU would grant itself extraterritorial jurisdiction on U.S. soil? And would the Commission also propose negotiations with Russia or China, if they would adopt their own Russian or Chinese Cloud Act?”

An internal memo crafted by the French government in November states that “the Cloud Act could be a test from the U.S., and they expect a political response, which ought to be European to be strong enough.”
French Response

The Cloud Act was enacted just weeks ahead of Europe’s data-protection law, the General Data Protection Regulation, or GDPR, which states that all businesses that collect data from EU citizens have to follow the bloc’s rules, which could put the two laws at odds.

While waiting for the EU to get its response together, some countries are preparing their own, with the French leading the pack. President Emmanuel Macron’s teams are readying legal and technical measures to shield the country, four government officials involved said. The president’s office, the finance ministry and the state’s cyber security agency ANSSI have worked on it for the last 10 months.

“The more we dig into the Cloud Act, the more worrying it is,” said ANSSI chief Guillaume Poupard. “It’s a way for the U.S. to enter into negotiations... but it has an immediate extraterritorial effect that’s unbearable.”

Not OK

The French government has held meetings with banks, defense contractors, energy utilities and others, asking them to use “Cloud Act-safe” data providers. It’s also studying legal options, a finance ministry official said. One way might be to refresh a 1968 “Blocking Statute,” which prohibits French companies and citizens from providing “economic, commercial, industrial, financial, or technical documents or information” as evidence in legal proceedings outside the country.

“No one can accept that a foreign government, even the American one, can come fetch data on companies stored by a U.S. company, without warning and without us being able to respond,” Finance Minister Bruno Le Maire said in a speech on Feb. 18.

France has been more vociferous in its opposition to the Cloud Act because its companies have borne the brunt of other extraterritorial U.S. laws. In 2014, BNP was slapped with an $8.97 billion U.S. fine for transactions with countries facing American sanctions. French oil company Total SA walked away from a $4.8 billion project in Iran after Trump pulled out of its nuclear deal.

Local Providers

One consequence of the Cloud Act is that European companies and organizations will start looking for local alternatives. Europe’s phone operators, many of whom are already being steered away from Huawei, see the act making providers from the U.S. a threat, too.

“On the one hand you have this Chinese expansion and on the other these new U.S. rules are putting American companies at the mercy of the administration,” Gervais Pellissier, deputy chief executive officer of Orange SA, told reporters on Thursday in Paris. “The hardware bricks are either American or Chinese. We need to now find a software layer to deal with the situation.”

Local cloud providers are using the Cloud Act and GDPR in their sales pitches. French company Atos is telling customers it’ll keep their most-sensitive data physically on servers in Europe. It struck a deal with Google to safeguard client data.

OVH Groupe SAS, presenting itself as a Europe-grown rival to Amazon’s cloud business, is growing sales 30 percent a year and turning a profit running data centers in Europe.

“We can guarantee our customers the sovereignty of their data, which is more than Amazon or other rivals can offer,” Founder and CEO Octave Klaba told reporters in October.

— With assistance by Ben Brody, Angelina Rascouet, and Natalia Drozdiak
https://www.bloomberg.com/news/artic...erica-does-too





Massive Database Leak Gives Us a Window into China’s Digital Surveillance State
Danny O'Brien

Earlier this month, security researcher Victor Gevers found and disclosed an exposed database live-tracking the locations of about 2.6 million residents of Xinjiang, China, offering a window into what a digital surveillance state looks like in the 21st century.

Xinjiang is China’s largest province, and home to China’s Uighurs, a Turkic minority group. Here, the Chinese government has implemented a testbed police state where an estimated 1 million individuals from these minority groups have been arbitrarily detained. Among the detainees are academics, writers, engineers, and relatives of Uighurs in exile. Many Uighurs abroad worry for their missing family members, who they haven’t heard from for several months and, in some cases, over a year.

Although relatively little news gets out of Xinjiang to the rest of the world, we’ve known for over a year that China has been testing facial-recognition tracking and alert systems across Xinjiang and mandating the collection of biometric data—including DNA samples, voice samples, fingerprints, and iris scans—from all residents between the ages of 12 and 65. Reports from the province in 2016 indicated that Xinjiang residents can be questioned over the use of mobile and Internet tools; just having WhatsApp or Skype installed on your phone is classified as “subversive behavior.” Since 2017, the authorities have instructed all Xinjiang mobile phone users to install a spyware app in order to “prevent [them] from accessing terrorist information.”

The prevailing evidence of mass detention centers and newly-erected surveillance systems shows that China has been pouring billions of dollars into physical and digital means of pervasive surveillance in Xinjiang and other regions. But it’s often unclear to what extent these projects operate as real, functional high-tech surveillance, and how much they are primarily intended as a sort of “security theater”: a public display of oppression and control to intimidate and silence dissent.

Now, this security leak shows just how extensively China is tracking its Xinjiang residents: how parts of that system work, and what parts don’t. It demonstrates that the surveillance is real, even as it raises questions about the competence of its operators.

A Brief Window into China’s Digital Police State

Earlier this month, Gevers discovered an insecure MongoDB database filled with records tracking the location and personal information of 2.6 million people located in the Xinjiang Uyghur Autonomous Region. The records include individuals’ national ID number, ethnicity, nationality, phone number, date of birth, home address, employer, and photos.

Over a period of 24 hours, 6.7 million individual GPS coordinates were streamed to and collected by the database, linking individuals to various public camera streams and identification checkpoints associated with location tags such as “hotel,” “mosque,” and “police station.” The GPS coordinates were all located within Xinjiang.

This database is owned by the company SenseNets, a private AI company advertising facial recognition and crowd analysis technologies.

A couple of days later, Gevers reported a second open database tracking the movements of millions of cars and pedestrians. When the system detects a violation like jaywalking, speeding, or running a red light, it triggers a camera to take a photo and pings a WeChat API, presumably to tie the event to an identity.

Database Exposed to Anyone with an Internet Connection for Half a Year

China may have a working surveillance program in Xinjiang, but it’s a shockingly insecure security state. Anyone with an Internet connection had access to this massive honeypot of information.

Gevers also found evidence that these servers had previously been accessed by other parties, including a Bitcoin ransomware actor who left entries behind in the database. To top it off, the server was also vulnerable to several known exploits.

In addition to this particular surveillance database, a Chinese cybersecurity firm revealed that at least 468 MongoDB servers had been exposed to the public Internet after Gevers and other security researchers started reporting them. Among these instances: databases containing detailed information about remote access consoles owned by China General Nuclear Power Group, and GPS coordinates of bike rentals.
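The exposure pattern behind these incidents is mundane: a database listening on the public internet with no authentication. A minimal stdlib sketch of the first step of such a check, probing whether anything answers on MongoDB's default port (the host is a placeholder; a real test like the one researchers run would follow up with an unauthenticated `listDatabases` call via a MongoDB driver):

```python
import socket

# Hedged sketch: step one of checking for an exposed MongoDB instance is
# simply seeing whether its default port (27017) answers from the outside.
# A secured deployment binds to localhost or a private network and refuses
# outside connections. Anything further -- listing databases without
# credentials -- would need an actual MongoDB driver.
def mongo_port_open(host: str, port: int = 27017, timeout: float = 3.0) -> bool:
    """Return True if a TCP listener answers on the given MongoDB port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

An open port alone proves only reachability, not exposure; the databases Gevers found also failed the next test, answering administrative queries without any credentials at all.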

A Model Surveillance State for China

China, like many other state actors, may simply be willing to tolerate sloppy engineering if its private contractors can reasonably claim to be delivering the goods. Last year, the government spent an extra $3 billion on security-related construction in Xinjiang, and the New York Times reported that China’s police planned to spend an additional $30 billion on surveillance in the future. Even poorly-executed surveillance is massively expensive, and Beijing is no doubt telling the people of Xinjiang that these investments are being made in the name of their own security. But the truth, revealed only through security failures and careful security research, tells a different story: China’s leaders seem to care little for the privacy, or the freedom, of millions of its citizens.
https://www.eff.org/deeplinks/2019/0...eillance-state





Russia Limits Operations of Foreign Communications Satellite Operators

The Kremlin will require foreign satellite operators to go through an approval process and build local ground stations.
Catalin Cimpanu

This week, the Russian government published a document outlining new rules that limit the operations of foreign communications satellite operators inside the country.

According to a copy of the document, the Russian government will require all foreign communications satellite companies to pass all incoming traffic through a ground gateway station.

This means satellite operators won't be able to beam communications directly to customers without going through a ground station first.

The Russian government cited the espionage threat posed by allowing foreign satellite companies to transmit data directly within the country's borders. Critics of the Kremlin regime say the new requirement will enable Russian government agencies to intercept any incoming traffic.

The new rules, set to enter into effect in six months, will also force all foreign communications satellite companies to obtain a permit from Russian authorities before operating in the country.

The Russian Defense Ministry, the Federal Security Service (FSB), and Federal Protective Service (FSO) will be in charge of reviewing applicants.

According to Russian news agency RosBiznesKonsalting (RBK), which first broke the story, telecom industry insiders say this review process can take more than a year.

The same sources also said the new law greatly inhibits foreign operators from entering the Russian market, mainly because of the cost of building a ground station, which can run into the tens of millions of US dollars.

Foreign communications satellite operators such as Globalstar, Inmarsat, Iridium, and Thuraya are less impacted by the new rules since they're already operating ground stations in Russia and have obtained permission from government agencies.

However, RBK's sources claim new companies will have a hard time entering the market, mainly because of the current Russian political climate, in which foreign companies are viewed with distrust and suspected of facilitating espionage operations for other countries.

Just last month, the Russian government announced plans to disconnect the country from the global internet as part of a test of its internal DNS system, which the Kremlin has been trying to build and launch since 2014.
https://www.zdnet.com/article/russia...ite-operators/





How Mozilla Moved Fast to Block Facebook and Other Privacy Invaders from Your Web Browser

Mozilla COO Denelle Dixon and her team are making online browsing safer.
Katharine Schwab

When Facebook users learned last March that the social media giant had given their sensitive information to political-data firm Cambridge Analytica, Mozilla (parent company of the security-focused browser Firefox) reacted fast: Within eight hours, the product team had built a browser extension called the Facebook Container. The plug-in, now the most popular browser extension Mozilla has ever built (1.5 million downloads and 500,000 monthly active users), prevents Facebook from trailing its users around the internet.

Firefox Monitor, a service Mozilla launched in September, uses your email address to determine whether your personal info has been compromised in a breach. By summer 2019, the Firefox browser will also block, by default, all cross-site third-party trackers, strengthening privacy without your having to do a thing (unlike Firefox’s biggest competitor, Google Chrome). “We want to make it simple for people to create walls around data that’s important to them,” says Denelle Dixon, Mozilla’s COO.
https://www.fastcompany.com/90299092...companies-2019





Surveillance Firm Asks Mozilla to be Included in Firefox's Certificate Whitelist

Mozilla caught between a rock and a hard place on the issue of DarkMatter root certificates.
Catalin Cimpanu

By Catalin Cimpanu for Zero Day | February 25, 2019 -- 00:57 GMT (16:57 PST) | Topic: Security

Mozilla's security team has been caught between a rock and a hard place in regards to a recent request to add a known surveillance vendor to Firefox's internal list of approved HTTPS certificate issuers.

The vendor is named DarkMatter, a cyber-security firm based in the United Arab Emirates that has been known to sell surveillance and hacking services to oppressive regimes in the Middle East [1, 2, 3].

A few months back, DarkMatter filed a bug report asking that its own root certificates be added to Firefox's certificate store -- an internal list of approved Certificate Authorities (CAs).

CAs are companies, organizations, and other entities approved to issue new TLS certificates -- the mechanism that underpins encrypted HTTPS communications.

Mozilla uses this certificate store to decide which TLS certificates to trust when loading encrypted content inside Firefox and Thunderbird, just as Apple, Google, and Microsoft use their own certificate stores in their respective products.

An organization with a root certificate in these stores has the power to issue new certificates that are automatically trusted by these companies' browsers and other products.
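What inclusion actually grants can be seen from the client side. A small sketch using Python's standard `ssl` module, which loads the platform's root store much as a browser consults its own (output varies by system; this illustrates the trust model, not Mozilla's internal approval process):

```python
import ssl

# Hedged sketch: enumerate the root CAs this system trusts. Any
# organization on this list can issue certificates that TLS clients
# using the platform store will accept without warning -- the power
# DarkMatter was asking Mozilla for.
def trusted_root_issuers() -> list[str]:
    ctx = ssl.create_default_context()  # loads the platform's root store
    orgs = set()
    for cert in ctx.get_ca_certs():
        # Flatten the certificate's subject RDNs into a simple dict
        subject = {k: v for rdn in cert["subject"] for (k, v) in rdn}
        orgs.add(subject.get("organizationName", subject.get("commonName", "?")))
    return sorted(orgs)
```

Every name this returns is an entity whose certificates the local machine accepts silently, which is why removing -- or declining to add -- a single CA is such a consequential decision for a root store operator.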

The dilemma: DarkMatter has a history of shady operations, but a clean record as a CA, without any known abuses.

On one side, Mozilla is being pressured by organizations like the Electronic Frontier Foundation, Amnesty International, and The Intercept to decline DarkMatter's request; on the other, DarkMatter claims it has never abused its TLS certificate issuance powers, so there's no reason to treat it differently from other CAs that have applied in the past.

Fears run high because Mozilla's list of trusted root certificates is also used by some Linux distros. Many worry that, once approved for Mozilla's certificate store, DarkMatter could issue TLS certificates capable of intercepting internet traffic without triggering errors on some Linux systems, typically those deployed in data centers and at cloud service providers.

In Google Groups and Bugzilla discussions on its request, DarkMatter has denied any wrongdoing or any intention to do so.

The company has already been granted the ability to issue TLS certificates via an intermediary, a company called QuoVadis, now owned by DigiCert.

Those asking Mozilla to decline DarkMatter's request were quick to seize on the fact that DarkMatter has already misissued a few TLS certificates via QuoVadis. However, most appear to be technical errors, and the certificates don't seem to have been abused for anything malicious.

"Given DarkMatter's business interest in intercepting TLS communications adding them to the trusted root list seems like a very bad idea," EFF's Cooper Quintin said in the Google Groups discussions. "I would go so far as revoking their intermediate certificate as well, based on these revelations."

Quintin expanded on his fears in a post on the EFF blog, reminding Mozilla that it went through a similar issue with CNNIC, the Chinese government's official CA. Mozilla approved CNNIC as a trusted root CA in Firefox in 2009, and the CA was caught misissuing certificates for Google domains in 2015, allowing threat actors to intercept traffic meant for Google sites -- an event that got CNNIC banned from most root certificate stores.

According to Mozilla engineers who spoke with ZDNet on deep background and did not want to share their names because they were not authorized to speak on behalf of the organization, Mozilla is seriously considering the issue.

We were told that Mozilla was not aware of DarkMatter's history at the time it applied to be included in its root store a few months back. A Reuters report published last month describing DarkMatter's involvement in helping the Saudi government spy on dissidents turned a few heads at Mozilla.

The report sparked criticism of the surveillance vendor in the months-old Bugzilla bug report, which led Mozilla staff to seriously consider making an exception to its normal CA approval process and decline the inclusion request despite a lack of any evidence of abuse.

Mozilla has now opened a separate Google Groups discussion to gather more feedback from the community, most of which, at the time of writing, has been negative. We were told Mozilla would most likely use this criticism as a reason to decline DarkMatter's request in an attempt to avoid bad press and another CNNIC incident.

"Mozilla's Root Store Policy grants us the discretion to take actions based on the risk to people who use our products. Despite the lack of direct evidence of misissuance by DarkMatter, this may be a time when we should use our discretion to act in the interest of individuals who rely on our root store," Mozilla said.
https://www.zdnet.com/article/survei...ate-whitelist/





The Feds’ Favorite iPhone Hacking Tool Is Selling On eBay For $100—And It’s Leaking Data
Thomas Brewster

When eBay merchant Mr. Balaj was looking through a pile of hi-fi junk at an auction in the U.K., he came across an odd-looking device. Easily mistaken for a child’s tablet, it had the word “Cellebrite” written on it. To Mr. Balaj, it appeared to be a worthless piece of electronic flotsam, so he left it in his garage to gather dust for eight months.

But recently he’s learned just what he had his hands on: a valuable, Israeli-made piece of technology called the Cellebrite UFED. It’s used by police around the world to break open iPhones, Androids and other modern mobiles to extract data. The U.S. federal government, from the FBI to Immigration and Customs Enforcement, has been handing millions to Cellebrite to break into Apple and Google smartphones. Mr. Balaj (Forbes agreed not to publish his first name at his request) and others on eBay are now acquiring and trading Cellebrite systems for between $100 and $1,000 a unit. Comparable, brand-new Cellebrite tools start at $6,000.

Cellebrite isn’t happy about those secondhand sales. On Tuesday, two sources from the forensics industry passed Forbes a letter from Cellebrite warning customers about reselling its hugely popular hacking devices because they could be used to access individuals’ private data. Rather than return the UFEDs to Cellebrite so they can be properly decommissioned, it appears police or other individuals who’ve acquired the machines are flogging them and failing to properly wipe them. Cybersecurity researchers are now warning that valuable case data and powerful police hacking tools could have leaked as a result.

Hacker’s delight

Earlier this month, Matthew Hickey, a cybersecurity researcher and cofounder of training academy Hacker House, bought a dozen UFED devices and probed them for data. He discovered that the secondhand kit contained information on what devices were searched, when they were searched and what kinds of data were removed. Mobile identifier numbers like the IMEI code were also retrievable.

Hickey believes he could have extracted more personal information, such as contact lists or chats, though he decided not to delve into such data. “I would feel a little awful if there was a picture of a crime scene or something,” he said. But using the information within a UFED, Hickey believes a malicious hacker could identify the suspects and their relevant cases.

In one screenshot provided by Hickey to Forbes, the previous UFED user had raided phones from Samsung, LG, ZTE and Motorola. Hickey successfully tested the unit on an old iPhone and an iPod.

Cellebrite hasn’t returned repeated emails from Forbes seeking comment over the last two weeks.

Rooting out Cellebrite’s secrets

The tools may also contain the software vulnerabilities Cellebrite keeps secret from the likes of Apple and Google, said Hickey. Cellebrite’s exploits (little software programs that break the security of computers and mobile phones) were encrypted, but the keys should be extractable from the UFED, though Hickey hasn’t had success on the tools he bought.

As Forbes reported in March last year, Cellebrite had become so adept at finding iOS flaws that it was able to crack the passcodes of the latest Apple models, up to the iPhone X. But the forensics provider is in a race to find flaws before Apple patches them and the hacks become impossible. The company explained to Forbes that it had to keep those exploits secret so Apple couldn't fix the flaws and lock police out of iPhones.

Looking deeper, Hickey found what appeared to be Wi-Fi passwords left on the UFEDs too. They could have belonged either to police agencies or to other private entities that had access to the devices, such as independent investigators or business auditors.

Reselling police data

There’s one obvious reason the Cellebrite devices have started appearing online: There are newer models of UFED being released with fresh software. But Hickey was concerned to find leftover forensics data.

“You’d think a forensics device used by law enforcement would be wiped before resale. The sheer volume of these units appearing online is indicative that some may not be renewing Cellebrite and disposing of the units elsewhere,” Hickey told Forbes.

“Units are intended to be returned to vendor precisely for this reason, people ignoring that risk information on the units being available to third parties.”

Hackable hacking kit

Hickey said security on the units was “fairly poor.” In particular, he was able to find out the admin account passwords for the devices and take control of them. Cracking the devices’ license controls was also simple, using guides found on online Turkish forums. A skilled hacker could unleash the device to break into iPhones or other smartphones using the same information, he said. A malicious attacker could also modify a unit to falsify evidence or even reverse the forensics process and create a phone capable of hacking the Cellebrite tech, Hickey warned.

Despite concerns about the security of critical law enforcement devices, Hickey at least plans to do something fun with his purchases. For some upcoming hacker parties, he’s going to alter them to run the shoot-’em-up classic Doom. Others have already started playing.
https://www.forbes.com/sites/thomasb.../#45173635dd4f





FastMail Loses Customers, Faces Calls to Move Over Anti-Encryption Laws

Australia no longer 'respects right to privacy'.

Hosted email provider FastMail says it has lost customers and faces “regular” requests to shift its operations outside Australia following the passage of anti-encryption laws.

The Victorian company, which offers ad-free email services to users in 150 countries, told a senate committee that the now-passed laws were starting to bite.

“The way in which [the laws] were introduced, debated, and ultimately passed ... creates a perception that Australia has changed - that we are no longer a country which respects the right to privacy,” FastMail CEO Bron Gondwana said. [pdf]

“We have already seen an impact on our business caused by this perception.

“Our particular service is not materially affected as we already respond to warrants under the Telecommunications Act.

“Still, we have seen existing customers leave, and potential customers go elsewhere, citing this bill as the reason for their choice.

“We are [also] regularly being asked by customers if we plan to move.”

Gondwana’s comments are similar to those of Senetas, which said it now “regularly fields questions” from customers about how encryption-busting laws might impact the products they have installed and are using. Senetas also said that its sales pipeline had dulled.

FastMail also used its submission to the senate committee to raise concerns that secretive “technical capabilities” added to products and services to aid law enforcement were unlikely to stay secret for long.

Moreover, Gondwana said that a technical capability could be inadvertently removed or broken by coders who aren’t even aware it exists in the code base.

“Our staff are curious and capable - if our system is behaving unexpectedly, they will attempt to understand why. This is a key part of bug discovery and keeping our systems secure,” Gondwana said.

“Technology is a tinkerer’s arena. Tools exist to monitor network data, system calls, and give computer users more observability than ever before.

“Secret data exfiltration code may be discovered by tinkerers or even anti-virus firms looking at unexpected behaviour.

“[Additionally], as code is refactored and products change over time, ensuring that a technical capability isn’t lost means that everybody working on the design and implementation needs to know that the technical capability exists and take it into account.”
https://www.itnews.com.au/news/fastm...on-laws-519783





Cloudflare Expands its Government Warrant Canaries
Zack Whittaker

When the government comes for your data, tech companies can’t always tell you. But thanks to a legal loophole, companies can say if they haven’t had a visit yet.

That’s opened up an interesting loophole that allows companies to silently warn customers when the government turns up to secretly raid their stash of customer data, without violating a gag order. Under U.S. freedom of speech laws, companies can publicly say that “the government has not been here” when there has been no demand for data, and they are allowed to remove the statement when a warrant comes in, a warning shot to anyone who pays attention.

These so-called “warrant canaries” — named for the poor canary down the mine that dies when there’s gas that humans can’t detect — are a key transparency tool that predominantly privacy-focused companies use to keep their customers aware of the goings-on behind the scenes.

Where companies have abandoned their canaries or caved to legal pressure, Cloudflare is bucking the trend.

The networking and content delivery giant said in a blog post this week that it’s expanding its transparency report to include more canaries.

To date, the company:

• has never turned over their SSL keys or customers’ SSL keys to anyone;
• has never installed any law enforcement software or equipment anywhere on their network;
• has never terminated a customer or taken down content due to political pressure;
• has never provided any law enforcement organization a feed of customers’ content transiting their network.
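The mechanism is simple enough to automate from the reader's side. A hypothetical sketch of a canary watcher: keep the expected statements (paraphrased from the list above), fetch the transparency page on a schedule, and treat any statement that vanishes as the warning:

```python
# Hedged sketch of a canary watcher. The statements paraphrase the list
# above; a real monitor would fetch the live transparency page on a
# schedule and alert when any statement disappears.
CANARIES = [
    "has never turned over their SSL keys",
    "has never installed any law enforcement software or equipment",
    "has never terminated a customer or taken down content due to political pressure",
    "has never provided any law enforcement organization a feed",
]

def missing_canaries(page_text: str) -> list[str]:
    """Return the canary statements absent from the page -- the
    removals, not the statements themselves, are the silent warning."""
    return [c for c in CANARIES if c not in page_text]
```

Feeding it a page from which one statement has been quietly dropped returns exactly that statement; an empty list means all the canaries are still alive.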

Those key points are critical to the company’s business. A government demand for its SSL keys, or for intercept equipment installed on its network, would give investigators unprecedented access to customers’ communications and data, and undermine the company’s security. A similar demand led Ladar Levison to shut down his email service Lavabit when the government sought the keys to obtain information on whistleblower Edward Snowden, who used the service.

Now Cloudflare’s warrant canaries will include:

• Cloudflare has never modified customer content at the request of law enforcement or another third party.
• Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.
• Cloudflare has never weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.

It has also expanded and replaced its first canary to confirm that the company “has never turned over our encryption or authentication keys or our customers’ encryption or authentication keys to anyone.”

Cloudflare said that if it were ever asked to do any of the above, the company would “exhaust all legal remedies” to protect customer data, and remove the statements from its site.

The networking and content delivery network is one of a handful of major companies that have used warrant canaries over the years. Following reports that the National Security Agency was vacuuming up the call records from the major telecom giants in bulk, Apple included a statement in its most recent transparency reports noting that the company has to date “not received any orders for bulk data.” Reddit removed its warrant canary in 2015, indicating that it had received a national security order it wasn’t permitted to disclose.

Cloudflare’s expanded canaries were included in the company’s latest transparency report, out this week.

According to its latest figures, covering the second half of 2018, Cloudflare responded to just seven of the 19 subpoenas it received, affecting 12 accounts and 309 domains. The company also responded to 44 of the 55 court orders it received, affecting 134 accounts and 19,265 domains.

The company received between 0 and 249 national security requests for the period, and didn’t process any wiretap or foreign government requests.
https://techcrunch.com/2019/02/26/cl...arrant-canary/

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

February 23rd, February 16th, February 9th, February 2nd

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing