P2P-Zone  

Old 27-03-19, 06:06 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - March 30th, ’19

Since 2002

"We tested this — and, well, wow." – Natasha Lomas


"I wish I could go back in time and alter things: I can’t." – Ed Husic, Labor’s spokesman

March 30th, 2019




Music Labels Sue Charter, Complain that High Internet Speeds Fuel Piracy

Sony, Universal, Warner claim Charter refused to kick music pirates off network.
Jon Brodkin

The music industry is suing Charter Communications, claiming that the cable Internet provider profits from music piracy by failing to terminate the accounts of subscribers who illegally download copyrighted songs. The lawsuit also complains that Charter helps its subscribers pirate music by selling packages with higher Internet speeds.

While the act of providing higher Internet speeds clearly isn't a violation of any law, ISPs can be held liable for their users' copyright infringement if the ISPs repeatedly fail to disconnect repeat infringers.

The top music labels—Sony, Universal, Warner, and their various subsidiaries—sued Charter Friday in a complaint filed in US District Court in Colorado. While Charter has a copyright policy that says repeat copyright infringers may be disconnected, Charter has failed to disconnect those repeat infringers in practice, the complaint said:

Despite these alleged policies, and despite receiving hundreds of thousands of infringement notices from Plaintiffs, as well as thousands of similar notices from other copyright owners, Charter knowingly permitted specifically identified repeat infringers to continue to use its network to infringe. Rather than disconnect the Internet access of blatant repeat infringers to curtail their infringement, Charter knowingly continued to provide these subscribers with the Internet access that enabled them to continue to illegally download or distribute Plaintiffs' copyrighted works unabated. Charter's provision of high-speed Internet service to known infringers materially contributed to these direct infringements.

The complaint accuses Charter of contributory copyright infringement and vicarious copyright infringement. Music labels asked for statutory damages of up to $150,000 for each work infringed or for actual damages including any profit Charter allegedly made from allowing piracy. The complaint focuses on alleged violations between March 24, 2013 and May 17, 2016.

During that time, plaintiffs say they sent infringement notices to Charter that "advised Charter of its subscribers' blatant and systematic use of Charter's Internet service to illegally download, copy, and distribute Plaintiffs' copyrighted music through BitTorrent and other online file-sharing services." The music industry's complaint repeatedly focused on BitTorrent and other peer-to-peer networks, saying that "online piracy committed via BitTorrent is stunning in nature, speed, and scope."

Lawsuit: High speeds enabled piracy

The music labels' complaint also seems to describe the basic acts of providing Internet service and advertising high speeds as nefarious:

Many of Charter's customers are motivated to subscribe to Charter's service because it allows them to download music and other copyrighted content—including unauthorized content—as efficiently as possible. Accordingly, in its consumer marketing material, including material directed to Colorado customers, Charter has touted how its service enables subscribers to download and upload large amounts of content at "blazing-fast Internet speeds." Charter has told existing and prospective customers that its high-speed service enables subscribers to "download just about anything instantly," and subscribers have the ability to "download 8 songs in 3 seconds." Charter has further told subscribers that its Internet service "has the speed you need for everything you do online." In exchange for this service, Charter has charged its customers monthly fees ranging in price based on the speed of service.

That paragraph from the music labels' complaint merely describes the standard business model of Internet providers. There is nothing illegal about offering higher Internet speeds in exchange for higher prices.

But the labels also allege that Charter's lax approach to copyright enforcement helped it earn more revenue, in part because piracy supposedly inspired consumers to subscribe to faster Internet tiers.

"For those account holders and subscribers who wanted to download files illegally at faster speeds, Charter obliged them in exchange for higher rates. In other words, the greater the bandwidth its subscribers required for pirating content, the more money Charter made," the complaint said.

The complaint argues that, while Charter performs network management to block "spam and other unwanted activity," the ISP "has gone out of its way not to take action against subscribers engaging in repeated copyright infringement."

"Charter condoned the illegal activity, because it was popular with subscribers and acted as a draw to attract and retain new and existing subscribers. Charter's customers, in turn, purchased more bandwidth and continued using Charter's services to infringe Plaintiffs' copyrights," the complaint said. "Charter undoubtedly recognized that if it terminated or otherwise prevented repeat infringer subscribers from using its service to infringe or made it less attractive for such use, Charter would enroll fewer new subscribers, lose existing subscribers, and ultimately lose revenue."

Infringement notices that music labels send to Charter and other ISPs identify violators by their IP addresses.

"Because the copyright holders could only determine the unique IP addresses of an ISP's infringing subscribers but not their actual identities, they served subpoenas on Charter and other ISPs to obtain the infringing subscribers' names and contact information," the complaint said. "Although Charter's customer agreements allowed it to produce that information and no real doubt existed as to their customers' underlying infringement, Charter vigorously opposed the subpoenas, undermining the record companies' efforts to curb direct infringement activity."
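The IP-plus-timestamp matching the complaint describes can be sketched roughly: an ISP can only tie a notice to an account by checking which subscriber held the reported address at the reported moment. The account names, addresses, and lease windows below are invented for illustration:

```python
from datetime import datetime

# Hypothetical DHCP/lease records an ISP might consult when a copyright
# notice arrives with only an IP address and a timestamp. All data invented.
lease_records = [
    {"subscriber": "acct-1001", "ip": "203.0.113.7",
     "start": datetime(2016, 5, 1, 8, 0), "end": datetime(2016, 5, 2, 8, 0)},
    {"subscriber": "acct-2002", "ip": "203.0.113.7",
     "start": datetime(2016, 5, 2, 8, 0), "end": datetime(2016, 5, 3, 8, 0)},
]

def subscriber_for_notice(ip: str, seen_at: datetime):
    """Return the account that held `ip` at `seen_at`, if any."""
    for lease in lease_records:
        if lease["ip"] == ip and lease["start"] <= seen_at < lease["end"]:
            return lease["subscriber"]
    return None  # notice can't be attributed (e.g. logs already expired)

print(subscriber_for_notice("203.0.113.7", datetime(2016, 5, 1, 12, 0)))  # acct-1001
```

Note that the same IP maps to different accounts on different days, which is why the subpoena step (turning an address into a name) requires the ISP's cooperation.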

When contacted by Ars, a Charter spokesperson said, "We will defend against these baseless allegations." We asked Charter if it ever terminates the accounts of alleged copyright infringers, but the company provided no answer.

Industry has sued several ISPs

The record companies last week also sued Charter subsidiary Bright House Networks in US District Court in the Middle District of Florida, a TorrentFreak article noted. Music labels had previously sued ISPs such as Cox and Grande Communications.

Under the Digital Millennium Copyright Act, ISPs cannot be held liable for Internet users' copyright infringement if the ISPs "'adopt and reasonably implement' a repeat infringer policy that provides for termination of users' accounts 'in appropriate circumstances,'" an EFF explainer notes. But the law is vague enough that courts have had to interpret its meaning in various cases over the years.

In 2013, AT&T and other ISPs began using a "six-strikes" Copyright Alert System, working in conjunction with the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA). The system ended up doing little to thwart copyright infringement and was shut down early last year.

Music publishers have called on ISPs to filter out pirated content, and they have filed various lawsuits against ISPs. Cox lost a jury verdict in a music piracy case in 2015. Another lawsuit involving Cox was settled last year, but Cox faces yet another lawsuit filed in August 2018.

In the Grande case, a federal judge this month ruled that Grande does not qualify for a legal safe harbor because of the ISP's "complete abdication of [its] responsibilities to implement and enforce a policy terminating repeat copyright infringers."

While ISPs have often resisted disconnecting alleged pirates, AT&T recently terminated the broadband service of more than a dozen customers who were accused multiple times of copyright infringement.
https://arstechnica.com/tech-policy/...s-fuel-piracy/





Stream-Ripping Drops 13% In One Year; Accounts for Just 4% of Total Pirate Site Traffic
Daniel Sanchez

As on-demand streaming services proliferate, will global piracy die down on its own?

As the major labels continue their legal attacks against YouTube stream-rippers, an interesting question is popping up. Is YouTube ‘stream-ripping’ dying on its own?

In its Global Piracy Report for 2017, UK-based Muso tracked 300 billion visits to piracy sites, up 1.6% year-over-year. Streaming music piracy in the UK alone increased 21%. But ‘stream-ripping’ accounted for a mere 4% of the total. It’s also declining alongside broader drops in track downloading.

The company’s global piracy data platform tracks piracy in film, TV, music, publishing, and software.

While most people now subscribe to Spotify and Apple Music, Christopher Elkins, the company’s Chief Strategy Officer, explained that piracy “remains a significant challenge.”

Now, Muso has unveiled several surprising facts in its study analyzing 2018 piracy website habits. That includes a marked decrease in YouTube ‘stream-ripping,’ declared a piracy menace by groups like the RIAA.

Are streaming music platforms curbing stream ripping?

Last year, the company tracked over 189 billion visits to piracy sites.

TV remained the most popular content for piracy. Nearly half (49.4%) of all activity focused on pirating television programs. Film, music, and publishing had a respective share of 17.1%, 16%, and 11.2%. Software piracy came in last place with around 6.2%.

Similar to its results for 2017, the United States topped the list of countries with the most visits to piracy sites – 17 billion. Russia came in second with 14.5 billion, followed by Brazil, India, and France with 10.3 billion, 9.6 billion, and 7.4 billion visits, respectively.

Turkey (7.3 billion), Ukraine (6.1 billion), Indonesia (6 billion), the United Kingdom (5.8 billion), and Germany (5.4 billion) rounded out the top ten.

Developed markets saw a steep decline in piracy compared to 2017. Yet, traffic to infringing content in emerging markets increased. Brazil, for example, saw a 12.5% jump to over 10 billion visits. Citing another example, Indonesia’s traffic to piracy websites increased 9% to 6 billion.

Around the world, software piracy increased 17% between 2017 and 2018. Publishing piracy increased 11%.

In addition, almost 60% of all piracy visits are to unlicensed web streaming sites. This mirrors the trend in legal consumption, helping the market move away from content ownership to on-demand streaming.

Public torrent networks, once a favored piracy delivery method, now account for just 13% of all infringing activity. Stream-ripping also fell 13% between 2017 and 2018 – from 8.9 billion visits to 7.7 billion. This was primarily due to YouTube-MP3.org’s closure in 2017, leading to a 16% drop in overall stream-ripper visits.
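The stream-ripping figure checks out arithmetically, as a quick calculation shows:

```python
# The report says stream-ripping visits fell from 8.9 billion to 7.7 billion.
before, after = 8.9e9, 7.7e9
drop = (before - after) / before
print(f"{drop:.1%}")  # ≈ 13.5%, which the report rounds to 13%
```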

In good news for the music industry, music saw the largest overall decline of piracy – 34%.

Sounds like progress and effective enforcement, though Muso criticized the ‘whack-a-mole’ strategy. Speaking about the findings, Andy Chatterley, Muso’s Co-Founder and CEO, explained,

“In 2018, we’ve seen a 10% increase in people bypassing search engines and going directly to the piracy destination of their choice.

“Simply focusing on take-downs is clearly a whack-a-mole approach and, while an essential part of any content protection strategy, it needs to be paired with more progressive thinking.

“With the right mindset, piracy audiences can offer huge value to rights holders.”

Just don’t forget to tell that to the RIAA, which is currently waging – and losing – its own war against notorious Russian stream-ripper, FLV.to.
https://www.digitalmusicnews.com/201...ng-muso-study/





Bill That Would Restore Net Neutrality Moves Forward Despite Telecom’s Best Efforts to Kill it

Last minute attempts to weaken the proposal failed as bill now moves toward a showdown in the House and Senate.
Karl Bode

Last month, Democrats introduced a simple three page bill that would do one thing: restore FCC net neutrality rules and the agency’s authority over ISPs, both stripped away by a hugely-controversial decision by the agency in late 2017.

Tuesday morning, the Save the Internet Act passed through a key House committee vote and markup session—despite some last-minute efforts by big telecom to weaken the bill.

“Inside the beltway, this is really about maybe five companies,” Representative Anna Eshoo said during the hearing. “Across the country, the American people really get this. National polling shows that Republicans, Democrats, Independents support net neutrality. We’re still in the same old soup pot here. We need to take our lenses off and look across the country.”

Survey after survey has shown that the vast bipartisan majority of Americans supported the FCC’s 2015 rules and opposed the repeal. But the Trump FCC was quick to bow to pressure from telecom giants like AT&T, Verizon, and Comcast—despite their long history of using their role as natural monopolies to hamstring competitors and nickel-and-dime subscribers.

The Pai repeal not only ended net neutrality, it dramatically cut back the FCC’s authority over major broadband providers, shoveling any remaining authority to an FTC that critics (like former FCC boss Tom Wheeler) say lacks the authority or resources to actually police telecom giants.

With neither competition nor meaningful regulatory oversight to keep them in check, these telecom giants will have carte blanche to abuse their roles as internet gatekeepers online, net neutrality activists have repeatedly warned.

Net neutrality supporters were unsurprisingly quick to applaud the bill’s progress.

“Net neutrality is coming back with a vengeance,” Evan Greer, deputy director of consumer group Fight for the Future, said in a statement.

“Politicians are slowly learning that they can’t get away with shilling for big telecom anymore,” Greer said. “We’re harnessing the power of the Internet to save it, and any lawmaker who stands in our way will soon face the wrath of their constituents, who overwhelmingly want lawmakers to restore these basic protections.”

Greer told Motherboard that several last minute amendments were introduced by lawmakers during the markup period in an attempt to water down the bill, but all were pulled in the wake of widespread public interest in the hearing.

“It seems like the GOP retreated a bit after the huge swell of public support,” said Greer, who told Motherboard that 300,000 people watched the organization’s livestream of the markup process. That attention “really emboldened the Democrats and shored up the ones that were wobbling,” Greer said.

The FCC’s 2015 rules were crafted over a decade of discussion, countless public hearings, and numerous court victories for net neutrality supporters. As such, activists say they viewed any attempt to modify the legislation as a non-starter, given the public clearly wanted a clean restoration of the original rules.

Despite net neutrality’s broad, bipartisan approval among consumers, telecom lobbyists have continued to encourage stark partisan divisions in Congress on the issue, something that could make the bill hard to pass. While it should pass in the House, it faces a tougher uphill climb in the Senate, and would also need to avoid a veto by President Trump.

Should the legislation fail to pass, the FCC’s 2015 net neutrality rules may also be restored via a lawsuit filed against the FCC by 22 state attorneys general and companies like Mozilla, who say the Pai FCC ignored all objective data and the public interest in its rush to please the nation’s biggest broadband providers.
https://motherboard.vice.com/en_us/a...rts-to-kill-it





FTC Seeks to Examine the Privacy Practices of Broadband Providers

For Release
March 26, 2019

The Federal Trade Commission issued orders to seven U.S. Internet broadband providers and related entities seeking information the agency will use to examine how broadband companies collect, retain, use, and disclose information about consumers and their devices.

The orders seek information about the companies’ privacy policies, procedures, and practices. The orders were sent to: AT&T Inc., AT&T Mobility LLC, Comcast Cable Communications doing business as Xfinity, Google Fiber Inc., T-Mobile US Inc., Verizon Communications Inc., and Cellco Partnership doing business as Verizon Wireless.

The FTC is initiating this study to better understand Internet service providers’ privacy practices in light of the evolution of telecommunications companies into vertically integrated platforms that also provide advertising-supported content. Under current law, the FTC has the ability to enforce against unfair and deceptive practices involving Internet service providers.

The FTC is seeking information from the seven companies that includes:

• The categories of personal information collected about consumers or their devices, including the purpose for which the information is collected or used; the techniques for collecting such information; whether the information collected is shared with third parties; internal policies for access to such data; and how long the information is retained;
• Whether the information is aggregated, anonymized or deidentified;
• Copies of the companies’ notices and disclosures to consumers about their data collection practices;
• Whether the companies offer consumers choices about the collection, retention, use and disclosure of personal information, and whether the companies have denied or degraded service to consumers who decline to opt-in to data collection; and
• Procedures and processes for allowing consumers to access, correct, or delete their personal information.

The Commission is authorized to issue the Orders to File a Special Report by Section 6(b) of the FTC Act. The Commission vote to issue the orders was 5-0.

The Federal Trade Commission works to promote competition, and protect and educate consumers. You can learn more about consumer topics and file a consumer complaint online or by calling 1-877-FTC-HELP (382-4357). Like the FTC on Facebook, follow us on Twitter, read our blogs, and subscribe to press releases for the latest FTC news and resources.
Contact Information

MEDIA CONTACT:
Juliana Gruenwald Henderson
Office of Public Affairs
202-326-2924

STAFF CONTACT:
Jah-Juin “Jared” Ho
Bureau of Consumer Protection
202-326-3463

https://www.ftc.gov/news-events/pres...band-providers





VPN Providers Pull Russian Servers as Putin's Ban Threatens to Bite

VPN services told to connect their systems to a Russian blacklist of banned websites or face the consequences.
David Meyer

Almost two years ago, Russian president Vladimir Putin signed a law banning virtual private networks (VPNs) and other tools that could be used to circumvent the country's extensive censorship of the internet.

However, the Russian authorities haven't done much to enforce the law. Until now.

On Thursday, Russia's online regulator, Roskomnadzor, said it had written to 10 popular VPN services to demand they connect their systems to the watchdog's blacklist of banned websites, so their users are no longer able to view the forbidden content.

They were given 30 days in which to do so, failing which, "Roskomnadzor may decide to restrict access to the VPN service."

The notified services include NordVPN, Hide My Ass, Hola VPN, OpenVPN, VyprVPN, ExpressVPN, TorGuard, IPVanish, Kaspersky Secure Connection – the only Russian VPN on the list – and VPN Unlimited. Similar obligations are placed on search-engine operators, including Google, which reportedly started playing ball last month after being hit with a small fine for noncompliance.
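What Roskomnadzor is demanding amounts to destination filtering inside the VPN itself: checking every requested site against the regulator's blacklist before forwarding traffic. A minimal sketch, with an invented blacklist, of how such a check might look:

```python
# Invented example domains; a real deployment would have to ingest the
# regulator's actual registry feed, which is the step the providers refuse.
BLACKLIST = {"blocked.example", "forbidden.example"}

def allow(hostname: str) -> bool:
    """Return False if the destination (or any parent domain) is blacklisted."""
    parts = hostname.lower().split(".")
    # Check the hostname itself and every parent domain,
    # e.g. news.blocked.example -> blocked.example -> example
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return not (candidates & BLACKLIST)

print(allow("blocked.example"))       # False
print(allow("news.blocked.example"))  # False
print(allow("example.org"))           # True
```

The sketch also shows why the providers object: implementing it means the VPN actively censors its own users on the regulator's behalf.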

In response to the request, TorGuard said in a blogpost it had "taken steps to remove all physical server presence in Russia," wiping its Moscow and St Petersburg servers.

"We would like to be clear that this removal of servers was a voluntary decision by TorGuard management and no equipment seizure occurred," it wrote.

"We do not store any logs, so even if servers were compromised it would be impossible for customers' data to be exposed. TorGuard has not disclosed any information to the Russian authorities and our legal team has been notified of this request."

TorGuard apologized for the sudden location removal and said it was rolling out additional servers in neighboring countries to "ensure fast VPN download speeds for everyone in the region."

Because most of the services are not based in Russia, they could be tricky to ban effectively. Roskomnadzor has a spotty record when it comes to blocking services based elsewhere – its haphazard attempt to block the Telegram messaging service springs to mind – though that is perhaps why lawmakers are keen to make the Russian internet (Runet) separable from the wider internet.

Of course, a ban wouldn't be necessary if the VPN providers played ball. But Roskomnadzor may not have much luck on that front – TorGuard isn't the only one that's planning to resist.

"The strong censorship and oppression of the Russian regime was the main reason for us to avoid locating any of our servers inside Russia," said VyprVPN operator Golden Frog in a blogpost.

"Our core mission is to keep the internet open and free, and therefore, we will continue to provide uncensored access to the internet in Russia and around the world. We will not cooperate with the Russian government in their efforts to censor VPN services."

Panama-based NordVPN told a concerned user on Twitter: "Rest assured, compliance is not something that we will consider."

OpenVPN tweeted a link to an article about Roskomnadzor's threat, saying: "OpenVPN is committed to our users and customers by protecting them against cyberthreats and providing secure and private access to their information from anywhere in the world."
https://www.zdnet.com/article/vpn-pr...atens-to-bite/





Virus Attacks Spain's Defense Intranet, Foreign State Suspected: Paper

A computer virus infected the Spanish Defence Ministry’s intranet this month with the aim of stealing high tech military secrets, El País newspaper said on Tuesday, citing sources leading the investigation as suspecting a foreign power behind the cyberattack.

A Defence Ministry spokesman said the ministry would not comment.

El País said the virus was apparently introduced via email and was first spotted at the beginning of March. However, it could have gone undetected for months in an intranet with more than 50,000 users.

Although the network does not carry classified information, the paper said its sources were concerned about a wider infection to other networks with the purpose of accessing information related to secret military technology.

The investigation had yet to determine who was responsible for the cyberattack, but sources told El País it was too technically complex to be done by standard hackers. “There is a state behind it,” they said.

Reporting by Jose Elias Rodriguez; Editing by Axel Bugge and Frances Kerry
https://uk.reuters.com/article/us-sp...-idUKKCN1R7115





This Spyware Data Leak Is So Bad We Can't Even Tell You About It

A consumer spyware vendor left a lot of incredibly sensitive and private data, including intimate pictures and private call recordings, for all to see on a server freely accessible over the internet. And it still hasn’t taken the data down.
Lorenzo Franceschi-Bicchierai

This story is part of When Spies Come Home, a Motherboard series about powerful surveillance software ordinary people use to spy on their loved ones.

A company that sells consumer-grade software that lets customers spy on other people’s calls, messages, and anything they do on their cell phones left more than 95,000 images and more than 25,000 audio recordings on a database exposed and publicly accessible to anyone on the internet. The exposed server contains two folders with everything from intimate pictures to recordings of phone calls, given that the app markets itself mostly to parents.

Troy Hunt, a researcher who maintains the breach database Have I Been Pwned?, analyzed the database and said that there were around 16 gigabytes of images and around 3.7 gigabytes of MP3 recordings in it. Motherboard confirmed his analysis. (It’s hard to say how many unique pictures and recordings there are, however. Some pictures appear to have been uploaded multiple times.)
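Hunt's caveat about unique counts comes down to content hashing: identical files uploaded under different names collapse to a single digest, so raw file counts overstate the number of distinct items. A small illustration with invented data:

```python
import hashlib
from collections import defaultdict

# Invented filenames and bytes, standing in for the exposed image folder.
files = {
    "photo_001.jpg": b"\xff\xd8same bytes",
    "photo_072.jpg": b"\xff\xd8same bytes",   # re-upload of the same picture
    "photo_003.jpg": b"\xff\xd8other bytes",
}

# Group names by SHA-256 digest; each key is one unique file.
by_digest = defaultdict(list)
for name, blob in files.items():
    by_digest[hashlib.sha256(blob).hexdigest()].append(name)

print(len(files), "files,", len(by_digest), "unique")  # 3 files, 2 unique
```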

This breach is just the latest in a seemingly endless series of exposures or leaks of incredibly sensitive data collected by companies that promise to provide services for parents to keep children safe, monitor employees, or spy on spouses. In the last two years, there have been 12 stalkerware companies that have either been breached or left data exposed online: Retina-X (twice), FlexiSpy, Mobistealth, Spy Master Pro, SpyHuman, Spyfone, TheTruthSpy, Family Orbit, mSpy, Copy9, and Xnore.

We can’t tell you the name of the company that’s the latest—but certainly not the last—to join that list. That’s because despite our repeated efforts to alert the company to the leak, it has yet to fix the problem or acknowledge our request for comment. Because the leaked data violates the privacy of hundreds if not thousands of people, and because that data is still very easy for anyone to find and access, even naming the company publicly could lead bad actors to it.

Got a tip? You can contact this reporter securely on Signal at +1 917 257 1382, OTR chat at lorenzofb@jabber.ccc.de, or email lorenzo@motherboard.tv

The exposed database was found by security researcher Cian Heasley, who contacted us when he found it earlier this year. The database is still online, and has been online for at least six weeks. Pictures and audio recordings are still being uploaded to it nearly every day. We won’t name the company to protect the victims who may be getting spied on without their consent or knowledge, and—on top of that—are having their pictures and calls uploaded to a server open to anyone with an internet connection.

We have spent weeks trying to ethically disclose this vulnerability to the company and to get the private images secured. We reached out to the company’s official contact email, displayed on its site. No answer. We reached out to the Gmail address of the site’s administrator, who also appears to be the company’s founder. No answer. We left a voicemail to a Google Voice number listed on the site’s WHOIS details. No answer.

We reached out to GoDaddy, the domain registrar for the company’s main site, as well as the leaky database, which is on the same domain. Company employees told us there’s not much they can do.

The US Federal Trade Commission did not respond to a request for comment.

The company that’s hosting the actual content, a hosting provider called Codero, did not respond to multiple requests for help via email.

So, as of today, weeks after Heasley found the database and Motherboard tried to warn the company, the pictures and audio recordings are still out there, for all to see and listen to.

Motherboard was unable to reach any victims or customers because the exposed server does not contain any contact information, such as email addresses or telephone numbers of victims or users. The data uploaded, in any case, is still highly sensitive, possibly identifying, and in some cases consists of nude and otherwise intimate images.

The spyware app that's leaking this data allows its customers to monitor pretty much everything on the cellphone where it’s installed. The spyware lets its operator read the target’s phone contacts, text messages, listen to calls, record ambient sound by turning on the microphone, and much more.

Heasley, who is analyzing the security of several stalkerware apps, said that the URL of the database was exposed in the source code of the app. The URL is also relatively easy to guess.

“This is the level of security these guys work with,” Heasley, who studies computer security and forensics at Napier University in Edinburgh, Scotland, told Motherboard in an online chat. “It'd be funnier if it wasn't stalking victim's data.”
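Finding a hard-coded endpoint the way Heasley describes is often as simple as pattern-matching decompiled app source for URL strings. A toy example, with an invented code snippet standing in for the app:

```python
import re

# Invented "decompiled" source; real stalkerware apps similarly embed
# their backend endpoints as string constants.
decompiled = '''
public class Uploader {
    static final String UPLOAD = "https://backend.example.com/uploads/";
    static final String API = "https://backend.example.com/api/v1/sync";
}
'''

# Grab anything that looks like an http(s) URL.
urls = re.findall(r'https?://[^\s"\']+', decompiled)
print(urls)
```

Once an endpoint like this is known, whether it leaks data depends entirely on server-side access controls, which in this case were absent.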

“People should not be using these tools in the first place,” Eva Galperin, who has researched stalkerware and is the director of cybersecurity at the Electronic Frontier Foundation, told Motherboard. “But the fact that these companies aren’t very good at securing their own data is just the cherry on the bad idea sundae.”

Additional reporting by Joseph Cox.
https://motherboard.vice.com/en_us/a...dio-recordings





Hacking Lawyers or Journalists Is Totally Fine, Says Notorious Cyberweapons Firm
Patrick Howell O'Neill

The founder and CEO of NSO Group, the notorious Israeli hacking company with customers around the world, appeared on CBS’s 60 Minutes Sunday night to defend the use of his company’s tools in hacking and spying on lawyers, journalists, and minors when the company’s customers determine the ends justify the means.

Founded in 2010, NSO Group has reportedly sold hacking tools to dictators including those in Saudi Arabia, the United Arab Emirates, and across Central Asia—a group of decision-makers whose track record includes numerous examples of human rights abuses and oppression of dissent. NSO’s tools have been directly involved in the arrest of human rights activists and, in Mexico at least, spying on lawyers and journalists in an effort to catch the drug lord Joaquin “El Chapo” Guzman.

“In order to catch El Chapo, for example, they had to intercept a journalist, an actress, and a lawyer,” NSO Group founder Shalev Hulio told 60 Minutes. “Now, by themselves, they are not criminals, right? But if they are in touch with a drug lord and in order to catch them, you need to intercept them, that’s a decision an intelligence agency should get.”

Hulio’s company, worth hundreds of millions of dollars, first made global headlines in 2016 when its tools were used by the authoritarian government of the UAE in order to spy on Ahmed Mansoor, an award-winning human rights activist. The company has never fully addressed the spying; Mansoor currently sits, untried and unable to regularly contact his family, in an unidentified prison somewhere in the UAE on charges of criticizing the UAE government.

The spotlight did not dissuade the company. Instead, it served as an advertisement to other authoritarian governments about NSO Group’s exceptional ability to hack into new iPhones, a highly valued capability. Observers at companies like the mobile security firm Lookout and the University of Toronto’s Citizen Lab saw NSO Group’s footprint grow to dozens of new countries, a strong indication that the company’s sales department was busier than ever.

“Selling this technology to those who would spy on journalists represents a major threat to human rights and press freedom worldwide, especially in light of the fact that NSO sells its technologies to countries, like Saudi Arabia and Mexico, where journalists are arrested and routinely murdered,” Ron Deibert, director of Citizen Lab, told Gizmodo by email. “When you include Citizen Lab’s and Amnesty International’s research on a Saudi Arabian government operator, there are now 11 publicly-reported cases of journalists and civic media targeted with Pegasus spyware. In the case of Mexican targets, these were not friends or confidants of the drug cartels—these were investigative journalists reporting on the drug cartels, including two colleagues and the widow of a journalist who was murdered in a cartel-linked hit.”

NSO Group’s tools, the most famous of which is its Pegasus spyware, have also been used to spy on minors. In his defense, Hulio raised a hypothetical scenario in which intelligence agencies could have stopped 9/11 by spying on Osama Bin Laden’s teenage son. It is an incredible act of contortion on the part of Hulio: the actual child NSO Group’s tools were used to spy on was the child of a journalist, not a terrorist.

“I only say that we are selling Pegasus to prevent crime and terror,” Hulio said last night.

I spoke to Mansoor leading up to his 2017 arrest. He described a previous arrest at the hands of the UAE’s secret police in which he spent eight months in prison for, again, criticizing the country’s unelected leaders. Mansoor told me that when he was released that time, his children cried because they did not recognize their underweight father.

There is virtually no transparency around how NSO Group sells its weapons. Although the firm’s founder went on CBS arguing it has a “three-layer” approval process including the Israeli Ministry of Defense and its own ethics committee, both are opaque and essentially invisible from the outside. Additionally, neither has said anything about the accusations of abuses by NSO’s customers, except when the Ministry of Defense unceremoniously denied an Amnesty International demand to revoke NSO Group’s export license.

In 2017, after UAE was caught using NSO Group’s spyware against Mansoor, the activist was arrested and eventually sentenced to 10 years in prison.

When Saudi Arabian journalist Jamal Khashoggi was murdered last year, NSO Group was accused in a lawsuit of spying on Khashoggi’s friends and colleagues in order to surveil him, but not Khashoggi himself. In last night’s 60 Minutes episode, Hulio denied that the firm’s tools were used to hack into Khashoggi’s phone—which, again, is not the accusation in court. For a company that seems to be increasingly comfortable in the spotlight, it was a well-practiced little public relations move that is unrelated to reality.

The company is being sued by Montreal-based Saudi dissident Omar Abdulaziz, a Khashoggi collaborator who spoke with the now-deceased journalist frequently by messaging app. Abdulaziz claims NSO’s tools were used to hack his phone and, through him, spy on Khashoggi’s conversations.

When asked by CBS journalist Lesley Stahl if NSO Group’s tools were used to spy on Khashoggi’s friends, colleagues, and fellow activists, Hulio would not comment. Asked if NSO Group had sold surveillance software to the Saudis for $55 million, Hulio grinned widely, his face grew red, and he laughed and said, “don’t believe newspapers.” He declined to comment on “specific customers.”

NSO, which has hired a PR agency on full-time retainer, has received no apparent punishment from the Israeli regulators who must approve all of its sales. The company did show CBS’s cameras that its offices have video games and exercise classes, but it would not show employees’ faces.

Hulio claimed his software was responsible for saving tens of thousands of lives. An unnamed Western intelligence official reportedly told CBS that the company is a game changer in the world of intelligence gathering. He would not, however, specifically answer charges of misuse, lack of transparency or abuse by potential customers.

NSO Group is now called Q, a winking reference to the gadget-maker serving James Bond and also, not coincidentally, a supremely difficult-to-Google brand name.
https://gizmodo.com/hacking-lawyers-...ous-1833533568





Telegram Adds ‘Delete Everywhere’ Nuclear Option — Killing Chat History
Natasha Lomas

Telegram has added a feature that lets a user delete messages in one-to-one and/or group private chats, after the fact, and not only from their own inbox.

The new ‘nuclear option’ delete feature allows a user to selectively delete their own messages and/or messages sent by any/all others in the chat. They don’t even have to have composed the original message or begun the thread to do so. They can just decide it’s time.

Let that sink in.

All it now takes is a few taps to wipe all trace of a historical communication — from both your own inbox and the inbox(es) of whoever else you were chatting with (assuming they’re running the latest version of Telegram’s app).

Just over a year ago Facebook’s founder Mark Zuckerberg was criticized for silently and selectively testing a similar feature by deleting messages he’d sent from his interlocutors’ inboxes — leaving absurdly one-sided conversations. The episode was dubbed yet another Facebook breach of user trust.

Facebook later rolled out a much diluted Unsend feature — giving all users the ability to recall a message they’d sent but only within the first 10 minutes.

Telegram has gone much, much further. This is a perpetual, universal unsend of anything in a private chat.

The “delete any message in both ends in any private chat, anytime” feature has been added in an update to version 5.5 of Telegram — which the messaging app bills as offering “more privacy”, among a slate of other updates including search enhancements and more granular controls.

To delete a message from both ends, a user taps on the message, selects ‘delete’ and is then offered a choice of ‘delete for [the name of the other person in the chat, or ‘everyone’ in a group]’ or ‘delete for me’. Selecting the former deletes the message everywhere, while the latter just removes it from your own inbox.

Explaining the rationale for adding such a nuclear option via a post to his public Telegram channel yesterday, founder Pavel Durov argues the feature is necessary because of the risk of old messages being taken out of context — suggesting the problem is getting worse as the volume of private data stored by chat partners continues to grow exponentially.

“Over the last 10-20 years, each of us exchanged millions of messages with thousands of people. Most of those communication logs are stored somewhere in other people’s inboxes, outside of our reach. Relationships start and end, but messaging histories with ex-friends and ex-colleagues remain available forever,” he writes.

“An old message you already forgot about can be taken out of context and used against you decades later. A hasty text you sent to a girlfriend in school can come to haunt you in 2030 when you decide to run for mayor.”

Durov goes on to claim that the new wholesale delete gives users “complete control” over messages, regardless of who sent them.

However, that’s not really what it does. More accurately, it removes control from everyone in any private chat and opens the door to the most paranoid, the lowest common denominator, and/or a sort of general entropy/anarchy — allowing anyone in any private thread to edit or even completely nuke the chat history whenever they wish.

The feature could allow for self-serving, selectively silent and/or malicious edits that are intended to gaslight/screw with others, such as by making them look mad or bad. (A quick screengrab later and a ‘post-truth’ version of a chat thread is ready for sharing elsewhere, where it could be passed off as a genuine conversation even though it’s manipulated and therefore fake.)

Or else the motivation for editing chat history could be a genuine concern over privacy, such as to be able to remove sensitive or intimate stuff — say after a relationship breaks down.

Or just for kicks/the lolz between friends.

Either way, whoever deletes first seizes control of the chat history — taking control away from everyone else in the process. RIP consent. This is possible because Telegram’s implementation of the super delete feature covers all messages, not just your own, and literally removes all trace of the deleted comms.

So unlike rival messaging app WhatsApp, which also lets users delete a message for everyone in a chat after the fact of sending it (though in that case the delete everywhere feature is strictly limited to messages a person sent themselves), there is no notification automatically baked into the chat history to record that a message was deleted.

There’s no record, period. The ‘record’ is purged. There’s no sign at all there was ever a message in the first place.

We tested this — and, well, wow.

It’s hard to think of a good reason not to create, at the very least, a record that a message was deleted, which would offer a check on misuse.

But Telegram has not offered anything. Anyone can secretly and silently purge the private record.

Again, wow.

There’s also no way for a user to recall a deleted message after deleting it (even the person who hit the delete button). At face value it appears to be gone for good. (A security audit would be required to determine whether a copy lingers anywhere on Telegram’s servers for standard chats; only its ‘secret chats’ feature uses end-to-end encryption which it claims “leave no trace on our servers”.)

In our tests on iOS we also found that no notification is sent when a message is deleted from a Telegram private chat, so other people in an old conversation might simply never notice changes have been made, or not until long after. After all, human memory is far from perfect, and chat threads are exactly the sort of fast-flowing communication medium where it’s really easy to forget the exact details of what was said.

Durov makes that point himself in defence of enabling the feature, arguing in favor of it so that silly stuff you once said can’t be dredged back up to haunt you.

But it cuts both ways. (The other way being the ability for the sender of an abusive message to delete it and pretend it never existed, for example, or for a flasher to send and subsequently delete dick pics.)

The feature is so powerful there’s clearly massive potential for abuse. Whether that’s by criminals using Telegram to sell drugs or traffic other stuff illegally, and hitting the delete everywhere button to cover their tracks and purge any record of their nefarious activity; or by coercive/abusive individuals seeking to screw with a former friend or partner.

The best way to think of Telegram now is that all private communications in the app are essentially ephemeral.

Anyone you’ve ever chatted to could decide to delete everything you said (or they said) and do so without your knowledge, let alone your consent.

The lack of any notification that a message has been deleted will certainly open Telegram to accusations it’s being irresponsible by offering such a nuclear delete option with zero guard rails. (And, indeed, there’s no shortage of angry comments on its tweet announcing the feature.)

Though the company is no stranger to controversy and has structured its business intentionally to minimize the risk of it being subject to any kind of regulatory and/or state control, with servers spread opaquely all over the world, and a nomadic development operation which sees its coders regularly switch the country they’re working out of for months at a time.

Durov himself acknowledges there is a risk of misuse of the feature in his channel post, where he writes: “We know some people may get concerned about the potential misuse of this feature or about the permanence of their chat histories. We thought carefully through those issues, but we think the benefit of having control over your own digital footprint should be paramount.”

Again, though, that’s a one-sided interpretation of what’s actually being enabled here. Because the feature inherently removes control from anyone it’s applied to. So it only offers ‘control’ to the person who first thinks to exercise it. Which is in itself a form of massive power asymmetry.

For historical chats the person who deletes first might be someone with something bad to hide. Or it might be the most paranoid person with the best threat awareness and personal privacy hygiene.

But suggesting the feature universally hands control to everyone simply isn’t true.

It’s an argument in line with a libertarian way of thinking that lauds the individual as having agency — and therefore seeks to empower the person who exercises it. (And Durov is a long time advocate for libertarianism so the design choice meshes with his personal philosophy.)

On a practical level, the presence of such a nuclear delete on Telegram’s platform arguably means the only sensible option for all users that don’t want to abandon the platform is to proactively delete all private chats on a regular and rolling basis — to minimize the risk of potential future misuse and/or manipulation of their chat history. (Albeit, what doing that will do to your friendships is a whole other question.)

Users may also wish to backup their own chats because they can no longer rely on Telegram to do that for them.

At the other end of the spectrum — for those wanting to be really sure they totally nuke all trace of a message — there are a couple of practical pitfalls that could throw a spanner in the works.

In our tests we found Telegram’s implementation did not delete push notifications. So with recently sent and deleted messages it was still possible to view the content of a deleted message via a persisting push notification even after the message itself had been deleted within the app.

Though of course, for historical chats — which is where this feature is being aimed; aka rewriting chat history — there’s not likely to be any push notifications still floating around months or even years later to cause a headache.

The other major issue is that the feature is unlikely to function properly on earlier versions of Telegram. So if you go ahead and ‘delete everywhere’, there’s no way to try again if a message was not successfully purged because someone in the chat was still running an older version of the app.

Plus of course if anyone has screengrabbed your chats already there’s nothing you can do about that.

In terms of wider impact, the nuclear delete might also have the effect of encouraging more screengrabbing (or other backups) — as users hedge against future message manipulation and/or purging. Or to make sure they have a record of any abusive messages.

Which would just create more copies of your private messages in places you can’t control at all, and where they could potentially leak if the person creating the backups doesn’t secure them properly — so the whole thing risks being counterproductive to privacy and security, really. Because users are being encouraged to mistrust everything.

Durov claims he’s comfortable with the contents of his own Telegram inbox, writing on his channel that “there’s not much I would want to delete for both sides” — while simultaneously claiming that “for the first time in 23 years of private messaging, I feel truly free and in control”.

The truth is the sensation of control he’s feeling is fleeting and relative.

In another test we performed we were able to delete private messages from Durov’s own inbox, including missives we’d sent to him in a private chat and one he’d sent us. (At least, in so far as we could tell — not having access to Telegram servers to confirm. But the delete option was certainly offered and content (both ours and his) disappeared from our end after we hit the relevant purge button.)

Only Durov could confirm for sure that the messages have gone from his end too. And most probably he’d have trouble doing so as it would require incredible memory for minor detail.

But the point is if the deletion functioned as Telegram claims it does, purging equally at both ends, then Durov was not in control at all because we reached right into his inbox and selectively rubbed some stuff out. He got no say at all.

That’s a funny kind of agency and a funny kind of control.

One thing certainly remains in Telegram users’ control: The ability to choose your friends — and choose who you talk to privately.

Turns out you need to exercise that power very wisely.

Otherwise, well, other encrypted messaging apps are available…
https://techcrunch.com/2019/03/25/going-going-gone/





Congress Introduces Bipartisan Legislation to Permanently End the NSA’s Mass Surveillance of Phone Records
Posted 12:00 EDT on March 29, 2019

FOR IMMEDIATE RELEASE: March 29, 2019
Contact: Laila Abdelaziz, laila@fightforthefuture.org

Fight for the Future welcomes the introduction of the “Ending Mass Collection of Americans’ Phone Records Act” as a major first step in restoring civil liberties eroded by the U.S. government’s mass surveillance machine.

Yesterday, Senators Rand Paul (R-KY) and Ron Wyden (D-OR) and Representatives Justin Amash (R-MI 03) and Zoe Lofgren (D-CA 19) introduced the “Ending Mass Collection of Americans’ Phone Records Act.” This bipartisan bill (read the full text here) would permanently shut down the ineffective, and nearly two decades old, National Security Agency (NSA) program surveilling all of our telephone records.

Responding to the bill’s introduction in both the Senate and House, Fight for the Future campaigner Laila Abdelaziz had this to say:

“This bill will once-and-for-all end the NSA’s ineffective and harmful mass surveillance of all of our phone records. It’s a welcome and necessary first step in a longer fight to dismantle the U.S. government’s sprawling surveillance state.

This bill was introduced on the same day the public learned about a separate phone records surveillance program based out of the Drug Enforcement Administration (DEA)—which resulted in the collection of billions of phone records by the DEA without proper legal review. These government programs rely on powerful telecommunications companies that store our records and data in bulk by default. The consequences of such surveillance in a data-driven economy are frightening.

We hope this bill is the first of many steps taken by this Congress to end the USA PATRIOT Act and restore key civil liberties required for a healthy democratic society.

Fight for the Future is urging everyone to call their members of Congress and ask them to support the “Ending Mass Collection of Americans’ Phone Records Act.” This bipartisan bill is a no-brainer and should be passed and signed into law as swiftly as possible. Furthermore, we urge Congressional lawmakers to investigate the DEA’s phone records program immediately.”
https://www.fightforthefuture.org/ne...egislation-to/





Tech Companies Not 'Comfortable' Storing Data In Australia, Microsoft Warns

President says customers are asking company to build data centres elsewhere as a result of the government’s encryption bill
Paul Karp

Companies and governments are “no longer comfortable” about storing their data in Australia as a result of the encryption legislation, Microsoft has warned.

On Wednesday the company’s president and chief legal officer, Brad Smith, said customers were asking it to build data centres elsewhere as a result of the changes, and the industry needed greater protection against the creation of “systemic weaknesses” in their products.

This week the Australian tech industry renewed calls for further amendments to controversial encryption-cracking legislation at an industry forum in Sydney.

Also on Wednesday, Labor’s spokesman on the digital economy, Ed Husic, told the StartupAus forum in Sydney he wished he could “turn back time”, expressing regret for Labor’s role in passing the bill and explaining the opposition feared it would be blamed for a terrorist attack over Christmas if it refused.

In Canberra, Smith told the Committee for the Economic Development of Australia the law had not yet changed Microsoft’s operations in Australia, but the company was worried about the law’s “potential consequences”.

Smith said the law was not written with the intent to create backdoors in technology, but the safeguard that companies would not have to create “systemic weaknesses” was “not defined”. After a deal between the Coalition and Labor a definition was added, but the industry has said it is still unclear.

Smith said Australia had “emerged as a country where companies and governments were comfortable” with storing data, a boon to the tech sector and the economy.

“But when I travel to other countries I hear companies and governments say ‘we are no longer comfortable putting our data in Australia’.

“So they are asking us to build more data centres in other countries, and we’ll have to sort through those issues.”

The Australian Signals Directorate director general, Mike Burgess, has labelled it a “myth” that the reputation of Australian tech companies would suffer as a result of the encryption bill, which the Coalition passed with Labor support on the final parliamentary sitting day of 2018.

At the Sydney forum, that claim was rubbished by industry participants. Eddie Sheehy, a tech investor and former chief executive of the cybersecurity vendor Nuix, said on that point Burgess “doesn’t know what he’s saying”.

He said in response to a later question the law had the “capacity to turn many Australian companies into Huawei” in that they might become “untouchable in many places”.

Nicola Nye, the chief of staff at FastMail, said some customers were no longer using her service as a result of the law, and others had expressed concerns through submissions to the parliamentary joint committee on intelligence and security. The committee is examining proposed amendments and will report next week.

Husic acknowledged that the tech industry was upset, explaining that Labor had passed the bill because national security agencies had said it was urgent.

He said the opposition could not rule out an “attempt by the other side of politics to blame us if, God forbid, something should happen over that period of time”.

Labor has accused the Coalition of reneging on a deal to support amendments consistent with a bipartisan security committee report. Labor’s amendments better define “systemic weakness” and require a fresh warrant before ordering tech companies to assist or build a new capability to access electronic communications.

Husic told the forum that Labor would push for those changes “in this current parliament or the next”, regardless of whether it won or lost the May election.

Faced with an angry questioner from the Science party, Husic said he could not change the fact the bill had passed. “I wish I could go back in time and alter things: I can’t.”
https://www.theguardian.com/technolo...icrosoft-warns





Why We Moved Our Servers to Iceland
Adriaan van Rossum

As the founder of Simple Analytics, I have always been mindful of the need for trust and transparency for our customers. We want to be accountable to our customers, so they can sleep in peace. The choices we make have to be optimal, in terms of privacy, for our customers and their visitors. One of the crucial choices was the location of our servers.

In the last few months, we gradually moved our servers to Iceland. In this blog post, I’d like to explain how we achieved that and, most importantly, why. It wasn’t an easy process, and I would like to share what we learned. There are some technical parts in this article, which I’ve tried to write in an understandable way, but forgive me if it’s too technical.

Why move our servers?

It all started with our website being added to EasyList, a list of domain names used by popular ad blockers. I asked why Simple Analytics was added, because we don’t track visitors of our customers’ websites. We even respect the “Do Not Track” setting in the browser.

So I replied the following to the Pull Request on GitHub:

[…] So if we keep blocking the companies that do good, and respect the privacy of the users, what kind of sign is it to just block those companies? I think it’s wrong and we shouldn’t put every company on the list just because they are sending a request. […]

I got a reply to my comment from @cassowary714:

Everyone says what you are saying, but I don’t want to see my requests sent to a US company (in your case, Digital Ocean […]

I didn’t like this reply at first, but after sharing it with my community, people pointed out to me that he was indeed correct: the US government is able to access the data of our users. At that time, our servers were indeed running on DigitalOcean, and they could pull out our drive and read our data.

The solution is somewhat technical, so bear with me. You can make a stolen (or otherwise detached) drive unusable for others by encrypting the data on it, which makes the data very difficult to read for anyone without the encryption key (note: only Simple Analytics has this key). It would still be possible to recover small parts of the data by physically reading out the memory of the server. Memory can be thought of as a small but super-fast kind of drive that allows the processor of the server to run efficiently. A server does not function without memory, so to some degree we still need to trust the hosting provider.
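On Linux, encrypting a data volume like this is typically done with LUKS via the cryptsetup tool. A minimal sketch of the idea (device names and mount points are placeholders, not our actual setup):

```shell
# Encrypt a data volume with LUKS (all names here are placeholders).
# WARNING: luksFormat destroys any existing data on the device.
cryptsetup luksFormat /dev/sdb1           # set the passphrase / encryption key
cryptsetup open /dev/sdb1 data            # unlock: creates /dev/mapper/data
mkfs.ext4 /dev/mapper/data                # one-time: put a filesystem on it
mount /dev/mapper/data /var/lib/data      # use it like any normal disk

# To whoever pulls the raw drive, /dev/sdb1 is unreadable ciphertext:
cryptsetup close data                     # lock the volume again
```

The trade-off described above remains: the key must be entered at unlock time, and data in the server’s memory is still readable to whoever controls the hardware.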

This challenged me to think about where to move our servers.

Our next location

I started with some basic searches and found a Wikipedia page on Internet censorship and surveillance by country. It contains the “Enemies of the Internet” list by Reporters Without Borders, a Paris-based international non-governmental organization that advocates freedom of the press. It classifies countries as enemies of the internet because “all of these countries mark themselves out not just for their capacity to censor news and information online but also for their almost systematic repression of Internet users.”

Apart from this list, there is an alliance called Five Eyes, a.k.a. FVEY: Australia, Canada, New Zealand, the United Kingdom, and the United States. In recent years, documents have shown that they intentionally spy on one another’s citizens and share the collected information with each other in order to circumvent restrictive domestic regulations on spying (sources). Former NSA contractor Edward Snowden described the FVEY as a “supra-national intelligence organization that doesn’t answer to the laws of its own countries.” Other countries work together with the FVEY in international cooperatives including Denmark, France, the Netherlands, Norway, Belgium, Germany, Italy, Spain, and Sweden (the so-called 14 Eyes). I couldn’t find evidence of the 14 Eyes alliance abusing their combined intelligence.

At this point, we were pretty sure not to use any of the countries on the “Enemies of the Internet” list, and, just to be safe, to skip the countries in the 14 Eyes alliance as well. For Simple Analytics, this gave enough reason to avoid those countries for storing our customers’ data.

The Wikipedia page mentioned earlier reads the following for Iceland:

Censorship is prohibited by the Icelandic Constitution and there is a strong tradition of protecting freedom of expression that extends to the use of the Internet. […]

Iceland

While researching the best country privacy-wise, Iceland kept popping up, so I did some thorough research on it. Please keep in mind that I don’t speak Icelandic, which may mean I missed important information. Let us know if you have any feedback.

According to the Freedom on the Net 2018 report from Freedom House, Iceland and Estonia both scored 6/100 (lower is better) on the Internet Freedom Score, making them the most privacy-friendly countries rated. Be aware that not every country has been rated.

Iceland is not a member of the European Union, although the country is part of the European Economic Area and has agreed to follow legislation regarding consumer protection and business law similar to other member states. This includes the Electronic Communications Act 81/2003, which implemented data retention requirements.

The law applies to telecommunication providers and mandates the retention of records for six months. It also states that companies may only deliver information on telecommunications in criminal cases or on matters of public safety and that such information may not be given to anyone other than the police or the public prosecution.

Although Iceland largely follows the laws of the European Economic Area, it has its own approach to privacy. For example, the Icelandic Data Protection Act encourages anonymity of user data. ISPs and content hosts are not held legally liable for the content they host or transmit. According to Icelandic law, it’s not the domain name provider but the registrant of an .is domain who is responsible for ensuring the use of the domain is within the limits of the law (ISNIC). The government does not place any restrictions on anonymous communication, and no registration is required when purchasing a SIM card.

Another advantage of moving to Iceland is the country’s climate and location. Servers produce a lot of heat, and with Reykjavík (Iceland’s capital, where most data centers are located) averaging 40.41°F (4.67°C), it’s a great location to cool them down. For each watt used to run servers, storage, and network equipment, proportionally very little is needed for cooling, lighting, and other overhead. On top of that, Iceland is the world’s largest green energy producer per capita and largest electricity producer per capita, with approximately 55,000 kWh per person per year; the EU average is less than 6,000 kWh. Most hosting providers in Iceland get 100% of their electricity from renewable energy sources.

If you draw a straight line from San Francisco to Amsterdam, you will cross Iceland. Most of Simple Analytics’ customers are in the US and Europe, so this geographical location makes sense. The privacy-friendly laws and the environmentally friendly approach of Iceland made it even easier for us to choose it as the new location for our servers.

Moving our servers

First, we needed to find a hosting provider in Iceland. There are quite a few, and it’s really hard to know if you have the best one. We didn’t have the resources to try them all, so instead we set up automation scripts (Ansible) while configuring the server, so we could easily move to another provider if we needed to. We chose 1984, a company with the slogan “Safeguarding privacy and civil rights since 2006”. We liked that slogan and asked them a few questions about how they would handle our data. They reassured us (and they use only electricity from renewable energy sources), so we proceeded with installing our main server.

However, we hit a few roadblocks during this process. This section of the article is quite technical; feel free to skip to the next. When you have an encrypted server, you need to unlock it with a private key. This key can’t be stored on the server, as that would defeat the purpose of encrypting. So if the key isn’t on the server, we need to enter it remotely every time the server boots. Wait, but what happens after a power failure? Do all requests with page views to your server fail after a reboot?

That’s why we added an extra server in front of the main server. This server is kind of dumb: it just receives the requests with page views and forwards them directly to our main server. When the main server is down, it stores the requests in its own database and re-attempts them until the main server accepts them. So after a power failure, there is no data loss anymore.
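The relay’s core store-and-retry loop can be sketched in shell. This is entirely hypothetical: the real relay queues requests in a database, and forwarding would be an HTTP POST to the main server rather than the simulated check used here.

```shell
#!/bin/sh
# Hypothetical sketch of the relay's retry loop. Each queued request is a
# file in a spool directory; it is re-sent until the main server accepts it.
SPOOL=./spool

forward() {
  # In the real relay this would be something like:
  #   curl --fail --silent -X POST https://main.example/ingest -d @"$1"
  # Here we simulate the main server being reachable via a marker file.
  [ -f ./server_up ]
}

mkdir -p "$SPOOL"
echo '{"page":"/pricing"}' > "$SPOOL/req1"   # a buffered page view
touch ./server_up                            # pretend the main server is back

for req in "$SPOOL"/*; do
  until forward "$req"; do sleep 1; done     # retry until delivery succeeds
  rm "$req"                                  # delivered: drop it from the queue
  echo "delivered $(basename "$req")"
done
```

Because the queue survives on disk, a reboot of the main server only delays delivery rather than losing page views.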

Back to booting up the server. When the encrypted main server boots, we need to enter a password. But we don’t want to travel to Iceland or ask somebody there to enter it, for obvious reasons. To access a server remotely you usually use SSH, a secure communication protocol most people use to talk to their servers. But the SSH service normally only becomes available once the server has fully started, and we needed to connect before that.

Then we found Dropbear, a very small SSH server that you can run from the initial ramdisk (initramfs). This lets us accept external SSH connections during boot, before the encrypted disk is unlocked. We don't have to fly to Iceland to boot our server!

After moving our data from our old server to our new server in Iceland we were finally done. It took us a couple of weeks from start to end, but we are glad we did it.

Only storing the data you need

At Simple Analytics we live by the saying: "Only store data you need." We only collect the minimum.

It's common practice to soft delete data in applications: the data isn't really deleted, just made inaccessible to the end user. We don't do this. If you delete your data, it's gone from our database; we use hard deletes. Note: it will remain in our encrypted backups for a maximum of 90 days, so in case of a bug we can retrieve it.
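The difference between the two approaches can be sketched with SQLite. The table and column names here are illustrative only, not Simple Analytics' actual schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, deleted_at TEXT)")
db.execute("INSERT INTO users (id, email) VALUES (1, 'a@example.com')")

# Soft delete: the row stays in the database, merely flagged as gone.
# The application hides it, but the data is still there.
db.execute("UPDATE users SET deleted_at = datetime('now') WHERE id = 1")

# Hard delete: the row is actually removed from the database.
db.execute("DELETE FROM users WHERE id = 1")
```

After the `DELETE`, the row cannot be recovered from the live database; only backups (which expire after a retention window) could still contain it.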

We don't have deleted_at fields ;-)

For customers, it's important to know what data is kept and what is deleted. When somebody deletes their data, we show them a page with exactly that. We delete the user and their analytics from our database. We also delete the credit card and email address from Stripe (our payment provider). We keep the payment history, which is needed for taxes, and keep our log files and database backups for 90 days.

Question: If you only store little sensitive data, what’s the need for all this protection and extra security?

Well, we want to be the best privacy-focused analytics company in the world. We will do everything within our power to deliver the best analytics tools without invading the privacy of your visitors. By protecting even our large amounts of unidentifiable visitor data, we want to show we take privacy seriously.

What is next?

While we improved the privacy of our platform, we noticed a slight increase in loading time for our embed scripts. This makes sense, because they were previously hosted on Cloudflare's CDN. A CDN is a set of servers spread around the world that decreases loading times for everybody. We are thinking of setting up a very simple CDN of our own with encrypted servers, which would only serve our JavaScript and store the page views temporarily before sending them to our main server in Iceland.
https://blog.simpleanalytics.io/why-...ers-to-iceland





How Japanese Police Turned Cyber Prank into Arresting Cases

Report from a country where the police could barge into your home for simply posting the URL of an infinite-loop page
Shuji Sado

The Counter Cyber Crime Unit of the Hyogo Prefectural Police searched the house of a thirteen-year-old junior high school student on a charge of posting, on an Internet forum, the URL of a page that displays a JavaScript alert message in an infinite loop. She was then taken into custody by the police. You may have heard about this incident in Japan.

JavaScript infinite alert prank lands 13-year-old Japanese girl in hot water
https://arstechnica.com/tech-policy/...t-popup-prank/

Japanese police charge 13-year-old for sharing 'unclosable popup' prank online
https://www.zdnet.com/article/japane...-prank-online/

Perhaps some of you thought this was some kind of prank or fake news. Sadly, this really happened in Japan, and let me stress that it was not caused by a simple misunderstanding on the officers' part or by exaggerated media reports. I would like to reveal how grave the situation in Japan's cyberspace is becoming.

Details on the incident

According to news reports released by NHK, on March 4th the Hyogo Prefectural Police separately searched the houses of three individuals: a thirteen-year-old junior high school girl who lives in Kariya, Aichi prefecture; a 39-year-old man who lives in Yamaguchi prefecture; and a 47-year-old man who lives in Kagoshima prefecture. All three were charged with the same offense: writing on an internet forum the URL of a very small page that runs the following JavaScript.

---
for ( ; ; ) {
window.alert(" ∧_∧ ババババ\n( ・ω・)=つ≡つ\n(っ ≡つ=つ\n`/  )\n(ノΠU\n何回閉じても無駄ですよ~ww\nm9(^Д^)プギャー!!\n byソル (@0_Infinity_)")
}
---

The page containing this script has already been deleted, but there's a URL of an archived copy in the ZDNet and Ars Technica articles, so if you are intrigued, follow the links. (I will refrain from posting the link to the JavaScript page in this article, for fear of getting arrested myself.) The pop-up dialogue contains a message saying "No matter how many times you close this dialogue, it's no use," and the alert reappears in an infinite loop. However, on any commonly used browser you can escape the loop by simply closing the tab, so the whole thing can be dismissed as a silly joke.

This page seems to have been around for a few years, and quite a number of people appear to have posted the URL to each other on certain internet forums. It is still not clear why only these three were targeted by the Hyogo Prefectural Police under these circumstances. There may not be any specific reason.

In addition, the Hyogo Prefectural Police abruptly searched the houses of individuals in Aichi and Kagoshima prefectures, which are outside their jurisdiction, and that doesn't sit right. Just imagine the Los Angeles Police arresting a Texas resident or an Oregon resident. That said, in Japan many cases are handled outside a force's own jurisdiction, so this isn't only a Hyogo problem; perhaps it needs to be addressed as a problem of Japan as a whole.

Legal grounds of the incident

These three individuals were accused of leaving the URL of an infinite-loop page on an Internet forum, and the Hyogo Prefectural Police claims this violates Article 168-2 of Japan's Penal Code.

This article was added to the Penal Code in 2011, and in Japan it is generally known as the "Offense of Creating a Virus". Although the law speaks of viruses, its broad definition was intended to crack down on developing and distributing malware in general. Unfortunately, there is still no official English translation, so I will post a translation made by a volunteer.

From https://coinhiveuser.github.io/chhistory/creruc.html
---
Chapter XIX-2 Crimes Related to Electromagnetic Records of Unauthorized Commands

(Unauthorized Creation of Electromagnetic Records of Unauthorized Commands)

Article 168-2 (1)
A person who, without justifiable grounds, makes or provides following electromagnetic records or other records for the purpose of executing on other persons' computer shall be punished by imprisonment with work for not more than 3 years or a fine of not more than 500,000 yen.

1. Electromagnetic records that do not operate in accordance with other persons' intention, or gives unauthorized commands which act against their intention, when another person uses a computer.
2. In addition to preceding issue, electromagnetic records or other records that unauthorized commands in the preceding issue is written.

(2) The same shall apply to a person who, without justifiable grounds, execute electromagnetic records which of item 1. of the preceding paragraph on other persons' computer.
(3) An attempt of the crime prescribed under the preceding paragraph shall be punished.

(Acquisition of Electromagnetic Records of Unauthorized Commands)

Article 168-3
A person who, for the purpose prescribed for in paragraph (1) of the preceding Article, Acquisition or preservation of electromagnetic records prescribed for in item 1. or 2. of the same Article shall be punished by imprisonment with work for not more than 2 years or a fine of not more than 300,000 yen.

---

Although this translation appears to have been done scrupulously, you might still have a hard time understanding it, because the original Japanese law is written in extremely abstruse and vague language. Simply put, the law states that any person who, without justifiable grounds, develops, obtains, distributes, or offers "software that gives unauthorized commands" will be imprisoned with work or fined; that covers making a computer behave against its user's will, or in a way that does not reflect the intention of the person using it.

There were presumably no justifiable grounds for the three arrested individuals to make other people open the infinite-loop alert page. And at a glance, this problematic page does seem to operate against the reader's intention. Legally speaking, even an attempt is punishable, so even with no victims, the moment they posted the URL it could be a crime. Presumably, that is the police's reasoning.

Be that as it may, is the infinite-loop alert really a program that gives unauthorized commands? If this page, whose simple trick can be ended by closing the browser tab, is illegal, then what about the ads that are far more malicious than this infinite-loop page? There are a lot of those. If the infinite-loop alert is targeted, these ads should be equally targeted. To me, an ad that pops up a bunch of windows or covers the entire screen seems more malicious. However, according to the police, ads are publicly accepted and therefore legal. I honestly don't understand how they arrived at that standard.

Moreover, the three arrested individuals merely wrote on a forum the URL of a page created by someone else, and the case is being treated as an attempted crime. We can't ignore the question of whether it is appropriate to treat a situation in which nothing happens unless you follow the link as an attempted crime.

Coinhive incident — the other weird incident

In response to this infinite-loop incident, Japan's cyber community is greatly disturbed. Three things are happening simultaneously that have stirred the community: anyone involved with the internet, even slightly, feels that the police are too relentless; the trial in an incident known as the Coinhive case is still ongoing; and the Diet is aggressively discussing making downloading illegal.

Putting aside the Diet's attempt to make more and more kinds of downloading illegal, the Coinhive case is an incident to which Articles 168-2 and 168-3 were applied, just as in the infinite-loop alert case.

The case first came to light in June 2018, when 16 individuals from all over Japan were arrested on the charge of embedding Coinhive (https://coinhive.com/) on their websites.

Most of them seem to have caved in to police pressure and paid the fine, and just when the whole affair seemed about to fade into obscurity, Mr. Moro, a web designer and one of the 16 prosecuted, filed an objection to the summary order imposing a fine. The case has now moved to a formal trial, and the court will reach a decision by the end of the month.

What happened to Mr. Moro is detailed on his blog, but to summarize: he added Coinhive to his own website for about a month starting in September 2017. He did this simply to run a test aimed at improving UX; perhaps he wanted to get rid of ads. After adding Coinhive, one of his users pointed out that he should notify visitors about it, and after a while he stopped using Coinhive. Then, a few months later, the police searched his home on the charge of having run Coinhive in the past, and he had to endure many hours of interrogation.

It hasn't been made public what the other 15 individuals went through, but I suppose their situations were quite similar to Mr. Moro's. I have read more than once that when he trialed Coinhive in 2017 it didn't yield much profit, and that many people found it annoying. Still, it is not persuasive to treat this site, which most readers were able to use without any inconvenience, as "a site that gives unauthorized commands". Moreover, by the time the police searched his home, he had already stopped using Coinhive months earlier.

The infinite-loop alert case and the Coinhive case are both said to violate the same law, and in both the police claim the code in question is "software that gives unauthorized commands". But what is the definition of "unauthorized" here? Apparently, anything that is not publicly accepted. In other words, by the police's reasoning, ad scripts, including malicious ones, are publicly accepted and therefore legal, while everything else is not. Many websites run a great deal of JavaScript that users are not aware of; by the police's logic, all of it must be illegal in Japan.

Speaking of which, until a few days ago the Hyogo Prefectural Police, who prosecuted the infinite-loop alert case, had Google Analytics on their own website. When many people pointed this out, they simply removed it from the page. They must have realized that they were themselves infringing Articles 168-2 and 168-3, and quickly swept it under the rug.

Summary

Up to this point, I have discussed two strangely related cases. Articles 168-2 and 168-3, the legal grounds for both, were originally devised to combat malicious malware. When the bill was being debated, the Minister of Justice replied that a person who knowingly leaves a bug in their software untreated could also be prosecuted, which stirred controversy. That led the Ministry of Justice to state that the law targets only those with criminal intent, and that bugs introduced by mistake would not count. With those statements, the law was supposed to incriminate only those involved in spreading malware generally considered malicious.

It has been only a few years since this law was passed, and it has already become something that can be used to prosecute anyone, without any criminal intent, for using a harmless tool. If writing an infinite loop or embedding an unfamiliar JavaScript tool can be branded "an unauthorized command" by the police at any time, then you have to be aware that the police could barge into your home simply for using the internet.

Generally speaking, in Japan, people whose homes are searched or who are arrested by the police tend to get convicted. Moreover, those records follow them for the rest of their lives, affecting not only them but also their family members: they get in the way of higher education and finding a job. It has been reported that the junior high school student and the 39-year-old man who posted the URL of the infinite-loop alert were coerced into confessing "I committed the crime" after oppressive police interrogation. Now these two must defend themselves against people who are not tech-savvy and are likely to stigmatize them as felons who committed serious crimes.

To the people outside of Japan

At the very least, anyone outside of Japan should be careful about the following when entering Japan:

Do not create HTML links.
Do not get yourself involved in any programming including creating a web site.

Especially if you are a programmer who has written loops in the past, the Hyogo Prefectural Police can prosecute you at any time, so you'd better not visit Japan.

Mr. Brendan Eich, the creator of JavaScript, has posted on Twitter that he would serve as an expert witness to attest to JavaScript's safety if he comes to Japan, but he would get arrested the moment he enters the country, as the root of all evil who created "the software that gives (countless) unauthorized commands"! (Please understand that this is said half in jest.)
https://twitter.com/BrendanEich/stat...06486332989440

Yikes. May be in Japan this year, could be expert witness if it would help.
— Brendan Eich (@BrendanEich) March 5, 2019

Back to being serious. The reason I'm writing about what's happening in Japan in English is that we need your help. Under normal circumstances, Japanese people should be the ones to notice this problem and push to revise the penal code. However, modifying penal laws already in effect can take a very long time, even decades, and I'm afraid that prefectural police all over Japan could be detaining innocent people based on their own interpretation in the meantime. Knowing that Japanese public institutions are loath to attract condemnation from abroad, I am hoping that if people around the world decry the inappropriate behavior of the Japanese police, they will refrain from running aggressive and relentless investigations like the ones in these two cases.

Japanese police organizations have no mechanism for receiving feedback from outside, so even if you visit the websites of the Hyogo Prefectural Police (the infinite-loop alert case) and the Kanagawa Prefectural Police (the Coinhive case), you won't find any way to make yourself heard. Both release information on Twitter and Facebook, but they never write back. So I think the best way is to post your honest view on this matter on social media.

When you tweet about it on Twitter, you might like to use a relevant hashtag. Kobe is the central city of Hyogo. My guess is that you are more familiar with Kobe beef than with Hyogo, am I right?

By the way, if you have a GitHub account, I recommend taking a look at the Lets Get Arrested project (https://github.com/hamukazu/lets-get-arrested). It is half in jest, and I don't think it can directly pressure the police, but the more people talk about it, the bigger the influence and impact it can have on the Japanese police. Please jump on the bandwagon and add to the project's stars and forks.


Hall of shame

Hyogo Prefectural Police
web: http://www.police.pref.hyogo.lg.jp/
Twitter: @Hyogo_Police
Facebook: https://www.facebook.com/hyogo.pref.police/

Kanagawa Prefectural Police
web: https://www.police.pref.kanagawa.jp/


Shuji Sado
President & CEO, OSDN Inc. https://osdn.net/users/sado/
https://b.shujisado.com/2019/03/how-...ber-prank.html





Facebook to Fight Belgian Ban on Tracking Users (and Even Non-Users)
Stephanie Bodoni

• Court last year ordered Facebook to stop using cookies
• Belgian privacy watchdog argues social network still in breach

Facebook Inc. is attacking a Belgian court order forcing it to stop tracking local users’ surfing habits, including those of millions who aren’t signed up to the social network.

The U.S. tech giant will come face to face with the Belgian data protection authority in a Brussels appeals court for a two-day hearing starting on Wednesday. The company will challenge the 2018 court order and the threat of a daily fine of 250,000 euros ($281,625) should it fail to comply.

Armed with new powers since the introduction of stronger European Union data protection rules, Belgium’s privacy watchdog argues Facebook “still violates the fundamental rights of millions of residents of Belgium.” The Brussels Court of First Instance in February 2018 ruled that Facebook doesn’t provide people with enough information about how and why it collects data on their web use, or what it does with the information.

“Facebook then uses that information to profile your surfing behavior and uses that profile to show you targeted advertising, such as advertising about products and services from commercial companies, messages from political parties, etc,” the Belgian regulator said in an emailed statement on Wednesday.

Facebook is facing increasing scrutiny in Europe as privacy authorities look to increase the fines they issue under the EU’s new General Data Protection Regulation, which allows penalties as large as 4 percent of a company’s annual revenue. Antitrust regulators have been probing the social network too, with Germany’s Federal Cartel Office last month ordering Facebook to overhaul how it tracks its users’ internet browsing and smartphone apps in the first case to combine privacy with competition enforcement.

Facebook’s Model Attacked by German Antitrust Regulator

Belgium’s data protection authority last year won the court’s backing for its attack against Facebook’s use of cookies, social plug-ins -- the "like" or "share" buttons -- and tracking technologies that are invisible to the naked eye to collect data on people’s behavior during their visits to other sites.

Facebook understands “that people want more information and control over the data Facebook receives from other websites and apps that use our services,” the company said in a statement.

“That’s why we are developing Clear History, which will let you see the websites and apps that send us information when you use them, disconnect this information from your account, and turn off our ability to store it associated with your account going forward,” it said. “We have also made a number of changes to help people understand how our tools work and explain the choices they have, including through our” privacy updates.

After the ruling last year, Facebook said it had “worked hard to help people understand how we use cookies to keep Facebook secure and show them relevant content” and that the cookies and tracking technologies it uses "are industry standard."
https://www.bloomberg.com/news/artic...g-gets-hearing





8chan Looks Like a Terrorist Recruiting Site after the New Zealand Shootings. Should the Government Treat it Like One?
Drew Harwell and Craig Timberg

As most of the world condemned last week’s mass shooting in New Zealand, a contrary story line emerged on 8chan, the online message board where the alleged shooter had announced the attack and urged others to continue the slaughter. “Who should i kill?” one anonymous poster wrote. "I have never been this happy,” wrote another. “I am ready. I want to fight.”

To experts in online extremism, the performance echoed another brand of terrorism — that carried out by Islamist militants who have long used the Web to mobilize followers and incite violence. Their tone, tactics and propaganda were eerily similar. The biggest difference was their ambitions: a white-supremacist uprising, instead of a Muslim caliphate.

As Facebook, YouTube and other tech companies raced to contain the sounds and images of the gruesome shooting, 8chan helped it thrive, providing a no-holds-barred forum that further propelled the extremism and encouraged new attacks.

The persistence of the talk of violence on 8chan has led some experts to call for tougher actions by the world’s governments, with some saying the site increasingly looks like the jihadi forums organized by the Islamic State and al-Qaeda — masters in flexing the Web’s power to spread their ideologies and recruit new terrorists. Critics of 8chan argue that the site, and others like it, may warrant a similar governmental response: close monitoring and, when talk turns to violence, law-enforcement investigation and intervention.

The owner and administrators of 8chan, which is registered as a property of the Nevada-based company N.T. Technology, did not respond to multiple requests for comment through email addresses listed for the site, as well as a request placed through a founder of the site, who said he remains in touch with Jim Watkins, an American who is based in the Philippines and owns the company.

The 8chan site’s Twitter account said Saturday that it “is responding to law enforcement regarding the recent incident where many websites were used by a criminal to publicize his crime,” and noted that it would not comment further. New Zealand police declined to comment on whether they had contacted 8chan.

The 8chan administration is responding to law enforcement regarding the recent incident where many websites were used by a criminal to publicize his crime. We always comply with US law and won't comment further on this incident so as not to disrupt the ongoing investigation.
— 8chan (8ch.net) (@infinitechan) March 16, 2019

But the brazenness of the threats of racist and anti-Muslim violence posted on 8chan poses a striking new challenge to a foundational idea of the Internet: that in all but the most extreme cases, such as child pornography, those hosting sites are not legally or morally responsible for the content others upload to them.

Telecommunications companies in Australia and New Zealand already have taken the rare step of blocking Internet access to 8chan and some other sites. Public pressure is building as well on other companies, including some based in the United States, that provide the technical infrastructure for sites that espouse violence against Muslims, African Americans and Jews.

“This is terrorism. It’s no different than what we see from ISIS,” said Joel Finkelstein, executive director of the Network Contagion Research Institute, which, in partnership with the Anti-Defamation League, studies how hateful ideas spread online. “The platforms are responsible if they are organizing and propagating terror.”

A crackdown would mark an extraordinary step in confronting online extremism. Terrorism experts say U.S. law enforcement and intelligence agencies have been reluctant to treat white supremacists and right-wing groups as terrorist organizations because they typically include Americans among their ranks, creating complex legal and political issues. It’s a thorny issue for tech companies, too: Platforms such as Facebook and Twitter blocked white-supremacist content after the Charlottesville riots in 2017, a watershed moment that sparked a debate about censorship.

Some are also skeptical that any effort to suppress such activity online would be successful, because the Web’s decentralized nature makes targeted takedowns difficult and allows hate groups to quickly retreat underground.

The increasingly hateful tone of 8chan has become a cautionary tale for how corners of the Web can be radicalized. Launched in 2013, the site grew out of an exodus from the lightly moderated message board 4chan and quickly gained an audience as a cauldron for the extreme content few other sites are willing to support. The past week has marked a new low.

“I’d never seen the whole board so happy about what had just happened. Fifty people are dead, and they’re in total ecstasy,” said 8chan’s founder, Fredrick Brennan, who said he stepped down as an administrator in 2016 and stopped working with the site’s ownership in December.

Brennan said he has been stunned to see how little the current administrators have done to curb violent threats, and he voiced remorse over his role in creating a site that now calls itself the "darkest reaches of the Internet.” But he worries there are no true technical solutions beyond a total redesign of the Web, focused around identification and moderation, that could undermine it as a venue for free expression.

“The Internet as a whole is not made to be censored. It was made to be resilient,” Brennan said. "And as long as there’s a contingent of people who like this content, it will never go away.”

A move to silence 8chan would clash with a key tenet of the Internet, enshrined in a landmark 1996 U.S. law, that allows Facebook, YouTube, Twitter and others to operate with minimal government interference. The Communications Decency Act sharply limits the legal liability of platforms for content their users post.

But 8chan’s content in the aftermath of last week’s shooting has renewed debate over whether the Internet’s freewheeling culture has gone too far — and whether sites that harbor talk of white-supremacist violence should face the same depth of government scrutiny that previously seemed reserved for chat rooms frequented by members of Islamist terrorist cells.

Federal authorities in the United States — mindful of constitutional protections for the free-speech rights of Americans and, in some cases, their links to mainstream political actors — have long been reluctant to gather intelligence among potential domestic terrorists in the same intrusive ways they do among foreign terrorist groups, said Clinton Watts, a senior fellow at the Foreign Policy Research Institute and a former FBI counterterrorism expert.

Although the alleged Christchurch shooter last week was an Australian and 8chan is operated from the Philippines, Watts said the site probably attracts Americans, making it part of one of the bureau’s legal blind spots in combating domestic terrorism.

“These domestic extremists are organizing in the same way” as foreign Muslim extremists, using websites to inspire bloodshed, radicalize believers and even plan assaults, he said. There was one key difference in the political and legal dynamics, however: “Domestic terrorists vote. Foreign terrorists don’t.”

It’s unclear just how closely law enforcement is surveying sites like 8chan already. The FBI said in a statement that, while “individuals often are radicalized by looking at propaganda on social media sites and in some cases may decide to carry out acts of violence … the FBI only investigates matters in which there is a potential threat to national security or a possible violation of federal law.”

Whack-a-mole?

Any move to crack down on sites that host conversation, no matter how loathsome, will confront the constitutional protections for free speech and the conviction among many experts that suppressing talk in one portion of the Internet will only prompt its growth elsewhere online.

There is an ever-growing number of technological options for evading government censors, obscuring identities, faking locations and posting identical copies of disfavored content, which makes any quest to crack down on perceived misbehavior daunting for authorities, if not impossible.

The spread of the shooting videos last week was a classic example: Even Facebook and YouTube were overmatched by human users, organized in part on 8chan, and were unable to block the images of mass murder for days. Both companies said afterward that they struggled to control the crush of uploads in the hours after the attack but were taking steps to prevent a recurrence.

“When you shut things down of that nature, another one springs up,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “What we’ve seen on 8chan is just on the surface.”

Yet there’s less disagreement that the New Zealand shootings — two deadly attacks on mosques, including one live-streamed on Facebook — fit classic definitions of terrorism, meaning that the act was calculated to inspire public fear and spread an ideology. The platforms that helped spread videos of the killings, such as 8chan, played a role in that act that went beyond mere exchange of free speech as commonly understood, experts in online extremism said.

Facebook’s former chief security officer, Alex Stamos, said the alleged gunman’s tactics mimicked those of the Islamic State: committing an act of attention-grabbing mass violence, then bolstering and shaping that attention through technological means.

“For all of his hatred of Muslims, he’s copying a Muslim supremacist organization,” Stamos said. “There’s a sad irony there.”

Stamos is wary of government tactics that smack of censorship: He has long argued that any power you give to liberal Western democracies will be used by illiberal authoritarians to block legitimate speech. But he favors more aggressive law-enforcement monitoring of any site where terrorist acts are being planned.

The FBI and other U.S. authorities for years have infiltrated the online sites of foreign terrorist organizations, as designated by the State Department, experts in political extremism said. This has included active monitoring of chats about jihadi themes, using false personas to engage potential terrorists in direct conversation and, in the most serious cases, taking action when violent plans appeared to be forming.

“Thanks to the efforts of the companies and law enforcement, potential ISIS supporters got to the point where they couldn’t trust anybody they met online,” Stamos said. “They discouraged the hobbyists and left only real supporters in some of these online groups.”
An anonymous audience for hate

The anonymity of 8chan is its most critical feature — there are no profiles or post histories for users, who call themselves “anons,” making it difficult to know how many people visit the site, who they are and whether their messages are legitimate threats or merely inflammatory posts intended to shock.

The site portrays itself as a beacon of free speech and says it deletes only posts that clearly violate U.S. law, such as those featuring copyrighted material or child pornography. Its most active forum, the “politically incorrect” board “/pol/,” features more than 12 million posts and runs rampant with images of disturbing violence, white-supremacist memes and far-right hate speech. Brennan estimates that more than 100,000 people visit the site every week.

8chan lists one administrator — Ron Watkins, the son of N.T. Technology owner Jim Watkins — and roughly a dozen programmers and “global volunteers.” Brennan said Jim Watkins owns other Internet businesses and has built a technical fortress to guard 8chan from potential takedowns: He owns nearly every component securing the site to the backbone of the Web, including its servers, which are scattered around the world.

“You can send a complaint, but no one’s going to do anything. He owns the whole operation,” Brennan said. “It’s how he keeps people confused and guessing.”

Watkins did not respond to repeated requests for comment.

The site’s only revenue comes from a small group of donors and advertisers whom Brennan estimates pay about $100 a month, which he said is not enough to cover the site’s expenses. But Watkins is content to lose money, Brennan said, because he sees it as a pet project: “8chan is like a boat to Jim. It doesn’t matter if it makes money. He just enjoys using it.”

The board has grown increasingly fanatical, Brennan said, as its user base of early trolls and Internet libertarians have ceded ground to the “committed Nazis” who now dominate the site. In previous mass shootings, he said, the board often fueled anti-Semitic conspiracy theories that painted the attacks as faked. The Christchurch shooting marked the first moment Brennan said that most users portrayed an attack as a point of pride and a step toward their goal of a global race war.

Posters have pushed each other to flood the New Zealand police email inboxes with images of gore and pornography, to widely distribute the gunman’s writing, and to spray-paint a neo-Nazi symbol onto “Muslim-run” schools and businesses. Many glorified the gunman as a “hero” and said they would hang posters around their neighborhoods of a meme showing the gunman with his rifle and manifesto in a messianic pose, a halo of sun around his helmet camera. “This guy is the only person I’ve ever truly admired/looked up to in my life,” one poster wrote.

Posters this week shared the names and addresses of religious centers they said they intended to target, as well as tips for future shooters on how to improve their videos for more “amazing kill shots … [and] details many of us are salivating for.” Links and memes of the gunman’s video and manifesto could be found virtually everywhere, as well as threats and eager calls to carry out more violence. “Invaders,” one poster wrote, had 90 days to leave the United States and other countries or “be executed on the spot.”

Some 8chan posters hinted at even more private gathering places online. When one poster who said he was a white nationalist “highly inspired” by the killings asked where the board’s plans were for “accelerating” the gunman’s plan, another poster wrote that “we don’t discuss that here” but at a site on the dark Web available only to those “that prove themselves.”

Brennan said 8chan is only the most visible corner of a vast network of privately organized sites that shelter and fuel extremist thought. And while he believes 8chan and sites like it should enforce stricter moderation for violent messages, he also worries about a broad shift toward censorship that could push people further into the digital shadows: sites on the dark Web, secret chat rooms and decentralized file-sharing networks that are even harder to monitor and shut down.

Brennan expects there will be another shooting because of 8chan, and he said he’s seen nothing from leaders there to suggest they would begin cracking down on incitements of brutality. Some of the people expected to moderate the site, he said, subscribe to extreme beliefs themselves. “It’s like having the lunatics run the asylum,” he said.

‘An extraordinary response’

The enduring extremism on 8chan reveals what experts say has become an existential crisis for the Web: how the empowering freedom of digital connectivity can rally the most dismal and dangerous viewpoints together, often anonymously and consequence-free.

It also highlights how even the biggest improvements from tech giants such as Facebook and YouTube, which have in recent days terminated hundreds of accounts “created to promote or glorify the shooter,” will do little to limit vile speech on a global stage.

The sites’ anonymity can have real-world impact. Public school campuses in Charlottesville closed for two days this week after threats of an “ethnic cleansing” at a high school there surfaced Wednesday on 4chan.

Internet service providers in Australia and New Zealand, which temporarily blocked access to 8chan, 4chan and other forum and video sites that hosted the shooting footage, showed one potential technical remedy. Telstra, Australia’s largest telecommunications company, said it took action following a request from the New Zealand government, which says sharing the content is a criminal offense. Nikos Katinakis, a top Telstra executive, said that while some sites have removed the content and seen their blocks lifted, 8chan remains blocked. “Extraordinary circumstances … required an extraordinary response,” he said in a statement.

8chan, however, is shielded in another way: the U.S. web-services giant Cloudflare, which helps websites guard against “distributed denial of service,” or DDoS, attacks that online vigilante groups have used to target 8chan in the past.

Cloudflare says that it helps 8chan and other websites regardless of their content, as long as they don’t violate U.S. laws, and that the company complies with court orders, works with law enforcement and bans terrorist propaganda networks and other groups on official sanction lists. Cloudflare would not discuss specific business or financial details about its relationship with 8chan.

After the Charlottesville riots, Cloudflare stopped working with the neo-Nazi site Daily Stormer, a ban that led Cloudflare chief Matthew Prince to later question whether he had set a dangerous political precedent.

Alissa Starzak, Cloudflare’s head of policy, said the role of policing should be left to the companies, governments or content moderators. She questioned the free-speech ramifications for revoking services from websites hosting content with which the company disagrees. “It’s still going to be on the Internet,” she said. “They might be more open to a DDoS attack, but is that the goal? A vigilante attack?”

Alice Crites and Devlin Barrett contributed to this report.
https://www.washingtonpost.com/techn...t-it-like-one/





Where to Draw the Line on Deplatforming

Facebook and YouTube were right to delete the video shot by the New Zealand shooter. Internet providers were wrong to try to do it, too.
April Glaser

After a shooter livestreamed himself killing 50 Muslim worshippers in New Zealand earlier this month, one of the places where footage of his broadcast lived on was 8chan—the same shadowy message board where he posted a manifesto and chillingly called his actions “a real life effort post.” While mainstream platforms like Facebook and YouTube mobilized to take down uploads of the video as it proliferated thousands, even millions, of times, 8chan left it up. So did its cousin 4chan (whose /pol/ board is a similar magnet for far-right trolls) as well as sites like the social network Voat, the video-hosting site LiveLeak, and the blog Zero Hedge. Because these places were not willing to remove the videos, New Zealand and Australia’s major internet service providers decided to take action: They blocked access to any website continuing to host the video. As of this week, 8chan appeared to still be blocked in New Zealand, but 4chan and Voat were again accessible, suggesting those blocks had been lifted.

It might seem obvious that these companies ought to block access to a video containing an act of horrific violence by whatever means possible. But the way it happened marks an unusual and worrisome moment. As a general principle, internet service providers aren’t supposed to erect barriers between the users they serve and the websites those users want to visit. They tend to observe this rule even in places like Australia and New Zealand that don’t have net neutrality policies that prevent ISPs from blocking access to websites. An exception tends to be when those takedowns come at the behest of law enforcement, perhaps out of concern for public safety. But the telecoms companies in New Zealand and Australia didn’t decide to kick these websites offline in collaboration with law enforcement. Rather, they felt that the blockages were simply the responsible thing to do. “We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content,” executives from Vodafone NZ, Spark, and 2degrees wrote in a joint statement.

What should be done about 4chan, 8chan, and other awful internet places whose ugliness spills into public view? It’s a question Americans have confronted as recently as last week, when public schools in Charlottesville, Virginia, shut down in response to an anonymous, invective-filled post on 4chan that threatened “ethnic cleansing” at a local high school. Other 4chan users encouraged the action, taunting, “School shooting tomorrow.” Police eventually found a 17-year-old whom they say wrote the post. On Gab, another social media site that attracts racist “free speech” enthusiasts, there is a policy against explicit calls for violence but no rules against hate speech, and so hate speech flourishes there. The Pittsburgh synagogue shooter was active on Gab, where he posted anti-Semitic missives and announced, “I’m going in” before killing 11 people during Saturday morning services in October.

A loose movement to push back against these spaces of hate has emerged since the 2016 election. Users of Twitter, Facebook, and YouTube have demanded the companies actually enforce their policies against hate speech—which they have tried to do, with varying degrees of enthusiasm and success. Elsewhere, some online services have deplatformed places that explicitly welcome hate speech (such as neo-Nazi havens the Daily Stormer and Stormfront), refusing to continue providing web hosting and security services. These are private firms, and few would ask governments to crack down on 8chan or its ilk—most of the time, governments should stay away from policing speech at all. But the example of New Zealand and Australia may offer a tempting place to turn instead. Internet providers appear to be one group that has the power to make these sites inaccessible. That doesn’t mean they should.

While it’s refreshing to see technology companies act swiftly to protect their users, in this case it’s also unsettling. Internet providers operate at a layer above websites, users, and even many infrastructural parts of the web. Many may argue that Facebook shouldn’t referee speech at all, but it’s clear to everyone that Facebook is within its rights to decide what conversations happen there. That’s why so many now make the persuasive case that it can do a better job moderating those conversations—which isn’t censorship. But it is censorship when ISPs, which are merely gateways to those conversations, try to take on hate speech or other content themselves. We don’t want ISPs making those calls.

Don’t confuse ISP blocks with other kinds of deplatforming. After the 2017 Unite the Right rally in Charlottesville, internet service companies like Cloudflare, Google, and GoDaddy stopped providing hosting and security to the Daily Stormer, a prominent neo-Nazi website where the event was organized. In this version of deplatforming, one or multiple companies decide to terminate their business relationship with another company, often resulting in an effective removal from the mainstream internet. This may happen because one firm’s terms of service were violated, like when a hosting company prohibits hate speech or when Facebook and YouTube removed Alex Jones’ channels last year. Or in the case of Cloudflare refusing service to the Daily Stormer, it was because, as CEO Matthew Prince put it, “I woke up this morning in a bad mood and decided to kick them off the Internet. … It was a decision I could make because I’m the CEO of a major Internet infrastructure company.” But even this capricious-sounding case is less of a problem than an ISP taking action. The Daily Stormer was a Cloudflare customer, and Prince decided to stop working with it. And while he’s right that Cloudflare provides infrastructural services for websites, it’s not an internet provider. It doesn’t run the tubes that the internet travels on.

It’s troubling when a company with concentrated power decides to stop doing business with a website, thereby curbing its reach. But it’s still that company’s choice to decide whom it does business with and how. The situation is different when that company doesn’t have a direct business relationship with websites but rather controls the lanes that deliver those websites to consumers.

It also might not work. “[An ISP] blocking 4chan or 8chan ignores the fact that many of the users of these sites are sophisticated enough to have access to VPNs and other ways of evading this censorship,” said Ethan Zuckerman, director of the Center for Civic Media at MIT. It’s possible, for example, to access content blocked in your country by using a virtual private network, which allows users to route their internet access through servers in other locations. “Communities like 4chan and 8chan include many people interested in accessing forbidden content, violating national copyright restrictions,” Zuckerman said. “It’s hard to think of a community where a technical means of blocking access is less likely to work.”

Censorship also calls attention to what’s being censored, since people will probably be curious about what was blocked—but in this case without a good reason steeped in company policy. And when an internet provider blocks a website full of hate, users of that site can cry censorship, politicizing and potentially strengthening their community. “Blocking like this allows people to say, ‘Our speech is being censored,’ and therefore you are riling up a community who can go elsewhere on the web, and then they’re connecting and congregating around a ‘We’ve had our voices silenced’ line. And then there becomes kind of a victim narrative here that can act as a recruiting mechanism,” said Claire Wardle, a TED fellow and executive director of First Draft, an organization that helps journalists and researchers find and study disinformation. These users would have a point. As disturbing as parts of 4chan and 8chan are, plenty of corners on those sites aren’t used for hate. 4chan has thriving message boards devoted to anime, video games, pornography, and advice topics. Blocking an entire website silences those groups, too—users who may not share the views of the hate groups on 4chan but who would still oppose blocking the site. This is the case with any kind of blunt blocking: more people will be affected than are at fault.

The internet service providers’ blocking appears to have happened outside of any specific policy covering these kinds of crisis situations. Vodafone told me that it was unblocking websites once content from the Christchurch shooting had been removed, but it wouldn’t say which sites were blocked so as not to call further attention to those websites. The whole situation appears to be slightly, if not largely, ad hoc, said Rebecca MacKinnon, director of Ranking Digital Rights, a project that tracks how internet companies around the world protect freedom of expression and user privacy. “There’s no transparency about their policy for this kind of emergency situation,” MacKinnon told me. “They just kind of ad hoc decided this and didn’t appear to have a prior policy for what they might do in serious emergency and exceptional situations.” To block parts of the web so opaquely sets a troubling precedent. We may agree with what internet providers are blocking now, since we all agree a shooter’s footage of his violent attack shouldn’t spread. But in the future, the lines might not be as clear. In 2005, for example, the Canadian telecom Telus blocked access to a communication workers union website that promoted a labor strike against the internet provider. This is why net neutrality has become such an important principle: Internet providers shouldn’t be able to decide what users can and cannot see without oversight.

The Christchurch video is horrific. Platforms—especially massive, popular ones that attract hundreds of millions of users—should do everything they can to keep their communities safe. But the overly broad blocking of entire websites by internet providers, which operate at several layers above the platforms, isn’t going to make the horror disappear. It could strengthen these communities—and assign unnecessary powers to companies that no one asked to do the dirty work.
https://slate.com/technology/2019/03...k-youtube.html





India’s Bans on Most Popular Game Show Fear of Creating ‘Psychopaths’
Saritha Rai

India doesn’t have much of a history with popular computer games, unlike the U.S. or Japan. But now one of the industry’s kill-or-be-killed titles has become a smash hit -- and the backlash from the country’s traditionalists is ferocious.

PlayerUnknown’s Battlegrounds is a Hunger Games-style competition where 100 players face off with machine guns and assault rifles until only one is left standing. After China’s Tencent Holdings Ltd. introduced a mobile version of the death match that’s free to play, it has become the most popular smartphone game in the world, with enthusiasts from the U.S. to Russia to Malaysia.

Nowhere has resistance to the game been quite like India. Multiple cities have banned PUBG, as it’s known, and police in Western India arrested 10 university students for playing. The national child rights commission has recommended barring the game for its violent nature.

One of India’s largest Hindi newspapers declared PUBG an “epidemic” that turned children into “manorogi,” or psychopaths. “There are dangerous consequences to this game,” the Navbharat Times warned in a March 20 editorial. “Many children have lost their mental balance.”

Computer games have outraged parents and politicians for at least 20 years, since Grand Theft Auto first let players deal drugs, pimp out prostitutes and kill off strangers to steal their cars. Just last year, China went through its most serious crackdown on games, freezing approval of new titles and stepping up scrutiny of addiction and adverse health effects.

What’s different about India is the speed with which the country has landed in the strange digital world of no laws or morals. It skipped two decades of debate and adjustment, blowing into the modern gaming era in a matter of months. Rural communities that never had PCs or game consoles got smartphones in recent years -- and wireless service just became affordable for pretty much everyone after a price war last year.

With half a billion internet users looking for entertainment, PUBG has set off a frenzy. A student competition in the southern city of Hyderabad received 250,000 registrations from more than 1,000 colleges. One team walked away with a 1.5 million rupee ($22,000) prize as the top PUBG players, just days before this month’s arrests.

Aryaman Joshi, 13, has played PUBG for a few hours each day and says all his friends play too. "It’s a bit violent and there’s a lot of shooting so boys like me like it," he said. His mother, Gulshan Walia, says she wants to take a realistic approach to Aryaman’s game playing.

That kind of demand hints at India’s potential as a gaming market. It’s tiny today, generating a minuscule $290 million in revenue. But it’s already the world’s second-largest smartphone market, after China, and the fastest-growing.

“PUBG has made the online gaming market soar and demonstrated that India is a very attractive market,” said Lokesh Suji, the Gurgaon-based head of the Esports Federation of India.

As long as the authorities don’t choke it off first. Local politicians, parents and teachers have expressed outrage over PUBG, arguing the game will spur violence and divert students from their academics. They’ve blamed the game for bullying, stealing and, in one Mumbai case, a teenager’s suicide. A local minister went so far as to characterize it as “the demon in every house.”

At a public meeting last month, a concerned mother complained to Prime Minister Narendra Modi about her son’s addiction to mobile games. “Is that the PUBG one?” Modi shot back. One 11-year-old even filed a public interest lawsuit in a Mumbai court seeking a ban on the game.

South Korea’s Bluehole Inc., which made the original PUBG for PCs and then partnered with Tencent on the mobile version, has taken a cautious approach. The company said it was looking at the legal basis of the bans in various cities and will confer with authorities to find a solution. “We are working on the introduction of a healthy gameplay system in India to promote balanced, responsible gaming, including limiting play time for under-aged players,” the company said.

Because gaming is so new in India, there are no regulatory policies in place. In contrast, Tencent currently bans players in China under 13 from playing PUBG and imposes restrictions such as real-name registrations. In Germany, players under 16 are restricted.

A clinic for breaking digital addictions, run by the country’s National Institute of Mental Health and Neuro Sciences in Bangalore, is recording several PUBG addiction cases every week. An 11-year-old PUBG player walked into the clinic recently with his parents, who lamented that he wanted to quit school to become a professional PUBG gamer.

Dr. Manoj Sharma, who heads the clinic, argues game makers need to take more responsibility. “There should be a ban on underage players,” he said. “The addiction has reached never-before proportions.”
https://news.yahoo.com/india-bans-mo...220000808.html





Human Contact Is Now a Luxury Good

Screens used to be for the elite. Now avoiding them is a status symbol.
Nellie Bowles

Bill Langlois has a new best friend. She is a cat named Sox. She lives on a tablet, and she makes him so happy that when he talks about her arrival in his life, he begins to cry.

All day long, Sox and Mr. Langlois, who is 68 and lives in a low-income senior housing complex in Lowell, Mass., chat. Mr. Langlois worked in machine operations, but now he is retired. With his wife out of the house most of the time, he has grown lonely.

Sox talks to him about his favorite team, the Red Sox, after which she is named. She plays his favorite songs and shows him pictures from his wedding. And because she has a video feed of him in his recliner, she chastises him when she catches him drinking soda instead of water.

Mr. Langlois knows that Sox is artifice, that she comes from a start-up called Care.Coach. He knows she is operated by workers around the world who are watching, listening and typing out her responses, which sound slow and robotic. But her consistent voice in his life has returned him to his faith.

“I found something so reliable and someone so caring, and it’s allowed me to go into my deep soul and remember how caring the Lord was,” Mr. Langlois said. “She’s brought my life back to life.”

Sox has been listening. “We make a great team,” she says.

Sox is a simple animation; she barely moves or emotes, and her voice is as harsh as a dial tone. But little animated hearts come up around her sometimes, and Mr. Langlois loves when that happens.

Mr. Langlois is on a fixed income. To qualify for Element Care, a nonprofit health care program for older adults that brought him Sox, a patient’s countable assets must not be greater than $2,000.

Such programs are proliferating. And not just for the elderly.

Life for anyone but the very rich — the physical experience of learning, living and dying — is increasingly mediated by screens.

Not only are screens themselves cheap to make, but they also make things cheaper. Any place that can fit a screen in (classrooms, hospitals, airports, restaurants) can cut costs. And any activity that can happen on a screen becomes cheaper. The texture of life, the tactile experience, is becoming smooth glass.

The rich do not live like this. The rich have grown afraid of screens. They want their children to play with blocks, and tech-free private schools are booming. Humans are more expensive, and rich people are willing and able to pay for them. Conspicuous human interaction — living without a phone for a day, quitting social networks and not answering email — has become a status symbol.

All of this has led to a curious new reality: Human contact is becoming a luxury good.

As more screens appear in the lives of the poor, screens are disappearing from the lives of the rich. The richer you are, the more you spend to be offscreen.

Milton Pedraza, the chief executive of the Luxury Institute, advises companies on how the wealthiest want to live and spend, and what he has found is that the wealthy want to spend on anything human.

“What we are seeing now is the luxurification of human engagement,” Mr. Pedraza said.

Anticipated spending on experiences such as leisure travel and dining is outpacing spending on goods, according to his company’s research, and he sees it as a direct response to the proliferation of screens.

“The positive behaviors and emotions human engagement elicits — think the joy of a massage. Now education, health care stores, everyone, is starting to look at how to make experiences human,” Mr. Pedraza said. “The human is very important right now.”

This is a swift change. Since the 1980s personal computer boom, having technology at home and on your person had been a sign of wealth and power. Early adopters with disposable income rushed to get the newest gadgets and show them off. The first Apple Mac shipped in 1984 and cost about $2,500 (in today’s dollars, $6,000). Now the very best Chromebook laptop, according to Wirecutter, a New York Times-owned product reviews site, costs $470.

“Pagers were important to have because it was a signal that you were an important, busy person,” said Joseph Nunes, chairman of the marketing department at the University of Southern California, who specializes in status marketing.

Today, he said, the opposite is true: “If you’re truly at the top of the hierarchy, you don’t have to answer to anyone. They have to answer to you.”

The joy — at least at first — of the internet revolution was its democratic nature. Facebook is the same Facebook whether you are rich or poor. Gmail is the same Gmail. And it’s all free. There is something mass market and unappealing about that. And as studies show that time on these advertisement-supported platforms is unhealthy, it all starts to seem déclassé, like drinking soda or smoking cigarettes, which wealthy people do less than poor people.

The wealthy can afford to opt out of having their data and their attention sold as a product. The poor and middle class don’t have the same kind of resources to make that happen.

Screen exposure starts young. And children who spent more than two hours a day looking at a screen got lower scores on thinking and language tests, according to early results of a landmark study on brain development of more than 11,000 children that the National Institutes of Health is supporting. Most disturbingly, the study is finding that the brains of children who spend a lot of time on screens are different. For some kids, there is premature thinning of their cerebral cortex. In adults, one study found an association between screen time and depression.

A toddler who learns to build with virtual blocks in an iPad game gains no ability to build with actual blocks, according to Dimitri Christakis, a pediatrician at Seattle Children’s Hospital and a lead author of the American Academy of Pediatrics’ guidelines on screen time.

In small towns around Wichita, Kan., in a state where school budgets have been so tight that the State Supreme Court ruled them inadequate, classes have been replaced by software, with much of the academic day now spent in silence on a laptop. In Utah, thousands of children do a brief, state-provided preschool program at home via laptop.

Tech companies worked hard to get public schools to buy into programs that required schools to have one laptop per student, arguing that it would better prepare children for their screen-based future. But this idea isn’t how the people who actually build the screen-based future raise their own children.

In Silicon Valley, time on screens is increasingly seen as unhealthy. Here, the popular elementary school is the local Waldorf School, which promises a back-to-nature, nearly screen-free education.

So as wealthy kids are growing up with less screen time, poor kids are growing up with more. How comfortable someone is with human engagement could become a new class marker.

Human contact is, of course, not exactly like organic food or a Birkin bag. But with screen time, there has been a concerted effort on the part of Silicon Valley behemoths to confuse the public. The poor and the middle class are told that screens are good and important for them and their children. There are fleets of psychologists and neuroscientists on staff at big tech companies working to hook eyes and minds to the screen as fast as possible and for as long as possible.

And so human contact is rare.

“But the holdup is this: Not everyone wants it, unlike other kinds of luxury products,” said Sherry Turkle, professor of the social studies of science and technology at the Massachusetts Institute of Technology.

“They flee to what they know, to screens,” Ms. Turkle said. “It’s like fleeing to fast food.”

Just as skipping fast food is harder when it’s the only restaurant offering in town, separating from screens is harder for the poor and middle class. Even if someone is determined to be offline, that is often not possible.

Coach seat backs have screen ads autoplaying. Public school parents might not want their kids learning on screens, but that is not an option when many classes are now built on one-to-one laptop programs. There is a small movement to pass a “right to disconnect” bill, which would allow workers to turn their phones off, but for now a worker can be punished for going offline and not being available.

There is also the reality that in our culture of increasing isolation, in which so many of the traditional gathering places and social structures have disappeared, screens are filling a crucial void.

Many enrolled in the avatar program at Element Care were failed by the humans around them or never had a community in the first place, and they became isolated, said Cely Rosario, the occupational therapist who frequently checks in on participants. Poor communities have seen their social fabric fray the most, she said.

The technology behind Sox, the Care.Coach cat keeping an eye on Mr. Langlois in Lowell, is quite simple: a Samsung Galaxy Tab E tablet with an ultrawide-angle fisheye lens attached to the front. None of the people operating the avatars are in the United States; they mostly work in the Philippines and Latin America.

The Care.Coach office is a warrenlike space above a massage parlor in Millbrae, Calif., on the edge of Silicon Valley. Victor Wang, the 31-year-old founder and chief executive, opens the door, and as he’s walking in he tells me that they just stopped a suicide. Patients often say they want to die, he said, and the avatar is trained to then ask if they have an actual plan of how to do it, and that patient did.

The voice is whatever the latest Android text-to-speech reader is. Mr. Wang said people can form a bond very easily with anything that talks with them. “Between a semi-lifelike thing and a tetrahedron with eyeballs, there’s no real difference in terms of building a relationship,” he said.

Mr. Wang knows how attached patients become to the avatars, and he said he has stopped health groups that want to roll out large pilots without a clear plan, since it is very painful to take away the avatars once they are given. But he does not try to limit the emotional connection between patient and avatar.

“If they say, ‘I love you,’ we’ll say it back,” he said. “With some of our clients, we’ll say it first if we know they like hearing it.”

Early results have been positive. In Lowell’s first small pilot, patients with avatars needed fewer nursing visits, went to the emergency room less often and felt less lonely. One patient who had frequently gone to the emergency room for social support largely stopped when her avatar arrived, saving the health care program an estimated $90,000.

Humana, one of the country’s largest health insurers, has begun using Care.Coach avatars.

For a sense of where things could be headed, look to the town of Fremont, Calif. There, a tablet on a motorized stand recently rolled into a hospital room, and a doctor on a video feed told a patient, Ernest Quintana, 78, that he was dying.

Back in Lowell, Sox has fallen asleep, which means her eyes are closed and a command center somewhere around the world has tuned into other seniors and other conversations. Mr. Langlois’s wife wants a digital pet, and his friends do too, but this Sox is his own. He strokes her head on the screen to wake her up.
https://www.nytimes.com/2019/03/23/s...y-screens.html





How America’s Biggest Theater Chains Are Exploiting Their Janitors
Gene Maddaus

Every night, after the last show ended at the AMC theater in Santa Monica, Maria Alvarez arrived at work.

She and her husband had a key to let themselves in. It was after midnight, and the building was empty. Together, they cleaned all seven auditoriums. They vacuumed the carpets and mopped the floors. They cleaned the bathrooms and restocked the toilet paper. They polished the escalators and scrubbed the glass concession cases.

They finished after sunrise. On weekends, when the theaters were especially dirty, they stayed later, until 9:30 a.m. Alvarez worked seven days a week. There were no days off, no sick days, no holidays.

“The day my son passed away, I asked for the day, and they did not want to give it to me,” she said through tears during a labor hearing in 2017.

Alvarez cleaned theaters for two and a half years. She was paid $300 a week — or about $5 an hour.

Filmmakers often speak of the magic that can happen only in a movie theater. As ticket sales have stagnated and Netflix has taken off, the industry has become increasingly protective of the “theatrical experience.”

But maintaining that experience depends on workers like Alvarez, who are grossly underpaid, overworked and easily expendable.

The major chains — AMC, Regal Entertainment and Cinemark — no longer rely on teenage ushers to keep the floors from getting sticky. Instead, they have turned to a vast immigrant workforce, often hired through layers of subcontractors. That arrangement makes it almost impossible for janitors to make a living wage.

Alvarez got hurt on the job, and a doctor recommended a lighter workload. When she made that request in April 2015, she was fired. The following year, she filed a California Labor Commission claim for unpaid wages, including overtime. The hearing officer awarded her $80,000 in back pay and penalties. But Alvarez could not collect. She did not work directly for AMC or its janitorial contractor, ACS Enterprises, which shielded them from liability. Instead, she worked for a couple — Alfredo Dominguez and Caritina Diaz — who had not even shown up to the hearing.

Even Dominguez and Diaz didn’t consider her an actual employee. In their minds, she was a contractor of a contractor of a contractor of AMC Theatres. AMC and ACS did send an attorney to fight her wage claim. In the end, the companies agreed to pay her $3,500 to go away.

Over the last eight months, Variety has investigated wage complaints from movie theater janitors across the country, reviewing class-action lawsuits, state labor commission records and investigations by the U.S. Department of Labor. A clear pattern emerged: AMC and other theater chains keep their costs down by relying on janitorial contractors that use subcontracted labor. Those janitors typically have no wage or job protections, toiling on one of the lowest rungs of the U.S. labor market.

It is customary for janitors to work all night long. Some workers told Variety that they had seen parents bring their young children to work, letting them sleep on the floor or in the theater seats. To make the job go faster, some janitors use leaf blowers to clear popcorn and wrappers out of the aisles. But the blowers leave dust on the speakers and screens, and most theaters have banned them. Instead, janitors typically go row by row with backpack vacuums. They wipe salt off the seats and clean soda stains out of the cup holders.

“This is so much like agricultural workers. They’re literally walking down rows the way agricultural workers do,” says Brandt Milstein, an attorney who filed a class-action suit in Colorado on behalf of Cinemark janitors.

The theater chains are largely immune from legal repercussions. Because they do not directly employ janitors, they are typically excluded from class-action wage cases. But some in the janitorial business say the chains are fully aware of what’s going on and are ultimately responsible.

“The theater companies are super cheap,” says one janitorial company executive who did not want to be identified to protect business relationships. “A lot of these guys, they don’t care if you use slave labor.”

Based in Leawood, Kan., AMC is the country’s largest theater chain, with 637 locations. According to interviews and documents obtained by Variety, AMC used to employ more than 100 companies to provide janitorial service. But several years ago the company adopted a “national partner model,” ultimately scaling down to just two providers nationwide: ACS, based in Pomona, Calif., and KBM, based in Hendersonville, Tenn.

The chain leveraged its size and its pricing power to save money. Brian Mullady was AMC’s director of procurement at the time. On his LinkedIn resume, he says the consolidation saved the company $8 million a year — or 26% of its janitorial costs.

Regal, the second-largest U.S. chain with 558 locations, attempted a similar consolidation but found that the service providers could not properly manage their workers, says Christopher Blevins, a former VP of operations for the company. Instead, Regal has 15 district managers who usually seek competitive bids for janitorial service.

According to an email obtained by Variety, AMC does not take bids. Instead, it tells its national contractors what it will pay based on an internal formula. Mullady, who did not respond to requests for comment, sent the email in 2016.

“We have a model in place that determines pricing for all of our contracted locations,” he wrote. “We do not go through a bidding process, all pricing is determined by AMC.”

AMC spokesman Ryan Noonan says in a statement that janitorial service providers are contractually obliged to abide by state and federal employment law.

In response to other questions, he adds, “I believe your story is not about AMC, but about ACS, which works with several theatrical exhibitors.”

ACS was founded in 2003 by brothers Jose and Raul Alvarado. According to his LinkedIn profile, Jose Alvarado started in the business working as an engineer for AMC Theatres. ACS provides janitors as well as an array of engineering services. According to the company’s website, it works with 20 smaller chains, including Pacific Theatres, Arclight and Regency, in addition to all three major chains.

ACS did not respond to requests for comment; neither did Regal or Cinemark. ACS's competitors believe that it serves the majority of AMC's locations nationwide. Despite its vast footprint, the company claims it has only 16 to 18 employees, according to Jose Nuñez, the company's operations director. The janitors are all considered subcontractors.

“When a theater chain approaches us for janitorial service, we will relay that to a contractor,” Nuñez testified at Alvarez’s labor commission hearing. “We’ll be that middle person between the work that is needed and a contractor that can provide it.”

Buperto Brigido was an ACS janitor for 11 years. He worked seven days a week, eight to 10 hours a day, cleaning a 12-screen theater sometimes on his own. He was paid between $700 and $900 every two weeks.

At one point, he asked owner Raul Alvarado for a raise.

“He said AMC doesn’t pay more,” Brigido tells Variety, adding that his requests for sick days and holidays were also denied. “He said AMC doesn’t pay for any of the things I’m talking about.”

This business model was pioneered by Rob Winters, who owned Winters Janitorial, later known as Coast to Coast. Founded in 1996, the company was the first to contract with theaters on a national scale.

“Before me, it was all done in-house with the kids,” he tells Variety.

Winters was based in Kansas City, and worked for AMC, Regal and other chains. At its height, the company had 500 to 600 locations, including hotels and restaurants, according to Doug Schlueter, who was the company’s VP of sales. Others heard Winters had about 400 theaters. Winters claims the figure was much lower, about 100.

Rivals say that Winters succeeded by subcontracting. By eliminating workers' compensation and payroll tax expenses, he could lowball on price. Like ACS, he thought of himself as a middleman, connecting theaters with janitors without employing them.

“I didn’t hire anybody direct,” he says. “I don’t know what they did, other than clean or not clean. I would keep my percentage, and that’s what I ran with.”

Winters’ company was the target of repeated investigations by the U.S. Department of Labor. In 2013, a federal investigator met with Winters at his office in Arlington, Texas, and informed him that the government believed the janitors were employees, and that he owed them $286,000 in back wages.

“When asked why they had misclassified these employees as independent contractors, Mr. Winters advised that it is because of the way their competitors operate,” a DOL investigator wrote. Winters argued that he had no direct supervision of the workers. He refused to pay.

In 2014, the Maintenance Cooperation Trust Fund, a union-affiliated watchdog group in Los Angeles, launched an investigation of Winters’ practices. The probe led to a $1.8 million state citation for unpaid wages to 43 employees. Winters was barred from doing business in California.

In August 2016, he declared bankruptcy in Texas. His employees were never paid.

Winters still cleans theaters, specializing in carpets, though he is no longer a major player. It wasn’t investigations that did him in. He claims he was forced out by competition from companies that relied on undocumented workers.

“There were people working for wages that I couldn’t compete with,” he says. “When you try to play straight, it makes it difficult.”

At the time of the federal investigation, the Department of Labor was also looking at other janitorial companies that work in theaters, including ACS; Simply Right in Ogden, Utah; and One Stop Personnel Services in Frisco, Texas.

Even when investigators found violations, penalties were minor. At theaters in the Rio Grande Valley in 2017, investigators found that One Stop failed to pay $240,379 to 19 janitors. The company denied the allegations and refused to pay.

In 2011, an investigation of an ACS subcontractor found that the company owed $65,987 in back wages to 32 employees. Many of the employees said they actually worked for ACS, though ACS denied it. The investigators found that ACS was a joint employer, and it agreed to make sure the contractor would pay.

Investigators also looked at ACS contractors in New York and the Pacific Northwest, though nothing came of either probe.

In 2018, Buperto Brigido and three other janitors filed a class-action suit against AMC and ACS in Los Angeles. The suit, which is pending, alleges that ACS and AMC are joint employers and owe potentially millions in back wages to underpaid janitors across California.

Some janitorial companies think subcontracting is not worth the legal risks. APCS365, of Northbrook, Ill., switched from a contractor model to hiring all of its janitors directly.

“It wasn’t an easy change,” says CEO Marina Kohen, noting that costs went up by 25%. “But you get better accountability.”

In the retail sector, advocates pressured companies like Target and Best Buy to adopt a “responsible contractor” policy, which mandates worker protections. Some argue that theater chains should adopt a similar policy.

After Rob Winters was barred from California, ACS took over many of his accounts, including the Regal theater at L.A. Live in downtown Los Angeles. Working conditions remained much the same, says Georgina Hernandez, who was a janitor there. She worked seven days a week, sometimes 10-11 hours a day, and was paid $400 a week. She quit after two years and took a job cleaning offices.

“I don’t know what Hell is like, but I think it would be like that,” she tells Variety. “Sometimes I was crying because my feet couldn’t take it anymore. My back couldn’t take it anymore. I didn’t know how I could finish the work I had to do.”
https://www.newstimes.com/entertainm...e-13719956.php





The Napster Era Still an Issue for Some Security Clearance Holders
Sean Bigley

This year marks the twentieth anniversary of the founding of NAPSTER – the infamous (and ground-breaking) peer-to-peer file-sharing service that was ultimately shut down for copyright infringement.

These days, digital music and movies are as ubiquitous as boy bands, baggy pants, and spiked hair were when NAPSTER first exploded onto the budding internet in 1999. But the public’s desire for free digital content hasn’t abated, and many of the tech wizards who were enthralled with NAPSTER in their teen years have continued pirating copyrighted content – some of them while possessing a security clearance.

Perhaps unsurprisingly, this is an adjudicative issue we see with particular frequency at places like the NSA and the military’s cyber command. And while downloading some pirated movies or music off the internet isn’t exactly the crime of the century, it is still something these agencies take quite seriously. NSA in particular pursues any computer-related misbehavior on the part of its security clearance holders or applicants with vigor – ostensibly under the theory that someone who engages in such activity at home may be more likely to violate computer security rules in the workplace.

Questions about this area seem to arise most commonly during polygraph examinations, with highly aggressive examiners demanding that the examinee estimate on the spot the total volume – and the value – of pirated content downloaded during his or her lifetime. That would be difficult for prolific downloaders or older individuals to do even under ideal circumstances; under the stress of a security interview, where hesitation is sometimes viewed as deception, it often produces grossly inflated numbers.

Pirated Content? Delete Before You Apply

With that in mind, those who have recently engaged in such activity and/or still have pirated content from some time ago on their personal devices should steer clear of applying to NSA, Cyber Command, or other agencies with a heavy computer emphasis until such time as they have destroyed all pirated content and waited an appropriate period of time to demonstrate that the behavior is unlikely to recur. There is no bright-line rule as to how long is long enough to demonstrate reformation, but we’ve had clients denied clearances for downloading pirated content two to three years prior. Ideally, a minimum of three years’ wait is probably advisable.

These cases can be challenging to win, but they are not by any means impossible to win with the right legal defense and an appropriate passage of time. Applicants with illegal downloading in their background should strongly consider working with a qualified attorney to craft a comprehensive case of mitigation before applying or reapplying for a clearance.
https://news.clearancejobs.com/2019/...rance-holders/

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

March 23rd, March 16th, March 9th, March 2nd

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing