P2P-Zone  

20-09-17, 07:27 AM   #1
JackSpratts

Join Date: May 2001
Location: New England
Posts: 10,013

Peer-To-Peer News - The Week In Review - September 23rd, ’17

Since 2002


"The results do not show robust statistical evidence of displacement of sales by online copyright infringements." – Martin van der Ende, et al

September 23rd, 2017




EU Paid for a Report that Concluded Piracy Isn’t Harmful — and Tried to Hide the Findings
Már Másson Maack

Back in 2014, the European Commission paid the Dutch consulting firm Ecorys 360,000 euros (about $428,000) to research the effect piracy had on sales of copyrighted content. The final report was finished in May 2015, but for some reason it was never published, according to the blog of Julia Reda, the only Pirate Party MEP in the EU Parliament.

What the @EU_Commission found out about #copyright infringement but ‘forgot’ to tell us https://t.co/Sxshdxy3KZ pic.twitter.com/Vk4Q74k1Hv

— Julia Reda (@Senficon) September 20, 2017

The 300-page report seems to suggest that there’s no evidence that supports the idea that piracy has a negative effect on sales of copyrighted content (with some exceptions for recently released blockbusters). The report states:

In general, the results do not show robust statistical evidence of displacement of sales by online copyright infringements. That does not necessarily mean that piracy has no effect but only that the statistical analysis does not prove with sufficient reliability that there is an effect. An exception is the displacement of recent top films. The results show a displacement rate of 40 per cent which means that for every ten recent top films watched illegally, four fewer films are consumed legally.

The report doesn’t settle the debate definitively for all “online copyright infringements” (piracy), but it points to conclusions similar to those of previous studies. It’s therefore not clear whether the study will have a big impact on the current debate, but the delay in its publication raises serious questions.

On her blog, Julia Reda says that a report like this is fundamental to discussions about copyright policies — where the general assumption is usually that piracy has a negative effect on rightsholders’ revenues. She also criticizes the Commission’s reluctance to publish the report and says it probably wouldn’t have been released for several more years if it weren’t for the access to documents request she filed in July.

TNW reached out to Reda to ask her why she thought the Commission hadn’t published the report earlier. In a written response, Reda said:

At first I was willing to give the Commission the benefit of the doubt that the study had simply fallen through the cracks, since the responsible department underwent significant restructuring in 2014, after the study was commissioned.

However, now all available evidence suggests that the Commission actively chose to ignore the study except for the part that suited their agenda: In an academic article published in 2016, two European Commission officials reported a link between lost sales for blockbusters and illegal downloads of those films. They failed to disclose, however, that the study this was based on also looked at music, ebooks and games, where it found no such connection. On the contrary, in the case of video games, the study found the opposite link, indicating a positive influence of illegal game downloads on legal sales.

That demonstrates that the study wasn’t forgotten by the Commission altogether.

They also failed twice to meet the deadline for responding to my freedom of information request.

One cannot avoid the suspicion that the Commission intentionally suppressed the publication of publicly-funded research because the facts discovered were inconvenient to their political agenda.


The report could’ve provided valuable grounding for any debate regarding copyright issues. This is especially serious when considering the EU’s upcoming copyright reform. The reform is extremely disputed, with some even flat out calling it a ‘dysfunctional proposal.’ But does Reda believe that the report would’ve affected the controversial copyright overhaul?

It’s hard to say whether this study would have affected the upcoming copyright reform. It’s not the first study that calls into question the conventional wisdom that copyright infringement is always bad for business, and sadly, academic evidence doesn’t always impact policy making as directly as one would hope.

This is currently being demonstrated in the debate about the Commission’s plans for upload filters for internet platforms and an extra copyright for news sites: Too many politicians are ignoring the overwhelming academic consensus that these plans would do much more harm than good.

Despite all lip-service paid to supposedly evidence-based “better regulation”, industry lobbying and ideology appears to still have an outsized influence on the Commission’s lawmaking.


TNW also reached out to the authors of the report and its contact person at the EU Commission. Those we reached declined to comment. The story will be updated if any of the contacted persons decide to comment.
https://thenextweb.com/eu/2017/09/21...hide-findings/





Publishers’ Legal Action Advances Against Sci-Hub

The pirate site plans to ignore the lawsuits from Elsevier and the American Chemical Society.
Diana Kwon

Sci-Hub, a widely-used website that provides access to pirated academic articles, is facing legal challenges from two major publishers—Elsevier and the American Chemical Society (ACS). The site, which was established by former neuroscientist Alexandra Elbakyan in 2011 and is operated out of Russia, hosts millions of scientific documents and has users all around the globe.

On Friday (September 22), a hearing for ACS’s case against Sci-Hub will take place at a federal trial court in Virginia. The society filed a default judgement request on September 1, asking the court to order the site to cease illegal distribution of its material and pay $4.8 million in damages.

ACS brought the case against Sci-Hub for unlawfully disseminating its content in June, a few days after the publishing giant Elsevier won a default legal judgment against Sci-Hub and the Library Genesis Project (LibGen), another pirate site. A New York district court, which ruled that these sites violated US copyright laws, ordered them to pay the publisher $15 million in damages.

“I don’t blame [ACS and Elsevier] for trying,” says Peter Suber, the director of the Harvard Office for Scholarly Communication, which facilitates the university’s adoption of open access policies. “Sci-Hub is violating their copyrights . . . and it’s not just a small player any more.”

The complaints

A 2016 investigation by Science revealed that between September 2015 and February 2016, Sci-Hub received 28 million download requests from all around the world. Elsevier and ACS were among the most downloaded publishers, along with Springer Nature, Wiley Blackwell, and the Institute of Electrical and Electronics Engineers. According to Elbakyan, Sci-Hub’s founder and operator, the site currently hosts approximately 65 million research-related documents that have a digital object identifier (DOI), including journal articles, book chapters, and conference proceedings.

“Sci-Hub is stealing ACS copyrighted content and illegally reproducing and disseminating it on their website and via spoofed websites that mirror ACS’ own website, and Sci-Hub is counterfeiting and infringing on the Society’s trademarks,” Glenn Ruskin, the director of ACS External Affairs and Communications, writes in an email to The Scientist.

In addition to receiving damages from Sci-Hub and stopping the site from illegally distributing ACS material, the society states in its suit that it wants “any Internet search engines, web hosting and Internet service providers (ISPs), domain name registrars, and domain name registries, to cease facilitating access to any or all domain names and websites through which [Sci-hub] engages in unlawful access to, use, reproduction, and distribution of [ACS content].”

“In other words, they want information about Sci-Hub to be ‘censored,’” Elbakyan writes in an email to The Scientist. “Most visitors are coming to the website directly, not through the search engines. So I do not think that will have much effect on the website operation per se.”

Daniel Himmelstein, a biodata scientist and postdoc at the University of Pennsylvania, says that it is unlikely that a court in the U.S. will order ISPs to block access to Sci-Hub. He notes that a similar request made by Elsevier was opposed in 2015 by the Computer & Communications Industry Association and the Internet Commerce Coalition—two groups representing major global technology companies.

A publicity backfire

“The lawsuits are interesting because the effect of them is unclear,” Himmelstein says. Sci-Hub has neither paid the $15 million in damages to Elsevier nor ceased its services. Elbakyan, who lives outside U.S. jurisdiction, says the site plans to ignore the lawsuits. And although Elsevier was able to successfully shut down sci-hub.org, the domain name under which the site was launched, Sci-Hub quickly returned under multiple web addresses, through which it continues to thrive today.

In fact, these legal challenges may be leading more potential users to the site. Earlier this year, Himmelstein and his colleagues published a PeerJ preprint that found, based on data from Google Trends, that the suits against Sci-Hub led to brief spikes in visits to the site.

“[The] Elsevier lawsuit resulted in [the] project gaining publicity and some kind of recognition,” Elbakyan writes. “So I sometimes wonder if they wanted actually to promote Sci-Hub this way.”

When contacted for comment, Elsevier deferred the request to the Association of American Publishers (AAP). Regarding potential further action if Sci-Hub fails to comply with the court’s orders, John Tagler, AAP’s vice president and executive director of professional and scholarly publishing, writes in an email to The Scientist that he “can’t speak to Elsevier’s plans.” However, he says, the AAP task force “may move forward with action in other jurisdictions, including in partnership with other trade groups located where Sci-Hub and its partners are operating.”

In the past, publishers have also tried combating piracy with technological solutions. A few years ago, for example, ACS and the publisher John Wiley & Sons implemented trap URLs, which are designed to detect unauthorized automated downloading. However, this tactic was criticized for being clumsy and ended up unintentionally locking out subscribers.
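
The basic idea behind a trap URL is simple enough to sketch. The toy server below is a hypothetical illustration, not ACS's or Wiley's actual implementation (the paths and names are invented): each article page embeds a hidden link no human reader would ever click, and any client that requests that link is treated as an automated scraper and blocked from then on.

# Illustrative sketch only (hypothetical paths and names), not the publishers'
# actual system: pages embed an invisible "trap" link that no human would
# follow, and any client that requests it is flagged as an automated scraper.
from flask import Flask, abort, request

app = Flask(__name__)
flagged_ips = set()                      # clients that followed the trap link

TRAP_PATH = "/article/trap/9f3a2c"       # never shown to human readers

@app.route("/article/<article_id>")
def article(article_id):
    if request.remote_addr in flagged_ips:
        abort(403)                       # already identified as a bot
    # A real server would render the article; here we only embed the trap.
    return (
        "<html><body><h1>Example article</h1>"
        f'<a href="{TRAP_PATH}" style="display:none">.</a>'
        "</body></html>"
    )

@app.route(TRAP_PATH)
def trap():
    flagged_ips.add(request.remote_addr) # only automated crawlers end up here
    abort(403)

if __name__ == "__main__":
    app.run()

The drawback the article mentions follows directly from this design: anything that blindly follows every link on a page (a prefetching browser, a proxy, an overzealous library tool) can trip the trap and lock out a legitimate subscriber.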

Ruskin tells The Scientist that as illegal activity from sites like Sci-Hub increases, ACS has “increased its security technology accordingly.” In addition, he adds, the society recently hired a Chief Information Security Officer to oversee security efforts for the entire organization.

According to Elbakyan, these countermeasures have not had any noticeable effects. “Even though it is possible to introduce difficulties for Sci-Hub to access content, [it is] not possible to prevent this completely,” she writes.

Whether the ACS lawsuit or its added security measures will hinder the pirating of paywalled content remains to be seen. “The legal means have not worked, the technical means have not worked, and I don't see any other obvious way to do it,” Suber says. “So my reading is [that] Sci-Hub is here to stay.”
http://www.the-scientist.com/?articl...ainst-Sci-Hub/





With Over a Million Downloads, Meet Cameroon’s File Sharing Service Feem
Roy Morrison

Have you ever had issues with transferring files between devices? Then you might be happy to learn that there is a world-class file sharing solution that can solve your issue and it was built in Africa. In fact, it was built by Cameroonian entrepreneur Fritz Ekwoge and his team in Buea, Cameroon.

It’s called Feem. And it has over one million downloads from people all over the world. It has also been covered by CIO, the renowned tech magazine.

‘One of the features that makes Feem a great product is that you do not need an internet connection but only a shared network connection’

One of the features that makes Feem a great product is that you do not need an internet connection but only a shared network connection. Plus, file transfers won’t affect your data usage which is especially great in the African context where data bundles can often be quite expensive.
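
For readers wondering how a transfer can work with no internet at all, here is a minimal sketch of the underlying idea: two devices on the same Wi-Fi network talk to each other directly over a local socket, so no data ever leaves the router. This only illustrates the principle and is not Feem's actual protocol or code; the port number and file names are arbitrary.

# Minimal sketch of a direct file transfer between two devices on the same
# local network (the general idea behind tools like Feem); this is not
# Feem's actual protocol or code. The port number is arbitrary.
import socket
import sys

PORT = 52525

def receive(out_path):
    """Run on the receiving device: accept one connection and save the bytes."""
    with socket.socket() as srv:
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn, open(out_path, "wb") as f:
            while chunk := conn.recv(65536):
                f.write(chunk)
        print(f"received file from {addr[0]} -> {out_path}")

def send(peer_ip, in_path):
    """Run on the sending device: stream the file to the peer's LAN address."""
    with socket.create_connection((peer_ip, PORT)) as conn, open(in_path, "rb") as f:
        while chunk := f.read(65536):
            conn.sendall(chunk)

if __name__ == "__main__":
    # usage:  python transfer.py recv out.bin
    #         python transfer.py send 192.168.1.23 video.mp4
    if sys.argv[1] == "recv":
        receive(sys.argv[2])
    else:
        send(sys.argv[2], sys.argv[3])

What separates a product like Feem from a toy like this is everything around the transfer: discovering devices on the network automatically, encrypting the stream, resuming interrupted transfers and working identically across platforms.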

Roy Morrison: Why did you decide to build Feem?

Fritz Ekwoge: I created Feem to solve a personal pain point I had a few years ago.

To celebrate the initial success our company had in a previous pan-African tech venture, my partner, Sebastian, bought me an iPad. Cool device. But I quickly realised I couldn’t transfer the awesome photos and videos I took on the iPad to my PC.

Passing through the cloud would have been extremely slow and expensive here in Cameroon. You cannot use Bluetooth on an iPad to transfer files, and my PC didn’t have Bluetooth. Even if Bluetooth worked, it would have been a very slow option. Transferring a 1GB video over normal Bluetooth usually takes hours.

But most devices have Wi-Fi, and Wi-Fi is like 50X faster than Bluetooth. I searched around, and didn’t find any good offline Wi-Fi file transfer tool that worked between an iPad and a PC. So I created one. And I called it Feem.

RM: What have been your biggest challenges in building Feem?

FE: The first major challenge was educating users that Wi-Fi is not the internet. For some reason, everyone equated Wi-Fi and internet. They found it hard to believe Feem actually works offline.

The second biggest challenge was making Feem work across all major platforms. We wanted to stand out from our global competitors, so cross-platform support was a major part of our strategy. That strategy seems to be paying off.

We even did a major rewrite of Feem which made it easier for us to make Feem work consistently well across all platforms. The new Feem v4 is now available.

RM: What have been your biggest learnings in building Feem?

FE: Build global products locally. In other words: “glocalisation”. I created Feem to solve a local problem, but ended up creating one of the best offline file transfer tools used worldwide.

It astonishes me even to this day that most of our users are not even African. 40% are from India, 40% from the US, 19% from Europe, and less than one percent from Africa.

Our paying customers (mostly from the US and from Europe) don’t even know Feem is being developed in Cameroon.

RM: What did your marketing strategy look like to achieve 1m+ downloads?

FE: We didn’t have a big enough budget for marketing, so we focused a lot on being technically superior to any of our competitors. This included features like encryption, cross-platform support, folder file transfers, resumable file transfers, Wi-Fi Direct and local chat.

We also launched Feem around the time Edward Snowden revealed the NSA was spying on everyone, including Americans. Concerned consumers started looking for tools that ensured their privacy while transferring files between devices. Feem is one such tool. Most probably the best in its category.

This and many other factors helped us achieve our first one-million-plus downloads organically through word of mouth.

RM: Many apps have a notorious retention problem. How does Feem ensure that users stay active and keep using the app?

FE: It is a bit difficult for us to effectively measure retention since most of what happens in Feem happens offline.

Some app stores show uninstall rates, so we use that as a proxy for retention. It is only a proxy because it doesn’t account for our desktop users. For Feem in particular, we’ve observed that the more we improve the quality of our app, the fewer uninstalls we get.

RM: What have been the biggest milestones in building Feem?

FE: Our first $10 sale. This came from a woman in Australia, who was using Feem to transfer videos from her PC to her iPad for her son who had autism.

The second was winning first place in the 2015 JIC Starcube accelerator programme in the Czech Republic.

RM: In a post “From Africa or For Africa” on your blog you argued that African entrepreneurs should build products for the global market instead of the African market. Are you still of the same opinion? Why, or why not?

FE: Yes, I am still of the same opinion that African entrepreneurs should build products for the global market. It is even more relevant now than back then.

Every point I made in that blog post still applies now. I also gave a recent talk where I used the word “glocalisation”, to sum up why we Africans should be building more global products.

It is our duty to show that Africans are not only consumers of technology, but can also create technology so good that it can be exported.

Using ourselves as an example, focusing on the global market has put Feem in a unique position where we are competing against the best in the world in our niche. We were even featured on CIO.com.

To paraphrase @africatechie on Twitter, our goal is for African tech to simply be recognized as good tech, not as African tech. That kind of sums up our motivation for Feem. At our core, we want to offer the best offline file transfer experience on Earth. And also on Mars, when Elon Musk succeeds in colonizing that planet.

RM: Looking at the entrepreneurial ecosystem in Cameroon what has been the biggest progress over the last 10 years and where do you see the most room for improvement?

FE: I’m happy with the progress made at Silicon Mountain and ActivSpaces. More and more young people are interested in tech. We just need to build more global products.

RM: Where can people contact you?

FE: I can be contacted via email at fritz.ekwoge@feeperfect.com, or on Twitter: @ekwogefee
http://ventureburn.com/2017/08/feem/





Five Pirates Arrested For Pirating One Piece
Tim Midura

Five people in Japan were arrested on suspicion of violating the Copyright Law by leaking the popular manga One Piece online ahead of its official publication date. The suspects have reportedly earned 379 million yen ($3.47 million) in advertising revenue by putting scanned manga online over several years.

Ryoji Hottai, 31, along with another person, was arrested in July on suspicion of posting images from One Piece on a website run by the pair on July 21-22, 2016. Yo Uehara, 30, and two associates were arrested on Sept. 6 on suspicion of posting images, text and other content revealing entire One Piece stories on their website between July 22, 2016, and July 25, 2017. Hottai and Uehara have admitted to the crimes, according to police.

Shueisha, which publishes One Piece, issued a statement in response to the arrests, saying:

"The manga was created by the author who devoted his heart and soul to it. We feel strong anger that the suspects published it in an inappropriate manner and earned revenue from it. We hope that these arrests act as a warning against the continuing piracy epidemic and wrongful use of publications."

The accused's defense of only being pirates like they see in the manga is unlikely to hold up in court.
http://www.theouthousers.com/index.p...one-piece.html





Visitors 'Help' Pirate Bay Mine Virtual Cash

The Pirate Bay briefly put code on some of its web pages that used visitors' machines to mine a virtual currency.

The hidden code helped the file-sharing site generate coins for the Monero digital currency.

The Pirate Bay's administrators said using the code had been an experiment to see whether it could provide a way to end its reliance on ad revenue.

But many visitors objected to the code being foisted on them without any prior notification.

One Monero coin is currently worth about $100 (£74).

The code inserted on The Pirate Bay pages was under development, said the site's administrators, who also asked people for feedback.

"Do you want ads or do you want to give away a few of your CPU [central processing unit] cycles every time you visit the site?" they asked.

A note added to the blog post said an error in the code had caused it to try to grab all available CPU power to mine Monero.

This bug had now been fixed, the administrators said, and the mining code should now use at most 30% of available CPU power.
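
The throttle itself is a simple idea. The sketch below is a conceptual illustration only, not The Pirate Bay's or Coin-Hive's actual code (the real miner is JavaScript running in visitors' browsers, and Monero mining uses a memory-hard algorithm rather than SHA-256): do work in short bursts and sleep for the rest of each time slice, so the load on one core stays near the chosen cap.

# Conceptual illustration of a CPU throttle, not the actual miner: hash in
# short bursts, then sleep, so average usage of one core stays near the
# requested cap (e.g. 30%).
import hashlib
import os
import time

def mine(cpu_cap=0.30, slice_seconds=0.1):
    busy = slice_seconds * cpu_cap        # time spent hashing per slice
    idle = slice_seconds - busy           # time spent sleeping per slice
    while True:
        deadline = time.perf_counter() + busy
        while time.perf_counter() < deadline:
            # Stand-in for the real proof-of-work inner loop.
            hashlib.sha256(os.urandom(64)).digest()
        time.sleep(idle)

# mine(cpu_cap=0.30)  # roughly 30% of one core, as the fixed code intended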

The Pirate Bay had adapted code from software company Coin-Hive for its test.

Many crypto-currencies work by getting participants in the network to run code that verifies who has spent or transferred which coins.

The reward for carrying out this work, known as mining, is typically newly minted coins.

Ad-blockers or browser add-ons that stopped scripts being run on web pages would also disable the mining code, the administrators said.

File-sharing news site Torrent Freak said the test had been carried out for 24 hours over the weekend and many people had complained.

Many had called the idea "dumb" and urged The Pirate Bay to disable the code.

Others said mining virtual cash was an "interesting idea" but criticised The Pirate Bay for not warning users about the test.

"Agree on the overall goal," wrote one contributor, "but not so on the way it runs without explicit knowledge or authorisation of users."
http://www.bbc.com/news/technology-41306384





Secret Documents Reveal: German Foreign Spy Agency BND Attacks the Anonymity Network Tor and Advises Not to Use it

The German spy agency BND developed a system to monitor the Tor network and warned federal agencies that its anonymity is „ineffective“. This is what emerges from a series of secret documents that we are publishing. The spies handed a prototype of this technology over to the NSA, in expectation of a favor in return.
Andre Meister

„If you’re an Internet newcomer and want to get up to speed without all the intimidating technical jargon, The Internet For Dummies has you covered.“ That is how the publisher promotes its book in the popular „For Dummies“ series. Like many people, the German foreign spy agency Bundesnachrichtendienst (BND) bought a copy in 2005, to „get familiar“ with this internet. That is what the engineer Harald Fechner testified before the German parliamentary committee investigating the NSA spying scandal two years ago.

This is a creative version of the truth. Until his retirement in June 2009, Fechner was head of the BND’s Signals Intelligence Directorate and therefore responsible for the spy agency’s internet surveillance. He had more than a thousand spies „specifically intercepting communication streams“ – via radio waves, telephone cables and tapped fibre-optic cables.

Secret BND hacker unit

His command of the SIGINT Directorate also included a secret hacker unit, responsible for „operative technological attacks on IT systems“ all over the world. Like every department of the spy agency, these hackers constantly changed their name: until August 2008, they were called „Unit 26E“ (Operational Support and Listening Technology), then „Working Group TX“ (IT Operations) and finally „Sub-Directorate T4“ (Cyber Intelligence).

Within the spy agency, the hackers became famous in 2007 when one of them eavesdropped on his girlfriend’s romantic e-mails with a Bundeswehr soldier; this so-called LOVEINT incident made the rounds internally. The public learned about the hacker unit one year later, when it was revealed that they had infiltrated the computer network of the Afghan Ministry of Trade. Not only did the spies read the e-mails of the minister – officially a friend of Germany – but also mails from the German journalist Susanne Koelbl.

Harald Fechner remembers this well, as the hacking attack against the journalist led him to the final step of his 28-year career at the BND. On the same day the magazine Der Spiegel revealed this scandal, the former head of the SIGINT Directorate, Dieter Urmann, was demoted. Fechner became his successor and kept that position until his retirement.

Gathering of the spies

While these events unfolded in Germany, the BND agent operating under the initials „H.F.“ was on a work trip to the USA. The President was still George W. Bush – and his hand-picked CIA support staff abroad sometimes included Edward Snowden. H.F. was the NSA’s guest at its headquarters in Fort Meade, attending the annual SIGINT Development Conference, where more than a thousand agents discussed the latest developments in surveillance technology. While the BND was under pressure in Germany, it could shine here.

At the invitation of the NSA, H.F. presented an attack on the Tor network which the BND hackers had developed shortly before. The „onion router“ is a network for anonymizing internet traffic and has become „the king of high-secure, low-latency internet anonymity“. Millions of people around the world use Tor to protect against surveillance and censorship.

Tor was originally created by the US military to disguise spy agencies’ activities on the internet, and it still receives a large part of its funding from the US government, to circumvent „technologies of internet repression, monitoring and control“ in authoritarian states. But Tor is not only an annoyance for dictators; agencies of western countries also want to de-anonymize Tor users. And the BND is keen to help.

Attack on the Tor network

A few weeks prior to the conference, the BND hackers from Unit 26E „developed the idea of how the Tor network could be monitored relatively easily“, according to internal BND documents. Tor was already well known at the time and had 200,000 active users all over the world. When project leader Roger Dingledine explained the development at the CCC Congress and in a police station in Stuttgart, the hackers of the BND listened carefully.

In March 2008, the spy agency filled in its partners from the USA and UK. When a foreign delegation visited Munich, the SIGINT unit presented „the anonymity network Tor and a possible disbandment of the anonymity feature“, the BND writes in its internal report. In order to implement the plan, the BND hoped for „an international cooperation with several foreign intelligence agencies“.

Both NSA and GCHQ expressed „a high interest“ and offered support. The three spy agencies decided on further meetings and the creation of a project group, while the BND planned to set up its own Tor exit node server, as well as a „test capture“ and „evaluation with the NSA“.

Far ahead of the Yanks

In April, the BND agent H.F. presented the work of the German hackers to the anti-terror coalition of the European spy agency club SIGINT Seniors Europe. Afterwards, he was invited to the SIGDEV conference by the NSA at its headquarters. Yet again, his presentation was a success: The other spy agencies showed themselves „impressed with our work on Tor servers“, the BND writes, its work being „far ahead of the Yanks“.

As a result, the NSA promised „a technical review by its experts“, with the goal of implementing the project. Only a week later, H.F. was again invited by the NSA, this time accompanied by „M.S.“ from the hacker unit, and this time to the BND’s Bad Aibling station in Bavaria, where the NSA liaison unit SUSLAG has a building of its own. H.F. and M.S. joined a video conference with NSA experts to clarify further questions and ideas. Among other documents, we are publishing the report of this conference.

Both BND and NSA agree that „the Tor network is the most established system for anonymity on the internet“ and „other systems only play a minor role“. The spy agencies expected a continued growth of the Tor network, which would „continue to pose a problem for several years“. The spies assumed that „efforts for an attack are worthwhile“. Their goal was to break Tor’s anonymity.

Efforts to find an attack angle

How exactly the spy agencies want to crack Tor remains vague. Tor is transparent and open in order to promote research and feedback. Not only are design, specification and source code public, but also a bibliography of research papers on anonymity. This openness not only helps researchers, but also Tor itself: the system is regularly analyzed – and if a vulnerability is identified, it is fixed.

The BND hackers told the NSA about „a possibility to penetrate the Tor network“, a term commonly used for the infiltration of IT systems. In this case, the documents suggest that the spy agencies wanted to exploit a design decision Tor publicly specified.

The principle of „onion routing“ is to transmit internet traffic through three intermediary servers, so that no point in the network knows both sender and receiver. With this technique, Tor prevents many surveillance and censorship measures, and does so better than a „Virtual Private Network“ (VPN) with only one intermediate server. But of course it cannot prevent all of them.
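
The layering can be illustrated with a toy example. The sketch below shows only the principle and is not Tor's actual cryptography (real circuits negotiate a key with each hop and use different primitives): the client wraps a message in three layers of encryption, and each relay can peel off exactly one layer, so the first hop sees who is sending but not what or to whom, while the exit sees the destination but not the sender.

# Toy illustration of onion layering, not Tor's actual cryptography: the
# client wraps the message once per relay, and each relay removes only its
# own layer. (Here the client simply generates all three keys; real Tor
# negotiates a key with each hop when building the circuit.)
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # guard, middle, exit

def wrap(message: bytes) -> bytes:
    """Client: encrypt for the exit first, then the middle, then the guard."""
    for key in reversed(relay_keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(layered: bytes, hop: int) -> bytes:
    """Relay number `hop`: remove one layer and pass the rest along."""
    return Fernet(relay_keys[hop]).decrypt(layered)

onion = wrap(b"GET https://example.org/")
for hop in range(3):                       # guard -> middle -> exit
    onion = peel(onion, hop)
print(onion)                               # only the exit sees the plaintext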

A global passive adversary

Like all low-latency anonymity systems used in practice, Tor cannot protect against „a global passive adversary“. This is defined in the design document. The software documentation warns: „If your attacker can watch the traffic coming out of your computer, and also the traffic arriving at your chosen destination, he can use statistical analysis to discover that they are part of the same circuit.“ The goal of NSA’s and GCHQ’s internet surveillance is to achieve exactly that.

A number of researchers have demonstrated this attack in practice, whether by simply counting transmitted packets, by analyzing time windows, or by running correlation attacks with only a fraction of the traffic. All this research is public. The spy agencies followed this research, used it for their own purposes and turned theoretical vulnerabilities into real-world surveillance systems.
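
A toy version of the correlation idea, based only on the public research described above and not on any agency's system, looks like this: bin the packet timestamps seen on the entry side and on the exit side into time windows, then correlate the two volume series. Flows belonging to the same circuit correlate strongly even though the packets in between are encrypted.

# Toy entry/exit traffic correlation, illustrating the public research idea
# (not any agency's actual system): bin packet timestamps into one-second
# windows and compute the Pearson correlation of the two volume series.
import numpy as np

def volume_series(timestamps, window=1.0, duration=60.0):
    bins = np.arange(0.0, duration + window, window)
    counts, _ = np.histogram(timestamps, bins=bins)
    return counts

def correlation(entry_ts, exit_ts):
    a = volume_series(entry_ts)
    b = volume_series(exit_ts)
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic example: the matching exit flow is the entry flow plus ~0.2 s latency.
rng = np.random.default_rng(0)
entry      = np.sort(rng.uniform(0, 60, size=500))
exit_match = entry + 0.2
exit_other = np.sort(rng.uniform(0, 60, size=500))
print(correlation(entry, exit_match))   # close to 1.0 -> likely the same circuit
print(correlation(entry, exit_other))   # near 0      -> unrelated traffic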

The BND hackers based their attack on „a paper by an American university“, which they handed over to the NSA. During the video conference in Bad Aibling, the BND responded to questions and presented a timetable with further steps. The Germans planned to set up their own Tor network in a lab within „six to eight weeks“ in order to better understand the system and to verify the research paper.

Test network and proof of concept

The NSA was clearly enthusiastic about the BND’s presentation, wanted to work closely together, and especially wanted access to the test results. The Americans were „visibly astonished“ by the activity of the Germans. Although the BND considered its progress „a little more advanced than the NSA“, Pullach also wanted Fort Meade to participate: The project „would have a considerably greater prospect for success in a combined effort with partners“.

The NSA agreed to contact the university to learn more about the research paper. The BND started its work, set up the test network and developed a „proof of concept“ for the attack, a prototype. The Germans wanted to deliver first results only a month after the video conference. SIGINT chief Harald Fechner planned to visit the USA in October and discuss the issue with NSA Director Keith Alexander.

But then the project experienced a setback. The hacker unit „IT operations“ was reorganized and the people involved in the Tor project were „dispersed within the unit“ into two different areas. Nevertheless, NSA headquarters hosted another meeting on the topic in December 2008, „by far the most intense in terms of the number of participants and competence. The room was packed.“

A promise to the Yanks

The transition of the US presidency from George W. Bush to Barack Obama set the project in motion again. On the day of the inauguration, the BND’s „leadership support“ prepared another visit of SIGINT chief Fechner to the NSA in Fort Meade. In internal e-mails, the hackers were ordered to reactivate the project. After all, the BND had to keep „a promise to the Yanks“.

From that point, M.S. took over the project. He complained that „brilliant staff“ was a „scarce resource“ and about the lack of interest within the BND. After he presented the system internally, „there was no more feedback“. From then on, he stated, „further development is primarily geared to the needs of the partner“, meaning the NSA. The proof of concept was already „a good status to talk to the experts of the Yanks“.

For BND’s leadership, this was opportune. While they hoped that BND analysts could be „pushed“ to work on Tor, their true goal was bigger. The BND wanted something from the NSA: a technology from the „field of cryptanalysis“, to decipher encrypted communication. The Germans knew from experience that Fort Meade would not easily hand over the object of desire. So they collected items to trade with the Americans; the attack against Tor was „another building block“ for this gift package.

Vegetable chopper against onions

The BND’s leadership gave M.S. the order to write up a concept paper within one month. And he delivered. On 20 February 2009, the 16-page „concept for tracking internet traffic, which has been anonymized with the Tor system“ was finalized. The cover is far from modest: He placed a vegetable chopper over an onion, the logo of the Tor network.

To justify the attack on Tor, M.S. quoted a law enforcement conference in Berlin that year, held under the motto „WWW – the virtual crime scene“. For the chapter on „How the Tor network works“, the author kept it simple: he copied the text from Wikipedia and took images from the Tor website.

Precisely how the BND plans to „chop“ Tor is unfortunately redacted in the document we obtained. But as before, the spy agency refers to public research. To implement the attack, it is likely that the spies run their own servers in the Tor network. M.S. points to passive snooping servers, which are presumably operated by the NSA, and emphasizes the „protection of the anonymity“ of the spy agencies.

Highly interested in access

Three weeks after the concept paper, the British reiterated their demand. The GCHQ resident in Berlin and three other high-ranking spies of the queen visited Pullach on 11 March 2009. At the BND headquarters, they were welcomed by SIGINT chief Harald Fechner, who brought seven other senior SIGINT staff members. The purpose of the meeting was to develop their SIGINT cooperation, especially „regarding anonymity services“.

The British wanted to participate: The GCHQ „is very interested in the SIGINT unit’s access to the Tor network“, the internal report says. Both parties agreed to arrange further technical discussions and a „joint workshop on possible technical and operational procedures“.

Five days after the visit from the island, SIGINT chief Fechner flew across the Atlantic, the concept paper of M.S. in his bag. The Americans gladly accepted his offer – the NSA and GCHQ took over the project. Whether the BND received the compensation it hoped for remains unknown. When we confronted the BND with a set of specific questions, we received only the boilerplate answer: „As a matter of principle, the BND talks about operational aspects of its work only with the Federal Government and the competent authorities of Parliament.“

Very high level of surveillance

One and a half years later, the BND warned German federal agencies not to use Tor. The hacker unit „IT operations“ entitled its report: „The anonymity service Tor does not guarantee anonymity on the internet“. The six-page paper was sent to the chancellery, ministries, secret services, the military and police agencies on 2 September 2010.

According to the executive summary, Tor is „unsuitable“ for three scenarios: „obfuscating activities on the internet“, „circumventing censorship measures“ and „computer network operations for intelligence services“ – spy agency hacking. The BND assumes „a very high level of surveillance within the network“, including the possibility that anyone can „set up their own so-called exit nodes for monitoring“.

In a technical description, the BND explains how Tor works. The pictures are copied again, from a personal website and the Electronic Frontier Foundation, though in outdated formats. Moreover, the BND gets it partly wrong: its statement that „information about the running Tor nodes is downloaded from a server in unencrypted form“ had not been true for over two years at the time of writing. After Iran identified and blocked these downloads, they have been encrypted since 2007.

Not convinced of the legality

In its announcement, the spy agency presents a strong hypothesis. According to the BND, „Tor is predominantly used to conceal activities, where users are not convinced of the legality of their actions. The number of Tor users who aim at preserving anonymity out of mere privacy considerations is relatively small.“ The BND bases this statement on „several pieces of intelligence“, but does not underpin it with any facts.

We reached out to several people from the Tor project, but nobody had any idea how the BND came up with this hypothesis. „That sounds like nonsense,“ says IT security advisor Jens Kubieziel, who is a system administrator for the Tor project and runs large Tor exit nodes. The Chaos Computer Club also operates some of the major servers of the Tor network. „Compared to the amount of traffic and the millions of connections anonymized by Tor every day, the number of inquiries about illegal activities is negligibly low,“ says lawyer Julius Mittenzwei, one of the project managers and a former member of Tor’s board of directors.

Spy agencies and other agencies worldwide „have ways to counter anonymity. One of them is to set up own Tor nodes and monitor those intensively to gather intelligence and evidence“, the BND continues. The spies do not treat this as a secret: „Some agencies have already reported about installing their own Tor nodes and using the logged data for different projects and criminal investigations.“

Disguise not provided

The BND sees clear proof that spy agencies operate Tor servers by looking at the location of various servers, especially „in the vicinity of Washington, D.C.“. The spies assume that „various agencies provide these nodes“. The document does not specify whether the spy agency only suspects this, read it on the internet, was told so by NSA – or gave that idea to the NSA in the first place.

However, the BND is so convinced that it warns the most important German federal agencies not to use Tor. The conclusion of its assessment: „Users of anonymity software expect a level of disguise, which known and widely used anonymity services do not provide.“

Not only does the BND think Tor is unsafe, they also advise against using hacked systems as proxy servers: „The use of a compromised system for camouflage by spy agencies is known to be ultimately ineffective and appears only plausible for diversion maneuvers.“ The „IT operations“ department must know this of course – and so they warn their fellow state hacker colleagues from federal police, domestic spy agency and military.

Tempora and XKeyscore

Looking at the activities of the NSA and GCHQ, the BND’s concern might just be justified. Two years after the Germans presented their gift, the spy agencies continued their work on breaking Tor. The efforts of the British team are documented in the GCHQ’s internal wiki, published by the German magazine Der Spiegel from the Snowden archive. Their goal is to deanonymize Tor, or in their own words: „if given some traffic from a Tor exit node, […] find the IP address of the user associated with that traffic.“

According to the wiki, the research began in December 2010. The British gave up on trying to follow the path of a circuit through the Tor network. Instead, they launched „an entry-exit correlation attack“, correlating the internet traffic from the sender to the network and from the network to the receiver. As the GCHQ massively intercepts internet traffic and runs its own Tor servers, this is not difficult. As early as June 2011, they finalized an 18-page study and source code in the statistical programming language R, accompanied by a slide presentation.

The NSA also scores a success. In 2011, they implemented „several fingerprints and a plugin“ in their powerful XKeyscore system, in order to recognize and deanonymize Tor users. German public broadcasters published some of these XKeyscore rules. According to the code, the NSA monitors all internet users who visit the Tor website, use the Tor software, or simply search for Tor or the Tor operating system Tails.

Egotistical giraffe

Despite all attacks, the NSA still honors Tor as the „king of high-secure, low-latency internet anonymity“. Even if spy agencies that intercept large parts of the internet might deanonymize some Tor users some of the time, it is unlikely that they are able to deanonymize all Tor users all of the time. The NSA writes that it has „no smoking gun yet :-(“

Anonymity and encryption share a common feature: both are easier to circumvent than to crack. Anyone who breaks into a computer can decrypt its communication and identify its users. The NSA and GCHQ have been doing exactly this since at least 2013: under the code name Egotistical Giraffe, they hack the Firefox-based Tor Browser, infect the operating system and thereby solve their self-proclaimed „Tor problem“. Even the FBI has carried out and admitted to such attacks.

But sometimes it is enough to take advantage of mistakes that surveilled targets make. LulzSec hacker Hector Monsegur was identified because he revealed his IP address just once. Stratfor hacker Jeremy Hammond was identified because the FBI correlated the times when his home Wi-Fi was in use. Silk Road founder Ross Ulbricht was identified because he gave away his pseudonym. A recent study examines these „technical limitations of anonymity and the operational security challenges that Tor users will encounter“.

No purely technical measures

The domestic German spy agency was, however, less successful. Even though the „Federal Office for the Protection of the Constitution“ received the memo from the BND, it still had problems identifying Tor users two years later. While visiting Washington in June 2012, a delegation asked the NSA if it could „identify“ or „decrypt“ Tor. The American answer did not satisfy them. In the assessment of the trip, the Germans write that the visit was „strategically important“, but „was more about relationship management“.

Well-funded international spy agencies continue to refine their attacks. But the Tor community also continues to improve the project and fight off attacks – in close collaboration with the privacy research community. Project leader Roger Dingledine is skeptical as to whether spy agencies are able to make their attacks „work at scale“. Nevertheless, the documents show „that we need to keep growing the Tor network so it’s hard for even larger attackers to see enough Tor traffic to do these attacks.“

But that is not enough, according to Dingledine: „We as a society need to confront the fact that our spy agencies seem to feel that they don’t need to follow laws. And when faced with an attacker who breaks into Internet routers and endpoints like browsers, who takes users, developers, teachers, and researchers aside at airports for light torture, and who uses other ‚classical‘ measures – no purely technical mechanism is going to defend against this unbounded adversary.“
https://netzpolitik.org/2017/secret-...not-to-use-it/





The NSA's Weird Interest In File Sharing Programs
Tim Cushing

Another large Snowden document dump from The Intercept uncovers many more off-brand uses of NSA surveillance tools. The pile of documents comes from the NSA's "SID (Signals Intelligence Directorate) Today" files, of which there are apparently thousands of available pages. The documents released late last week show that if it happened online, the NSA was looking at it.

According to documents provided by NSA whistleblower Edward Snowden, the spy agency formed a research group dedicated to studying peer-to-peer, or P2P, internet traffic. NSA didn’t care about violations of copyright law, according to a 2005 article on one of the agency’s internal news sites, SIDtoday. It was trying to determine if it could find valuable intelligence by monitoring such activity.

But it appears the NSA found very little worth observing.

“By searching our collection databases, it is clear that many targets are using popular file sharing applications,” a researcher from NSA’s File-Sharing Analysis and Vulnerability Assessment Pod wrote in a SIDtoday article. “But if they are merely sharing the latest release of their favorite pop star, this traffic is of dubious value (no offense to Britney Spears intended).”

The info in the SID Today publication [PDF] is a bit dated, as it shows BitTorrent trailing applications like eDonkey and KaZaa. Even though it was mostly popular albums traversing the internet pipes, the NSA still formed a File-sharing Analysis and Vulnerability Assessment (FAVA) "pod" to poke away at the infrastructure and search the shared files for data of national security interest. To do this, it had to strip away the layers of protection lying between the NSA and the contents of the files.

As many of these applications, such as KaZaA for example, encrypt their traffic, we first had to decrypt the traffic before we could begin to parse the messages. We have developed the capability to decrypt and decode both KaZaA and eDonkey traffic to determine which files are being shared, and what queries are being performed.

Breaking the encryption allowed the NSA to peer into users' computers via their shared folders, as well as harvest email addresses, country codes, user names, and lists of recent searches.

Even so, there was little actual intelligence to be gathered from the most popular file sharing applications of a decade ago. But that laid the groundwork for further examination of file sharing for national security reasons. A program called GRIMPLATE tracked BitTorrent use by Defense Dept. employees, checking to see if any of the swarms travelling in and out of the DoD's safe spaces was "malicious" -- a definition that presumably covers DoD employee exfiltration of sensitive files as well as possibly-harmful programs being downloaded to DoD computers.

Over in the UK, GCHQ was taking much more proactive steps toward turning torrent traffic into both a weapon and a source of intel.

The page describes DIRTY RAT, a GCHQ web application used by analysts that at the time had “the capability to identify users sharing/downloading files of interest on the eMule (Kademlia) and BitTorrent networks. … For example, we can report on who (IP address and user ID) is sharing files with ‘jihad’ in the filename on eMule. If there is a new publication of an extremist magazine then we can report who is sharing that unique file on the eMule and BitTorrent networks.”

The RAT was also tasked with gathering info to be shared with law enforcement. Child porn is name-checked in the document, as are the London Metro Police and FBI. But GCHQ wasn't interested in merely collecting info on users sharing illicit content. It also wanted to use the sharing platforms for malware delivery.

A tool called PLAGUE RAT “has the capability to alter the search results of eMule and deliver tailored content to a target,” the wiki article states. “This capability has been tested successfully on the Internet against ourselves and testing against a real target is being pursued.”

File sharing hasn't gone away, so it's indisputable both agencies are still eyeballing BitTorrent traffic. Considering a number of exfiltrated docs/software have been shared via the service, there are probably files of national security interest circulating along with movies, music, and games.
https://www.techdirt.com/articles/20...programs.shtml





CIA Discovered to be Routinely Hacking Home Wi-Fi Routers Made by Linksys, DLink and Belkin to Monitor All Your Internet Traffic

The mass spying on the American people continues.
Jayson Veley

According to decade-old documents released earlier this month by Wikileaks, the Central Intelligence Agency has the ability to hack into people’s Wi-Fi routers and gather information regarding Internet searches. The CIA has targeted home routers from 10 U.S. manufacturers, including Linksys, DLink and Belkin.

The firmware, which has been given the codename “CherryBlossom,” runs on a total of 25 router models with the potential to run on dozens more after modifications are made.

“The Cherry Blossom (CB) system provides a means of monitoring the internet activity of and performing software exploits on targets of interest,” reads the ten-year-old document. “In particular, CB is focused on compromising wireless networking devices, such as wireless (802.11) routers and access points (APs), to achieve these goals.”

The document goes on to say that routers with weak passwords can be easily broken into, and that the firmware is particularly effective when it comes to DLink’s DIR-130 model and the Linksys-manufactured WRT300N model.

Once the hacking process is complete and CherryBlossom is fully installed on the Internet router, the device begins to send messages called beacons to a server that is run and controlled by the CIA, codenamed “CherryTree.”

At this point, the CIA has the ability to analyze the router’s status and web traffic via a web-based user interface called “CherryWeb.” The infected router is assigned a “mission,” which usually has to do with targeting a specific laptop or phone inside of the house by using information such as IP and email addresses, chat user names and MAC addresses.
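
Stripped of everything specific, the beacon-and-mission flow described above is an ordinary check-in loop. The sketch below is a generic illustration based only on the WikiLeaks summary quoted here; the URL, field names and reply format are invented for the example and are not taken from the actual CherryBlossom or CherryTree software.

# Generic illustration of a beacon/tasking loop (names and URL are invented;
# this is not CherryBlossom code): the device periodically reports its status
# to a command server and receives whatever "mission" the server replies with.
import json
import time
import urllib.request

COMMAND_SERVER = "https://command.example/beacon"    # stand-in for "CherryTree"

def send_beacon(device_id):
    """POST the device status and return the server's tasking reply."""
    status = {"device": device_id, "uptime_s": int(time.monotonic())}
    req = urllib.request.Request(
        COMMAND_SERVER,
        data=json.dumps(status).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)            # e.g. {"mission": {"watch_mac": "..."}}

def run(device_id, interval_s=600):
    while True:
        try:
            print("current tasking:", send_beacon(device_id))
        except OSError:
            pass                          # server unreachable; retry later
        time.sleep(interval_s)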

The CIA documents that have been released by Wikileaks date back to the year 2007, which means that this practice has been going on for ten years without our knowledge and without our consent.

When you think about all of the ways in which the federal government and big corporations like Amazon are spying on the American people, it’s both shocking and unnerving to know that most of this spying occurs in the comfort of our own homes. As technology continues to advance, and as that technology becomes more and more accessible to average American citizens, there is an ever-expanding list of ways in which our Fourth Amendment rights are being infringed upon.

Take, for example, Smart TVs manufactured by Vizio, which have the ability to track your viewing habits and share that information with third parties through a feature called “Smart Interactivity.” Over ten million devices are built with this feature, meaning that Vizio is spying on millions of good, law-abiding Americans as they sit in the comfort of their living rooms.

“Non-personal identifiable information may be shared with select partners… to permit these companies to make, for example, better-informed decisions regarding content production, programming and advertising,” Vizio explained in a statement.

LG and Samsung have similar features on their own Smart TVs, though they are not nearly as invasive as Vizio, which collects data regarding what you’re watching (such as TV or Netflix), how you’re watching it (recorded or live), as well as when you’re watching it. Unlike other companies, Vizio has the ability to link the viewing patterns that it collects from users with the IP address, which increases the overall value of the data. This practice is illegal, but Vizio claims that the laws do not apply to its business.

This has got to stop. Some may argue that it’s really not a big deal if their routers are hacked or their television viewing habits are monitored, but at this rate, it will become a big deal in the not-so-distant future. That is why it is up to the American people to stand up for their Fourth Amendment rights before it is too late.
http://www.computing.news/2017-06-26...t-traffic.html





Mueller Just Obtained a Warrant that Could Change the Entire Nature of the Russia Investigation
Natasha Bertrand

• Robert Mueller obtained a search warrant for records of "inauthentic" Facebook accounts
• It's bad news for Russian election interference "deniers"
• Mueller may be looking to charge specific foreign entities with a crime

Special counsel Robert Mueller reportedly obtained a search warrant for records of the "inauthentic" accounts Facebook shut down earlier this month and the targeted ads these accounts purchased during the 2016 election.

The warrant was first disclosed by the Wall Street Journal on Friday night and the news was later confirmed by CNN.

Legal experts say the revelation has enormous implications for the trajectory of Mueller's investigation into Russia's election interference, and whether Moscow had any help from President Donald Trump's campaign team.

"This is big news — and potentially bad news for the Russian election interference 'deniers,'" said Asha Rangappa, a former FBI counterintelligence agent.

Rangappa, now an associate dean at Yale Law School, explained that to obtain a search warrant a prosecutor needs to prove to a judge that there is reason to believe a crime has been committed. The prosecutor then has to show that the information being sought will provide evidence of that crime.

Mueller would not have sought a warrant targeting Facebook as a company, Rangappa noted. Rather, he would have been interested in learning more about specific accounts.

"The key here, though, is that Mueller clearly already has enough information on these accounts — and their link to a potential crime to justify forcing [Facebook] to give up the info," she said. "That means that he has uncovered a great deal of evidence through other avenues of Russian election interference."

It also means that Mueller is no longer looking at Russia's election interference from a strict counterintelligence standpoint — rather, he now believes he may be able to obtain enough evidence to charge specific foreign entities with a crime.

Former federal prosecutor Renato Mariotti, now a partner at Thompson Coburn LLP, said that the revelation Mueller obtained a search warrant for Facebook content "may be the biggest news in the case since the Manafort raid."

The FBI conducted a predawn raid on the home of Trump's former campaign chairman, Paul Manafort, in late July. The bureau is reportedly investigating Manafort's financial history and overseas business dealings as part of its probe into possible collusion between the campaign and Moscow.

The Facebook warrant "means that Mueller has concluded that specific foreign individuals committed a crime by making a 'contribution' in connection with an election," Mariotti wrote on Saturday.

"It also means that he has evidence of that crime that convinced a federal magistrate judge of two things: first, that there was good reason to believe that the foreign individual committed the crime. Second, that evidence of the crime existed on Facebook."

That has implications for Trump and his associates, too, Mariotti said.

"It is a crime to know that a crime is taking place and to help it succeed. That's aiding and abetting. If any Trump associate knew about the foreign contributions that Mueller's search warrant focused on and helped that effort in a tangible way, they could be charged."

Congressional intelligence committees are homing in on the campaign's data operation as a potential trove of incriminating information.

Democratic Rep. Adam Schiff, the ranking member of the House Intelligence Committee, told MSNBC earlier this month that he wants to know how sophisticated the Russian-bought ads were — in terms of their content and targets — to determine whether they had any help from the Trump campaign.

The House Intelligence Committee also wants to interview the digital director for Trump's campaign, Brad Parscale, who worked closely with Trump's son-in-law Jared Kushner.

Kushner was put in charge of the campaign's entire data operation and is now being scrutinized by the FBI over his contacts with Russia's ambassador and the CEO of a sanctioned Russian bank in December.

Facebook said in its initial statement that about 25% of the ads purchased by Russians during the election "were geographically targeted," and many analysts have found it difficult to believe that foreign entities would have had the kind of granular knowledge of American politics necessary to target specific demographics and voting precincts.

In a post-election interview, Kushner told Forbes that he had been keenly interested in Facebook's "micro-targeting" capabilities from early on.

“I called somebody who works for one of the technology companies that I work with, and I had them give me a tutorial on how to use Facebook micro-targeting,” Kushner said.

“We brought in Cambridge Analytica," he continued. "I called some of my friends from Silicon Valley who were some of the best digital marketers in the world, and I asked them how to scale this stuff . . . We basically had to build a $400 million operation with 1,500 people operating in 50 states, in five months to then be taken apart. We started really from scratch."
http://www.businessinsider.com/muell...ccounts-2017-9





Journalist Nearly Banned from YouTube and Gmail For Posting Al-Qaeda Videos From Chelsea Manning Trial
Dell Cameron

YouTube’s latest push to ban terrorist propaganda across its ubiquitous video platform is getting off to a rough start. Earlier this week, noted investigative reporter and researcher Alexa O’Brien woke to find that not only had she been permanently banned from YouTube, but that her Gmail and Google Drive accounts had been suspended as well. She would later learn that a reviewer who works for Google had mistakenly identified her channel, in the words of a YouTube representative, as “being dedicated to terrorist propaganda.”

This drastic enforcement action followed months of notifications from YouTube, in which O’Brien was told that three of her videos had been flagged for containing “gratuitous violence.” Only one of the videos, however, depicts any actual violence: footage of American helicopter pilots gunning down civilians in Iraq, which has been widely viewed on YouTube for half a decade.

While appealing YouTube’s decision, O’Brien learned that the mechanism for correcting these mistakes can be vexing, and that a fair outcome is far from guaranteed. By Wednesday morning, her channel was slated for deletion. The Google Drive account she was locked out of contained hundreds of hours of research—years' worth of her work—and was abruptly taken offline. She was then told that she was “prohibited from accessing, possessing or creating any other YouTube accounts.” The ban was for life, and with little explanation and zero human interaction, O’Brien’s research, much of it not accessible elsewhere, was bound for Google’s trashcan.

With the knowledge that YouTube has faced increased pressure from the US and European governments to crack down on the spread of terrorist propaganda—a crackdown that has led to the disappearance of content amassed by conflict reporters—it wasn’t difficult to deduce what had happened to O’Brien’s account.

The problem was eventually addressed and representatives of both Google and YouTube later called O’Brien to apologize and explain the error. When she was told that her channel had been misidentified as an outlet for terrorist propaganda, she could hardly contain her laughter. “It was a series of unfortunate events,” a YouTube rep told her. The mistake, they explained, was the fault of a human reviewer employed by Google.

A spokesperson for Google told Gizmodo on Friday: “With the massive volume of videos on our site, sometimes we make the wrong call. When it’s brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it.”

This year, YouTube has begun increasingly relying on machine learning to find and scrub extremist content from its pages—a decision prompted by the successful online recruiting efforts of extremist groups such as ISIS. With over 400 hours of content uploaded to YouTube every minute, Google has pledged the development and implementation of systems to target and remove what it calls “terror content.”

Last month, a YouTube spokesperson admitted, however, that its programs “aren’t perfect,” nor are they “right for every setting.” But in many cases, the spokesperson said, its AI has proven “more accurate than humans at flagging videos that need to be removed.” In a call Wednesday, a YouTube representative told O’Brien: “Humans will continue to make mistakes, just like any machine system would obviously be flawed.” The machine, which prioritizes the content reviewed by human eyes, wasn’t “quite ready,” she said, to recognize the context under which controversial content is uploaded.
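
To make that division of labor concrete, here is a minimal, purely illustrative sketch of the kind of triage pipeline described above: a model assigns each upload a score, and the score only decides how urgently a human looks at it. The thresholds, field names and routing labels are assumptions for illustration, not YouTube's actual system.

from dataclasses import dataclass

@dataclass
class Upload:
    video_id: str
    extremism_score: float  # 0.0-1.0, produced by some upstream ML model (assumed)

def route_upload(upload: Upload, high: float = 0.95, low: float = 0.60) -> str:
    """Decide how an upload is reviewed based on its model score.

    High scores are queued for urgent human review, mid-range scores go to a
    normal review queue, and everything else stays published. In this sketch
    the model never removes anything on its own; a human reviewer makes the
    final call, which is where context (captions, descriptions) matters.
    """
    if upload.extremism_score >= high:
        return "priority_human_review"
    if upload.extremism_score >= low:
        return "standard_human_review"
    return "published"

# A borderline video gets a human look rather than an automatic takedown.
print(route_upload(Upload("abc123", extremism_score=0.72)))  # standard_human_review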

The O’Brien incident demonstrates that Google has many miles to go before its AI and human reviewers are skilled enough to distinguish between extremist propaganda and the investigative work that even Google agrees is necessary to broaden the public’s knowledge of the intricate military, diplomatic, and law enforcement policies at play throughout the global war on terror.

Al-Qaeda and The As-Sahāb Tape

What prompted a Google reviewer to designate O’Brien as a purveyor of terrorist content? Well, for one, her channel contains actual al-Qaeda propaganda. But that propaganda is also an important piece of US history: A few years ago, it nearly cost former US Army Private Chelsea Manning a life sentence.

O’Brien’s channel contains portions of a June 2011 video presented by al-Qaeda outlet As-Saḥāb Media featuring Adam Yahiye Gadahn, a US-born al-Qaeda operative in the Arabian Peninsula, who—in earlier jihadi propaganda tapes rebroadcast by US network news—referred to himself as “Azzam the American.” In 2006, Gadahn appeared in an al-Qaeda documentary that features an introduction by Ayman al-Zawahiri, the al-Qaeda co-founder and current leader of the organization, who succeeded Bin Laden in 2011.

In January 2015, Gadahn was killed in Pakistan in a series of US drone strikes, which also claimed the lives of foreign aid workers Giovanni Lo Porto and Warren Weinstein.

O’Brien’s interest in Gadahn has nothing to do with spreading his views on the “Great Satan” or his prophecies of American streets running with blood. The footage she preserved using YouTube’s service, which was also embedded in an off-site analysis, was used by military prosecutors to support criminal offenses at the court martial of Chelsea Manning. The criminal proceedings against Manning offered no contemporaneous public access to the court record; only the work of reporters like O’Brien, who personally attended the trial, is available to the public.

The As-Saḥāb video featuring Gadahn came into play after the US government accused Manning of “aiding the enemy,” a charge that, unlike most derived from the military’s code of justice, can be applied to civilians. And it carries a life sentence.

Manning was accused of aiding Gadahn, legally defined in the court martial as an enemy of the US, because the As-Saḥāb video cites both WikiLeaks and the State Department cables that Manning leaked. An unidentified male narrator in the Gadahn video references, for example, the “revelations of WikiLeaks,” and claims they expose “the subservience of the rulers of the Muslim world for their master America.” The video also includes portions of the infamous “Collateral Murder” tape, which depicts American Apache pilots firing upon a group of men in Baghdad, killing among them two Reuters journalists.

A stipulation in the criminal case reveals that the US government argued Osama bin Laden himself had been in receipt of, and consequently aided by, the intelligence Manning leaked. The evidence to support this, however, is classified—all of it collected during the May 2, 2011, raid on his Abbottabad compound. An analysis conducted by O’Brien, which includes the portions of the As-Saḥāb video she uploaded to YouTube, suggests that Bin Laden may have somehow received a copy of the video while hiding in Pakistan. A digital copy of the tape itself may even have been recovered by the US Navy SEALs that breached his compound during the CIA-led mission that ended in Bin Laden’s death.

The video of Gadahn had already been entered into evidence to support the aiding the enemy charge—but to prevent testimony, which would’ve involved an elaborate set-up to conceal the identity of a witness linked to the Bin Laden evidence, Manning’s defense agreed to stipulate that Bin Laden was in possession of information tied to WikiLeaks. The CIA recovered, for example, a letter in which Bin Laden requests US Department of Defense material released by WikiLeaks from a member of al-Qaeda. In another letter, an al-Qaeda operative attached a number of leaked battlefield reports. The defense further stipulated that Bin Laden was in possession of “Department of State information,” which O’Brien’s analysis suggests is likely the As-Saḥāb tape itself.

Ultimately, the charge didn’t stick. Manning was acquitted of aiding the enemy and convicted instead of “wanton publication,” a charge that, as O’Brien notes, “had never been used before, and is not tied to any existing federal statute or article in the Uniform Code of Military Justice.” The sheer complexity of the case, the minute details of which are sparsely understood by the public, illustrates a need for records such as those catalogued by O’Brien to be maintained, even perhaps by YouTube, in spite of its squeamish attitude toward terrorism-related content.

Undeniably, portions of the Gadahn video uploaded by O’Brien do contain al-Qaeda propaganda—and 75 percent of content removed from YouTube over a one-month period this summer involved “violent extremism”—but they did not contain scenes of graphic violence, other than the Collateral Murder tape, which, again, remains widely available across YouTube. “I personally would not have uploaded material that contained a beheading out of respect for the victim and his or her family,” she told Gizmodo. “That is a very personal choice, because, provided context, such material is certainly in the public interest. Moreover, if victims should die in such a manner, I might feel that I had an ethical or civic responsibility (at a minimum) not to look away.”

“The excerpts contained in all three videos were squarely in the public interest and I handled the material responsibly,” O’Brien said. “I excerpted the portions and uploaded them to YouTube to use in my analysis of the case, because I did not want to post an entire hour and forty five minute terrorist propaganda video.” Moreover, each video included a description that offered context regarding the video’s public relevance—though that may not be visible to viewers on a mobile device.

“The material was used at a court-martial, which not only didn’t provide a contemporaneous public record of its proceedings,” O’Brien said, “but also which had legal precedents with wide ramifications for the public at large.”

Takedown notifications

The first takedown notification from YouTube arrived on July 5 and pertained to a Gadahn video titled, “Portion of As Shahab video dated June 6, 2011 from US v Pfc Manning 2007 U S Baghdad Airstrike Video.” A second notification, citing a separate clip from the 2011 video, arrived on Aug. 8. In both cases, O’Brien was warned that the videos had been flagged, and that, upon review, YouTube had determined they violated user guidelines. In total, three videos were pulled down. In every case, she was “assigned a Community Guidelines strike”; additional strikes, YouTube warned, could lead to her “account being terminated.”

On Sept. 12, O’Brien was informed via email that her YouTube account had been suspended due to “repeated or severe violations of [YouTube] Community Guidelines.” At the same time, Google disabled her Gmail account. She was instructed to sign back in, and warned: “If you don’t take action soon, your account and all of its contents will be scheduled for deletion.” She immediately appealed the decision.

Using an online form provided by Google, O’Brien explained who she was, that the videos were archival and newsworthy, and that they were embedded in posts describing events at Manning’s trial; in no way were they intended to promote al-Qaeda, its ideologies, or “gratuitous violence.” The appeal was immediately rejected in an email signed, “The YouTube Team.”

Based on her experience, O’Brien said the process whereby YouTube notifies users that their videos have been deleted works fine; it’s the appeals process where things seem to fall apart.

It was unclear, for instance, whether the “strikes” her account had received still stood. After discussing the problem with Google and YouTube representatives by phone—a form of redress that wouldn’t normally be available to most users—she regained access to her YouTube account; shortly afterward she received another email saying it was still suspended, an apparent mistake given that her account remains online. She also regained access to her Gmail account and Google Drive, and only one of the three videos taken down remains offline. Bizarrely, it’s not the video containing the helicopter attack or Gadahn speaking directly into the camera; the video still blocked consists primarily of news clips and music-video footage used by As-Saḥāb, with a voiceover discussing the leak of US State Department cables.

“This is more than a usability issue,” she said. “The company failed to articulate its policies and process in a meaningful way.”

Fingerprinting terrorist content

During a phone call, YouTube and Google reps revealed that problems have arisen while trying to meet the demands of governments at war with international terror groups. The service is placing greater demands on citizen journalists and investigative reporters—while also consulting with them—to justify posting videos related to ISIS, al-Qaeda, and other terrorist outfits.

When footage hits the site containing content to which the US government might object, the burden falls on journalists to ensure that the proper context is applied, and to show that the video contains actual educational or documentary value. “Something like uploading al-Qaeda footage, for example, without something in the video itself that makes it really clear that this is reporting, something you’re trying to shed light on, something you’re criticizing, whatever it is, would come down,” the YouTube rep said.

In the coming weeks, the rep said, YouTube will be rolling out new features as part of its takedown system that will specifically request that users add context to videos involving terror content to demonstrate that documentary value. According to the rep, one way this can be done is by using YouTube’s built-in tools that allow users to add captions, cards or endcards to their videos. O’Brien asked whether it would be sufficient to add a caption that reads something like, “This is for archival purposes. This is not for propaganda purposes.”

“Yes, exactly,” the rep replied.

The rep noted that YouTube has been working with other tech companies, such as Twitter, Facebook, and Microsoft, to share material with known ties to terrorist organizations. “But you should not be affected by that,” the rep told O’Brien, adding: “It’s possible that the video you shared includes al-Qaeda propaganda footage that’s in our shared database already.” In other words, it’s possible O’Brien’s videos contain a fingerprint that YouTube’s AI is directly trained to detect, and that that information has been shared with other companies helping in the battle “against radicalization and terrorist propaganda,” she said.
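
As a rough illustration of how such a shared database can catch a re-upload regardless of who posted it or why, here is a minimal sketch. It assumes an exact-match digest (SHA-256 of the file bytes) purely for simplicity; the real systems are reported to use perceptual fingerprints that survive re-encoding and trimming, which this does not.

import hashlib
from pathlib import Path

# Fingerprints contributed by participating companies (placeholder values).
SHARED_FINGERPRINTS = {
    "3f5a0000000000000000000000000000",  # illustrative entry, not a real hash
}

def fingerprint(path: Path) -> str:
    """Return a SHA-256 digest of the video file's bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_terror_content(path: Path) -> bool:
    """True if the upload matches a fingerprint already in the shared database.

    A match only says the footage has been seen before; it says nothing about
    the uploader's intent, which is why journalistic re-uploads can trip it.
    """
    return fingerprint(path) in SHARED_FINGERPRINTS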

For law enforcement and intelligence agencies, acquiring access to information about the fingerprinted videos, or the identities of the people who’ve posted them, still requires a legal request—typically a subpoena. Of course, the more companies that have access to the data, the more opportunities the FBI, for example, has to acquire it.

“YouTube is an important global platform for news and information and we have clear policies that outline what content is acceptable to post,” a Google spokesperson told Gizmodo. “Sometimes graphic material is vital to our understanding of the world, whether it is posted to document wars or revolutions, to expose an injustice, or to ensure local events are seen globally. For all types of content on YouTube, but particularly with graphic content, adding context is important in helping the YouTube team review a video when it is flagged. Adding context within the video, like commentary or text, helps us understand background and intent.”

YouTube hopes to prevent other journalists from being banned like O’Brien was by “whitelisting” their accounts, meaning they will be flagged so that Google’s reviewers won’t mistake them for terrorist-run channels. The Google rep told O’Brien that her channel would be added to the whitelist. Individual videos, however, are still subject to the normal reporting and review procedures, so it’s still possible that some of her videos could be taken down.

“I don’t fault the company for coming to terms with terrorist propaganda on their platforms,” O’Brien said. “But YouTube’s decision tree should have more than two branches. Nuance is not binary. Neither is wisdom. I think it is humorous and frightening that an unidentified human determined that my channel supported terrorist propaganda.”

“Outside of YouTube or Google’s concerns about its public image or public safety, the company has a responsibility to engage its users about its civic and ethical responsibilities,” she added. “It must engage and articulate its policies, and if it can’t do that, then it really has not earned the credibility, nor would it deserve the power that it has to impact society. It also has an ethical responsibility to notify users if there are risks, legal or otherwise, for being identified in their protocols as a purveyor of inappropriate content.”
https://gizmodo.com/journalist-nearl...pos-1815314182





China Makes Chat Group Administrators — i.e. Regular Users — Criminally Liable for Unlawful Messages
Oiwan Lam

New regulations in China will make chat group administrators responsible — and even criminally liable — for messages containing politically sensitive material, rumors and violent or pornographic content.

The regulations also demand that all chat room users in mainland China verify their real identity.

Introduced by the Cyberspace Administration of China on 7 September, the “Regulation on the Management of Internet Chat Group Service” represents a bold policy shift by extending the work of regulating online content beyond government workers and companies to the users themselves. Indeed, chat group administrators are merely users of chat services like WhatsApp, WeChat and QQ who create and manage groups that might chat about anything from childcare to the Communist Party congress.

From 8 October onward, chat group administrators will become a key human resource in China's internet control infrastructure.

Dovetailing with China's new regulation on the management of online comments, the new rules will also force chat service companies (such as WeChat and QQ) to punish users who have not verified their identities or who have violated other terms of use, such as posting scams, rumors or politically sensitive material. Users who violate these rules will have certain account privileges suspended, and have their social credit scores lowered.

The regulations also require that the companies record, monitor and retain users’ chat records for at least six months and notify authorities whenever they spot abusive use of group chats.

The initial regulation, issued by the Cyberspace Administration, stressed that group administrators should be responsible for the contents posted on the chat group. But it did not specify what exactly these responsibilities would be. The Public Security Bureau soon after stepped in, presenting nine types of content that should not be posted on chat groups:

Regulation on the Management of Internet Chat Group Service: Prohibited Content

Effective 8 October, the following types of content will be prohibited in chat groups on China-based messaging platforms:

1. Sensitive political content
2. Rumors
3. Internal documents [of the Chinese Communist Party and government units]
4. Content that is vulgar, pornographic, violent or shows drug-related criminal acts
5. News from Hong Kong and Macau that has not been reported by official media outlets
6. Military information
7. State secrets
8. Videos from anonymous sources that insult police or damage their reputation
9. Other illegal information

Chat group administrators who fail to remove prohibited content from chats can face criminal charges or administrative detention.

In addition to the list, the Public Security Bureau also presented a number of past cases, cited below, in which chat group administrators were punished through criminal prosecution or “political consequences.”

Case one: Insulting police officers

A man from Jieshou county of Anhui province was frustrated by traffic police, who had established a late night checkpoint for drunk driving. In a chat room that he created, he wrote: “Are they nuts? Checking in the rain? [They are] a bunch of assholes who just want money.” As the insulting comments created negative social impact within his circle, the man was detained for five days for picking a quarrel.

Case two: Online petition

On 27 June 2016, several party members from Qianjiang city in Hubei province illegally used a WeChat group to circulate a petition against a construction plan for a pesticide factory. The incident led to a public rally and obstructed public order. Recently, nine members of the chat group received party discipline, five received discipline warnings and 40 had to be re-educated. The administrator of the group received a discipline warning because he did not stop the petition or channel the opinions expressed in the chat group.

Case three: Indecent and obscene articles

A young person in Shenyang city created a 100+ member chat group. One of the members in the group kept sending out messages about a “big hit movie” and asked other group members to pay for access to [what turned out to be] a pornographic video. As the group administrator did not stop the member from selling the videos in the group, the police arrested him under the charge of “distributing obscene material.”

Case four: Gambling activities

A man from Fuxin prefecture of Liaoning province abused the “red envelope” function [a way to send money, as a gift] in several chat groups for gambling purposes between June and August 2015. The court sentenced the man to 2.5 years with a three-year suspension and a fine of RMB 50,000 yuan [approximately USD $8,000].

It is not clear whether these users’ activities were observed by chat service companies, chat group administrators, or some combination of the two.

Following the introduction of China's regulation on “rumors” in 2013, many netizens migrated from open social media platforms (such as Sina Weibo, similar to Twitter) to chat services for sharing information, because these tools allowed them to create closed groups and have semi-private conversations.

But in recent years, multiple netizens have been arrested due to the political nature of their comments on private chat groups. These new regulations seem to codify the practices behind these arrests and suggest that censorship over chat group messages will soon be much more robust.

On Weibo, netizens used dark humor to anticipate how the new regulation might affect group chats:

你好,我想发个红包、加下群…..请填写下申请表格,分管领导签字上级部门盖章

Hi, I want to send a red envelope and join a group… Please fill out an application form, have the responsible leader sign it, and get a stamp of approval from the higher-level department.

以后餐厅都要改成独立餐桌儿,汽车改成一个座儿的,陌生人认识先登记,从此天下太平

In the future, all restaurants should redesign their tables for just one person and vehicles should only have one seat. Strangers should register before getting acquainted. Then there will be peace on earth.

以后每个群都有地下党

In the future, every chat group will have an undercover CCP member stationed in it.

强烈支持党和国家出台的新政！群员违法，群主追责！并希望早日推广为村民违法，村长追责，县民违法，县长追责，市民违法，市长追责。依次类推，违法犯罪就能尽快消灭，共产主义就一定会在我们这一代人实现。

Support the party and new government regulations! Group administrators are responsible for group members’ illegal behavior! I hope very soon that village heads will be responsible for villagers’ illegal behavior, that county heads will be responsible for county residents’ illegal behavior, and that city heads will be responsible for citizens’ illegal behavior… all crimes would then vanish and we can fulfill the dream of communism in our generation.

要不直接断网吧 回到石器时代

Why not just cut the internet cable and return to the stone age?

The new regulations will go into force on 8 October.
https://globalvoices.org/2017/09/13/...wful-messages/





Download This: the New Anonymous App that's Blowing Up with Teens
Karissa Bell

Another anonymous app is at the top of the App Store and it might be because it's figured out anonymous apps' biggest problem: bullying.

Called "tbh," short for "to be honest," the app takes an unconventional approach to anonymity. While it allows friends to anonymously communicate, it only allows users to exchange compliments, which are sent via in-app quizzes.

The app, which is aimed at middle schoolers and high schoolers, connects to your address book so you can find people you know. It serves up a series of "polls" about your friends. The questions change but they are all positive, asking you to choose the "world's best party planner," or who is "too lit to be legit."

The app keeps identities a secret, but users can see some details about who's picked them (e.g. "a girl in the tenth grade"). It's also borrowed some of the addictive dynamics of free-to-play games, though it doesn't use in-app purchases at the moment.

If someone "chooses" you in a poll, you earn "gems," which you can use to unlock more features within the app. You can only complete a set number of polls at a time and when you run out, you need to wait for a timer before you can take on more.

That all may sound gimmicky, but it's proven to be a winning formula with teens. The app, which is currently only available in a handful of states, has been steadily climbing the App Store charts since it launched in August. On Thursday, it reached the top spot, beating out Facebook, Snapchat, Gmail, and the other apps that typically sit at the top of the App Store.

Addicting Candy Crush-like rules aside, some of that success may also be attributed to tbh's emphasis on positivity. There are only positive "polls" so users aren't able to easily bully each other — a problem that's plagued Sarahah and other teen-centric anonymous apps.

Whether that will be enough to make the app stick with image-obsessed teens is another matter. But it's definitely off to a strong start.
http://mashable.com/2017/09/16/tbh-app-download-this/





The Teen-Friendly Chat App that's Beating Facebook and Snapchat in One Major Way
Karissa Bell

It's rare that a small startup is able to challenge the likes of Facebook and Snapchat in a meaningful way.

But Houseparty, a teen-centric video chat app, is managing to do just that. One year since launching publicly (the company spent several months in a stealthy beta), Houseparty has grown to 20 million users who together have participated in more than half a billion video calls (or "parties," to use the app's terminology).

Twenty million may sound like a drop in the bucket compared to Facebook's billions or even Snapchat's 173 million users. But the video chat app, created by the same team who founded the once-hyped live streaming app Meerkat, is beating its bigger rivals in one important way: engagement.

The app's users, 60 percent of whom are under the age of 24, spend an average of 51 minutes a day chatting in the app. To put that in perspective, Snapchat's average user spends 30 minutes a day in the app. Facebook says its users spend an average of 50 minutes per day, but that's across Instagram, Facebook, and Messenger combined.

Facebook has taken notice. The Wall Street Journal reported earlier this year that the social network is working on a Houseparty clone called Bonfire as part of an aggressive bid to quash the competition.

Whether Snapchat is also feeling the heat is less clear. But as we've previously noted, there are some rather striking similarities between Houseparty and Snapchat. Both have a very young and very engaged user base. Both apps eschew the typical trappings of a big social network, saying their service is meant more for "close friends," not the whole world.

Both companies attribute this success to their ability to connect their users with their "real" friends so they can be their "authentic" selves.

"This feels like a new way for them to connect where they don't have to be this polished, filtered version of themselves they put out for public consumption," Houseparty cofounder Sima Sistani says of the app's allure for younger users.

Spend a few minutes on the app and it's easy to see why it's so appealing to teens. It's filled with emoji and other "in" jokes that might not make sense to the olds.

There's a "ghosting" mode, which lets you sign in without notifying your friends, allowing you to stealthily choose who you want to talk to. You can "pass a note" to surreptitiously text message someone in your current video chat without letting the other participants know. The app also just added a groups feature so you can designate specific friends you chat with frequently.

That may sound like a whirlwind of high-stakes teen social drama, but Sistani says the app is building off social dynamics that have been in place long before there were ever smartphones.

"Those social behaviors haven't changed in half a century despite major changes in technology," she says, noting that things like call-waiting, three-way calling, and voicemail all subtly changed how young people communicate.

Of course, even with the traction Houseparty has, battling the likes of Facebook and Snapchat won't be easy, especially once one or both companies start copying core features. But Houseparty does have the advantage of a 20 million user-strong head start. Provided it can sustain the current engagement, it's not going to be easy for the competition to ignore.
http://mashable.com/2017/09/07/house...book-snapchat/





Amazon Is Hungry and It’s Coming for Your Cable Channels
Claire Atkinson

Amazon already accounts for about a quarter of all online sales in the United States. Now the company is holding talks to supersize its video-channel business, not just in the U.S. but around the globe.

In the past few weeks, Amazon has started talks to buy scores of small television channels, several major program providers confirmed to NBC News. A representative for Amazon declined to comment, but hinted there will be much to say in the coming weeks about its efforts in online video.

Currently, subscribers to Amazon Prime get TV, movies and music, as well as free shipping on online purchases. They can also pay extra for premium channels such as HBO and Showtime, along with a host of niche-interest services on topics such as health or horror.

As traditional pay-TV providers scale down their offerings into cheaper so-called skinny bundles, Amazon is looking to scoop up smaller TV channels with minimal distribution in order to build itself into a video destination for every imaginable niche, with a particular focus on millennial audiences. Many networks have channels like these, including Turner Broadcasting’s Adult Swim and Boomerang, or Viacom’s VH1 and CMT.

“They are doubling down on the channels business,” said one industry programmer who asked not to be identified because they are involved in the talks. “They’re interested in doing deals with smaller indie networks where they can get rights to channels that are not handcuffed into [traditional distribution] bundles and they’re interested in offering them individually. Eventually they may bundle them together.”

Customers can already buy scores of television services on Amazon, including Viacom’s Comedy Central Stand-up for $3.99, or Britbox, which offers British shows for $6.99. Amazon splits the revenue with channel owners.

Industry insiders say that Greg Hart, vice president of Amazon Video, is spearheading the current talks, aimed at creating a global platform for new online TV channels.

Tom Rogers, the former chief executive of TiVo and executive chairman of the sports app WinView, who is familiar with Amazon’s business, said Amazon may even want to offer the paid add-on channels as part of the Prime Video offering to boost their audience.

That possibility, he said, “could have more impact on the TV industry than any other development down the road.”

Analysts believe Prime has some 66 million subscribers, who pay $99 a year for the service.

Big tech companies are getting increasingly competitive in the online video market. Just last week, The Wall Street Journal reported that Facebook intends to spend $1 billion on content to increase its online video audience. In late August, reports revealed Apple had similarly earmarked $1 billion for content.

Amazon already spends around $4 billion a year to fuel its On-Demand subscription video offering. Its chief rival in that endeavor is Netflix, which spends more than $6 billion. But to put those numbers in context, Disney spent $12.4 billion on programming last year.

“It’s illustrative of the fact that Amazon, Apple and everyone who wants to be a player in video is going to spend very large numbers on rights,” said Brian Wieser, a senior analyst covering advertising at Pivotal Research in New York. The arms race for top-tier talent is intensifying, as is the demand for hits.

This month, Amazon’s global video chief, Roy Price, told Variety that the company’s chief executive, Jeff Bezos, wants him to find the next “Game of Thrones,” one of the biggest shows in HBO’s history. The series garnered 16.7 million viewers for its season finale last month.

Price told Variety that the company was ending its development of certain smaller series in pursuit of big-event TV.

Amazon has hired Robert Kirkman, one of the creators of AMC’s “The Walking Dead,” with the aim of bringing fans of the horror genre to the service. The show is one of the biggest entertainment series on TV.

Even with all the Academy Award buzz surrounding Amazon’s distribution of “Manchester by the Sea,” a multiple Oscar winner, the company has had some trouble replicating its critically acclaimed series “Transparent,” which began in 2014. Amazon has a few Oscar hopefuls being released later this year, including a new Woody Allen project called “Wonder Wheel,” out in December.

Of course, Netflix has been signing up big names, too, including ABC drama producer Shonda Rhimes, much to the chagrin of Disney.

“They want to find new things they can offer,” said Mark Mahaney, an internet sector analyst at RBC Capital, “because the more services you have, the more you engender loyalty and the more you can monetize the user base.”

Amazon isn’t first to the party in terms of offering online TV channels. Dish, the satellite service, has one called Sling, as does Sony PlayStation. Hulu (owned by Comcast — owner of NBCUniversal — Disney, 21st Century Fox and Time Warner) just launched a digital channel bundle, and AT&T’s DirecTV and Google’s YouTube also have rival packages. Their hope is to win audience data and sell digital advertising.

Each aims to challenge conventional pay-TV packages, which, incidentally, offer subscribers free digital versions of what they can already get on TV.

While there are plenty of Silicon Valley pretenders to the video crown, Amazon is unique, largely because of its vast trove of purchasing data. Amazon could potentially track the ads viewed on its TV channels, and then link them to online purchases. Earlier this year, Amazon partnered with the drinks company Diageo to offer 20-minute “shoppable films.”

Amazon’s own advertising business is still relatively tiny, worth around $2 billion, according to estimates, but around 55 percent of product searches start on Amazon, according to a 2016 survey by Bloomreach, a software company.

Amazon, Apple, Google and Facebook like the idea of owning the online video experience of consumers — and stealing some of the $70 billion of ad revenue that is spent on TV annually.

It’s not clear that they can amass the same audiences as the TV networks, but Amazon’s attempts at building its audience are already evident. On Sept. 28, Amazon will begin its first major sports deal, streaming an NFL game — the Green Bay Packers versus the Chicago Bears — to its Prime subscribers. Amazon has a 10-game NFL package that it shares with other broadcasters, including NBC and CBS, having paid $50 million to steal away the digital football package from Twitter.

Amazon will also use its Alexa voice assistant to promote its football streaming, and in turn the NFL games will be used to promote its own online video shows.
https://www.nbcnews.com/tech/interne...annels-n801781





NFL TV Ratings Slide Worries Wall Street

CBS, ESPN, Fox and NBC will generate about $2.5 billion in NFL advertising revenue this season, but a 10 percent shortfall could translate to a $200 million cut in earnings, an analyst estimates.
Paul Bond , Georg Szalai

The NFL's ratings woes continued in Week 2, and Wall Street is taking notice, given that there are fewer excuses for falling viewership than there were a year ago, when Hillary Clinton and Donald Trump were distracting TV-watching Americans.

While NFL games remain some of the most-watched content on television, ratings slid 12 percent in the NFL's opening weekend, with many blaming Hurricane Irma. But without dramatic weather, the second weekend was off 15 percent year-over-year. This comes after an 8 percent ratings slump last season.

Guggenheim Securities analyst Michael Morris said he had been optimistic heading into the new season because audiences would appreciate some changes, including fewer commercial breaks and allowing players to creatively celebrate touchdowns. Now, though, he says "early results do not support this optimism."

Jefferies analyst John Janedis figures CBS, ESPN, Fox and NBC will generate about $2.5 billion in NFL advertising revenue this season, but a 10 percent shortfall could translate to a $200 million cut in earnings.

Since the NFL season opened on Sept. 7, shares of NBC parent Comcast are off 9 percent, ESPN parent Disney has seen its stock drop 3 percent and shares of CBS are down 5 percent. Only shares of 21st Century Fox have risen in that time frame, up 2 percent.

"Continued declines in NFL ratings again this season will likely place further downward pressure on media stocks," said Morris. He added, in fact, that "the NFL is an indicator of overall primetime programming ratings performance."

Pundits, meanwhile, continue to opine on the reason for the fall, with some trying mightily to dismiss a J.D. Power survey in July that put most of the blame on players who protest the National Anthem, the most prominent of them being Colin Kaepernick, who does not currently play for an NFL team.

That survey indicated 30 percent of the viewers who watched less football in 2016 than they did the season prior said they did so because they were offended by players protesting the anthem.

Broadcasters know it's a storyline that has continued into the current season and the networks usually point their cameras at the players who kneel during the anthem.

The second-most cited reason for tuning in less was "game delays, including penalty flags" (24 percent), followed by a three-way tie at 18 percent among "off-field image problems with domestic violence," "excessive commercials and advertising" and "presidential election coverage."

The league and broadcasters have cut back on ads but domestic violence is front and center in 2017 again, this time courtesy of Dallas Cowboys superstar running back Ezekiel Elliott. The league alleges Elliott physically abused a former girlfriend and is trying to suspend him for six games while representatives for Elliott are duking it out with the NFL in courtrooms, generating headlines every step of the way.

Way down on the list of survey results is the bugaboo that has been distressing all Hollywood lately: "cord cutting, i.e. canceling my subscriptions to cable or satellite TV." That answer was cited 6 percent of the time on the J.D. Power survey.

Consumers without a cable or satellite package can still view NFL games via streaming on online services like CBS All Access, Sling TV, Amazon Prime and the like, which, of course, siphons viewers away from traditional outlets.

"When the dust settles, it looks like the (virtual pay-TV providers) may have gained more traction than consensus expectations, even with modest marketing spend," Janedis said.

It's not all doom and gloom of course. Last year, 31 of the Top 100 programs on TV were NFL games, and Michael Nathanson of MoffettNathanson predicted last week that NBC's Sunday Night Football and ESPN's Monday Night Football would grow their audiences this year over last year based on better matchups.

Fox, meanwhile, says viewership for its Cowboys vs. Denver Broncos game on Sunday, which was delayed due to lightning, was up 18 percent over a game it broadcast a week before. And 12.3 million viewers ages 18-49 watched the Detroit Lions beat the New York Giants on Monday, up from 11.4 million who watched Monday Night Football on ESPN a week prior.

Plus, NFL Network says its Thursday Night Football broadcast of the Houston Texans beating the Cincinnati Bengals drew 8.1 million viewers, up 32 percent over the average of four Thursday night games on the network last year. The exclusive presentation of that contest was also the most-streamed regular season game ever across NFL Mobile by Verizon and the NFL's digital properties.

Also working in the favor of the NFL and its broadcast partners is an explosion in the popularity of fantasy sports. According to the Fantasy Sports Trade Association, 53.5 million people participate in the U.S., with 66 percent of them playing fantasy football, and they watch NFL games and visit websites like ESPN.com and CBSSports.com far more frequently than do those who do not play fantasy.
http://www.hollywoodreporter.com/new...street-1041187





Box Office

3-Day Weekend Box Office Estimates - Source: comScore

It - $60M
American Assassin - $14.8M
mother! - $7.5M
Home Again - $5.3M
The Hitman's Bodyguard - $3.6M
Annabelle: Creation - $2.6M
Wind River - $2.55M
Leap! - $2.1M
Spider-Man: Homecoming - $1.9M
Dunkirk - $1.3M





Linda Hamilton Set to Return to 'Terminator' Franchise

With James Cameron and Arnold Schwarzenegger involved, it's a reunion more than 25 years in the making.
Borys Kit

She’ll be back.

After waving hasta la vista, baby, more than 25 years ago, Linda Hamilton is returning to the world of Terminator, reuniting with James Cameron, the creator of the sci-fi franchise, for the new installment being made by Skydance and Paramount.

Cameron made the announcement at a private event celebrating the storied franchise, saying, "As meaningful as she was to gender and action stars everywhere back then, it’s going to make a huge statement to have that seasoned warrior that she’s become return."

With Hamilton’s return, Cameron hopes to once again make a statement on gender roles in action movies.

"There are 50-year-old, 60-year-old guys out there killing bad guys,” he said, referring to aging male actors still anchoring movies, “but there isn’t an example of that for women.”

Tim Miller, the filmmaker who made his breakout feature debut with Deadpool, is directing the sequel, which is returning to its roots by having the involvement of Cameron for the first time since 1991’s Terminator 2: Judgment Day.

Cameron is producing along with Skydance. And the new film, which will be distributed by Paramount with Fox handling it internationally, is based on a story crafted by Cameron. Cameron and Miller created a writers room to hammer out what is planned to be a trilogy that can stand as single movies or form an overarching story. That room included David Goyer, whose credits include the Blade and Christopher Nolan’s Batman movies; Charles Eglee, who created Dark Angel with Cameron; Josh Friedman, who created the Terminator TV spinoff The Sarah Connor Chronicles; and Justin Rhodes, a frequent Goyer collaborator.

Arnold Schwarzenegger, who has starred as both a bad guy and a good guy portraying the cool killer robot sent from the future, is already set to return, and with Cameron and now Hamilton on board, the new Terminator film will once again have its original creative team.

Hamilton starred in the first film, The Terminator, released in 1984 as a low-budget genre play, playing one of the silver screen's strongest female heroines, Sarah Connor. Connor was a waitress being hunted down by an unstoppable killing machine, played by Schwarzenegger, sent from the future. She learned that in the future machines would take over, and that she was the mother of the leader of the human resistance.

The actress returned to the character in Cameron’s 1991 sequel, which was a summer blockbuster that pushed the visual effects envelope and set box office records for that time. This time Connor, buffed and in prime fighting form, was a hard-edged, take-no-prisoners warrior who fought like a bear to protect her son.

Both Hamilton and Cameron, who were married to each other in the late 1990s, sat out the installments that followed in 2003, 2009 and 2015.

Story details are, of course, being kept on a secure hard drive at Cyberdyne Systems, but Cameron and Miller are treating the new movie as a direct sequel to Cameron's Judgment Day. And the themes of the potential evils of technology will once again be at the fore.

But the new movie will also be seen as a passing of the baton to a new generation of characters.

"We’re starting a search for an 18-something woman to be the new centerpiece of the new story," Cameron said. "We still fold time. We will have characters from the future and the present. There will be mostly new characters, but we'll have Arnold and Linda’s characters to anchor it."

Hamilton is represented by Innovative Artists.
http://www.hollywoodreporter.com/hea...nchise-1040948





More are Paying to Stream Music, But YouTube Still Holds the Value Gap

Demand's there; compensating artists is another issue
Andrew Orlowski

With Google's user-generated content loophole firmly in lawmakers' sights, global music trade body IFPI has published new research looking at demand for music streaming.

The research confirms YouTube's pre-eminence as the world's de facto jukebox. 46 per cent of on-demand music streaming is from Google's video website. 75 per cent of internet users use video streaming to hear music.

(Survey base: all respondents who used a smartphone to listen to music in the past six months, 11,776 in total, across the 13 countries surveyed: US, Canada, GB, France, Germany, Spain, Italy, Sweden, Australia, Japan, South Korea, Brazil and Mexico.)

The paid-for picture is bullish: 50 per cent of internet users have paid for licensed music in the last six months, in one form or another, of which 53 per cent are 13- to 15-year-olds. Audio streaming is split between 39 per cent who stream for free and 29 per cent who pay.

Although mobile is massive, with 91 per cent using phones to access music in Mexico, the UK is a laggard at 59 per cent.

So what's the problem? European policy makers have become convinced by the "value gap" argument: compensation doesn't reflect usage. Google finds itself with a unique advantage here, thanks to YouTube's "user-generated content" exception, as we explained last year.

It's really twofold. Firstly, Google's supply chain of uploaders might not deliver quality – you never know what you'll get on YouTube – but despite notice-and-takedown orders (dubbed "notice and shakedown" by musicians), the supply continues uninterrupted. Secondly, Google's promotion of advertising over paid subscriptions depresses the market price for music, they argue.

In its defence, Google argues that ad-supported free streaming reaches demographics that the music industry found hard to monetise anyway, particularly children. The IFPI found that of audio streamers aged 13-15, 37 per cent paid and 62 per cent listened for free. Of the paid group, 33 per cent paid for their own subscription (versus 63 per cent for 16- to 64-year-olds) while 36 per cent were members of a family subscription plan.

The data has crumbs for both sides – music demand remains incredibly strong, and teenagers are increasingly paying.

Proposals to narrow the value gap by plugging the UGC loophole are under discussion as part of tweaks to the European Copyright framework. Internet companies without a music licence would have to take more care over what they serve and filter uploads. Services with a licence wouldn't have to. It's arguable whether this plugs the YouTube loophole at all, but it received strong support from UK digital and culture minister Matt Hancock last week.

"We are supporting further copyright reform, to support rights holders and help close the value gap. Where value is created online, it must be appropriately rewarded," Hancock said.

The European Commission has said the proposed filtering obligation "does not alter the provisions of Directive 2000/31/EC, nor does it provide a new interpretation of Article 3 of Directive 2001/29/EC (communication to the public)", but it's likely to be amended to protect ISPs later this year.

The survey polled users in 13 countries worldwide. You can find more here (PDF)
https://www.theregister.co.uk/2017/0...ing_value_gap/





HTC is Halting Trade of its Shares in Anticipation of Expected Takeover

All eyes on Google
Vlad Savov

HTC, one of Taiwan’s premier tech brands and a true pioneer in the development of the Android hardware ecosystem, announced today that trading of its shares will be halted tomorrow in anticipation of a “major announcement,” as first reported by Bloomberg’s Tim Culpan. Earlier this month, the company was rumored to be in the final stages of negotiating a takeover with Google, and today’s news appears to be setting the stage for that buyout becoming official. Or it could be some anonymous asset-holding company buying up what’s left of HTC, but the exciting scenario is definitely the one that involves Google.

The official HTC response to the reported Google negotiations was issued today in a boilerplate statement of “HTC does not comment on market rumor or speculation.” But the facts of HTC’s situation speak for themselves: the company has been operating at a loss for well over a year and, in spite of the excellence of its latest U11 flagship, wasn’t looking likely to survive much longer without outside assistance.

JUST IN:
HTC just announced it's shares will halt trading tomorrow (Thur) pending a major announcement.
— Tim Culpan (@tculpan) September 20, 2017

Google and HTC already have a close working relationship, having collaborated on the Google Pixel and Pixel XL smartphones of last year. The latest rumors point to HTC also producing the 2017 Pixel, though LG is expected to take over responsibility for building the second-gen Pixel XL. In any case, acquiring HTC is almost a no-brainer for a Google that is intent on developing and expanding its own hardware division. Google previously owned Motorola for a brief period of time and seemed intent on the same goal, but that plan ultimately unravelled. What has happened since then is that Google re-hired the Motorola chief it once had, Rick Osterloh, and founded a separate hardware team under his stewardship. Claude Zellweger, the one-time chief designer of HTC Vive, is also now at Google, working on that company’s Daydream virtual reality system.

It’s not immediately obvious what, if anything, Google would be acquiring from HTC. It could be just the smartphone business or just the Vive VR division, with a total takeover of the entire company presently being considered the less likely scenario. It’s also peculiar that HTC would give advance notice of halting trading — these moves are usually done immediately and designed to prevent shareholders from being freaked out by unfavorable news and rushing to sell off their stock. Is HTC foreshadowing unsavoury news for its stockholders? The most damaging thing for them would probably be the loss of the Vive VR unit, which has the greatest potential for growth.

Putting together a history of collaboration, similar goals in promoting VR and advancing smartphone design, and the favorable price of HTC’s current shares makes an HTC buyout the logical move for Google. Of course, the thing that spoiled the Google-Motorola relationship — namely, Samsung’s objection to Google invading its territory — could still pose an issue, though if Google’s going to proceed with making Pixel phones, it’s of only academic importance whether it owns the manufacturing company or not.

We’ll have to wait and see the exact details of HTC’s major announcement, which should coincide with the stop in share trading tomorrow.
https://www.theverge.com/2017/9/20/1...akeover-report





Google Is Buying HTC’s Smartphone Expertise for $1.1 Billion
Daisuke Wakabayashi

Google announced late Wednesday night that it is spending $1.1 billion to hire a team of engineers from the smartphone business of the struggling Taiwanese manufacturer HTC in a bid to bring more hardware expertise to its own mobile technology operations.

HTC said many of its estimated 2,000 employees affected by the deal were already working with the search giant on smartphones. Google leaned on HTC to manufacture its first Pixel smartphone, which was released last year, and is working with the company to produce the next version of the phone, which is expected to be announced on Oct. 4.

Bringing on the team from HTC is a sign that Google is doubling down on plans to produce its own hardware. Company executives have said it is important to tightly couple its artificial intelligence software, like the voice-controlled Google Assistant, with a range of devices.

Apple has followed a similar strategy for years, and that has provided the iPhone maker with an easier path when adding new features, such as augmented reality functions, since it designs nearly all of the internal parts of its phone.

Rick Osterloh, Google’s senior vice president in charge of hardware, said in a blog post that the HTC deal was a continuation of “our big bet on hardware.” Google also offers a smart speaker, the Google Home, and the Chromecast streaming video device.

The two sides did not disclose how many engineers and other key employees would head to Google as part of the deal. But Peter Shen, HTC’s chief financial officer, said the remaining company would still have more than 2,000 research and design staffers, down from about 4,000.

As part of the agreement, Google will also secure a nonexclusive licensing deal for some of HTC’s intellectual property.

HTC said it will continue to make its own smartphones — including a new premium model — although it plans to streamline its handset portfolio. By acquiring the HTC engineering group that had already been working with the company, Google secured hardware talent without taking on expensive assets, like manufacturing facilities.

The deal marks a reversal for Google. The company abandoned its plans to own a smartphone manufacturer when it sold Motorola to China’s Lenovo Group for $2.9 billion in 2014, less than three years after acquiring the handset business for $12.5 billion.

But Google, which makes the Android operating system for smartphones, retained many of Motorola’s patents — an important asset to fend off potential intellectual property lawsuits from Apple.

As the Android software world has become dominated by Samsung Electronics, Google has wanted more say in the manufacturing of the phones that use its operating system. Currently, most of Google’s smartphone software runs on devices manufactured by companies like Samsung and LG Electronics, and Google has only a limited say in what those companies produce.

With the introduction of the first Pixel smartphone last year, Google said it was creating the smartphone it had always intended to build. Google took control of the entire development process of the device, from the industrial design to the procurement of components. The Pixel received positive reviews, but it hasn’t made much of a dent in the smartphone market.

HTC was an early leader in smartphones. But it has struggled against more popular competitors while trying to fend off pressure from low-end Chinese manufacturers. Its decline is another indication of the challenges facing many smartphone manufacturers, who are struggling to make profits while competing against Apple and Samsung.

Operating losses piled up at HTC in recent years, forcing it to slash research and marketing spending, exacerbating the challenge of keeping pace with the bigger companies. HTC said it will continue with its virtual-reality business centered around its Vive VR headsets.

Google and HTC have a long history of working together. HTC sold the first smartphone to run Android software, in 2008, and it produced Google’s first-generation Nexus smartphone — the predecessor to the Pixel.

The transaction is scheduled to close, pending regulatory approval, in early 2018.

Carolyn Zhang in Shanghai contributed research.

Follow Daisuke Wakabayashi on Twitter @daiwaka
https://www.nytimes.com/2017/09/20/t...artphones.html





T-Mobile and Sprint are in Active Talks About a Merger

While T-Mobile CEO John Legere is expected to lead any combination that results from a merger, Softbank's Masayoshi Son has made it clear he wants a say in how the company is run.
David Faber

Both companies and their parents, Deutsche Telekom and Softbank, have been in frequent conversations about a stock-for-stock merger in which T-Mobile parent Deutsche Telekom would emerge as the majority owner.

People close to the situation stress that negotiators are still weeks away from finalizing a deal and caution that an agreement is not assured. The two sides have not yet set an exchange ratio, but are currently engaged in talks to hammer out a term sheet.

The companies declined to comment on the report.

T-Mobile and Sprint have had a seemingly endless dalliance over the years since Softbank took control of Sprint, pushed by the prospect of billions of dollars in cost synergies that a merger would bring. The last time the two companies held meaningful talks earlier this year, Softbank's Masayoshi Son indicated a willingness to sell Sprint to T-Mobile.

This time, given the all-stock structure being contemplated, Softbank would emerge as a large minority holder in any combination. While T-Mobile CEO John Legere is expected to lead the combined company, Son has made it clear he wants a say in how it is run. That desire adds another layer of complexity to an already difficult transaction.

T-Mobile has not begun due diligence on Sprint, yet another step that could change current price expectations or the willingness to move forward.

The biggest issue is whether any merger between the No. 3 and No. 4 wireless carriers in the nation would be approved by antitrust regulators. The risk of rejection by the Department of Justice will play an important role in the final decision made by both sides as to whether they will proceed with a deal.

Given Softbank's high level of engagement on a Sprint-T-Mobile deal, its quixotic campaign to try to buy Charter Communications has slowed considerably. The effort is on hold. CNBC has reported it involved the creation of a new company infused with vast amounts of equity and debt to buy Charter at a premium and the 17 percent of Sprint that Softbank does not own. Dutch telecom company Altice has been actively soliciting funds to mount its own bid for Charter should Softbank make a move.
https://www.cnbc.com/2017/09/19/t-mo...-a-merger.html





A Sprint, T-Mobile Merger Remains a Very Bad Deal for Consumers
Karl Bode

The long-rumored Sprint, T-Mobile merger is finally getting closer to fruition. Reuters notes that the two companies have had a "breakthrough" in negotiations on a mega-merger, and are "close to agreeing tentative terms" on a deal that could be completed by the end of October. Regulators previously blocked an earlier attempt at this same transaction because it would have seriously harmed competition in the wireless space, but most analysts believe the Trump administration and FCC boss Ajit Pai will happily approve the deal.

That could prove problematic for not only wireless prices, but the recent resurgence in unlimited data plans.

While wireless carriers still engage in theatrical non-price competition more often than not, the government's decision to block AT&T's acquisition of T-Mobile several years ago helped spur an unprecedented period of competition in wireless (something large ISPs and their policy armies like to ignore). The end result was a brasher and more competitive T-Mobile, which led the way on a wave of improvements in the sector, culminating most recently in the return of simpler, easier unlimited data plans.

The government's decision to block Sprint from acquiring T-Mobile helped keep that competition intact, something large ISPs and their policy folk would similarly like you to forget. As a result, T-Mobile has added more customers per quarter than any other wireless carrier for several years running, and the resulting competition put an end to numerous nasty industry tactics, from overcharging for international roaming to obnoxious fees and long-term contracts.

And while the new, combined company will likely still be run by current popular T-Mobile CEO John Legere, the very act of eliminating one of only four major players in the wireless market will indisputably reduce the incentive to more seriously compete on price, and could help reverse the progress the sector has seen in recent years. It's well within reason that this reduced competition could also bring back metered plans and put an end to unlimited data.

Meanwhile, Sprint's balance sheet and network performance have notably improved over the last few years courtesy of deep-pocketed Softbank. The company also has other options (being bought by Charter, Dish or Comcast) that wouldn't involve eliminating a major wireless competitor, making the argument that this deal is essential to keep Sprint alive (one you'll see floated a lot in the coming months) rather flimsy. Especially when the net result will again be less competition, and less incentive than ever to seriously compete on price, network quality, and customer service.

Of course should the final details of the deal be hashed out, prepare to be inundated with an endless flood of "non profit" and other industry-backed editorials insisting that killing one of just four major wireless competitors in the market will be of immeasurable benefit to everyone. That ignores decades of history, which repeatedly makes clear that these types of deals always benefit just one party: the companies involved. The blind adoration of mindless M&As in the telecom space is the primary reason we all get to enjoy what passes for customer service from the likes of Charter or Comcast.

Given that Legere has spent the last few years convincing the public he's a massive consumer advocate (despite the company's failure to support net neutrality and that time Legere attacked the EFF), watching him try and sell this dog of a deal to Millennials should prove entertaining.
https://www.dslreports.com/shownews/...nsumers-140382





T-Mobile May Increase Deprioritization Threshold to 50GB this Week
Alex Wagner

After raising its deprioritization threshold to 32GB back in May, it looks like T-Mobile is set to bump it up again this week.

On September 20th, T-Mobile will increase its “Fair Usage threshold” from 32GB to 50GB, according to a TmoNews source. The folks at Android Central received a similar tip today.

It’s said that this 50GB threshold won’t change every quarter and is no longer tied to a specific percentage of data users. As with the current 32GB threshold, customers who exceed the new 50GB deprioritization threshold in a single month may experience reduced speeds in areas where the network is congested.

To compare, Sprint’s deprioritization threshold is currently 23GB, while AT&T and Verizon’s are both 22GB.
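
As a purely illustrative aside, the mechanism described above boils down to a simple rule: a line only becomes a candidate for slower speeds once it crosses its carrier's monthly threshold, and even then only on congested cell sites. The sketch below expresses that rule in TypeScript; the thresholds are the figures reported here, while the function and field names are hypothetical and do not reflect any carrier's actual network-management code.

```typescript
// Carrier deprioritization thresholds in GB per billing month, as reported above.
// Everything else here is a hypothetical illustration, not real carrier code.
const DEPRIORITIZATION_THRESHOLD_GB: Record<string, number> = {
  "T-Mobile": 50, // reportedly rising from 32GB on September 20th
  "Sprint": 23,
  "AT&T": 22,
  "Verizon": 22,
};

interface LineUsage {
  carrier: string;
  usedThisCycleGB: number; // data consumed so far in the current billing month
}

// A line may be deprioritized only if it has crossed its carrier's threshold
// AND the cell site it is currently using is congested.
function mayBeDeprioritized(line: LineUsage, cellCongested: boolean): boolean {
  const threshold = DEPRIORITIZATION_THRESHOLD_GB[line.carrier] ?? Infinity;
  return line.usedThisCycleGB > threshold && cellCongested;
}

console.log(mayBeDeprioritized({ carrier: "Sprint", usedThisCycleGB: 30 }, true));   // true
console.log(mayBeDeprioritized({ carrier: "T-Mobile", usedThisCycleGB: 30 }, true)); // false
```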

T-Mo regularly touts itself as having “America’s best unlimited network”, and while this isn’t true unlimited, it’d still be good to see T-Mobile once again increase its deprioritization threshold and widen the gap between itself and its competition. Some customers will still be affected by the new threshold, but the change would give T-Mo subscribers much more data each month before their usage is deprioritized.
http://www.tmonews.com/2017/09/t-mob...hreshold-50gb/





Verizon Backtracks—But Only Slightly—in Plan to Kick Customers Off Network

Rural users with no other options can switch plans but can’t get unlimited data.
Jon Brodkin

Verizon Wireless is giving a reprieve to some rural customers who are scheduled to be booted off their service plans, but only in cases when customers have no other options for cellular service.

Verizon recently notified 8,500 customers in 13 states that they will be disconnected on October 17 because they used roaming data on another network. But these customers weren't doing anything wrong—they are being served by rural networks that were set up for the purpose of extending Verizon's reach into rural areas.

As Verizon explained in 2015, the company set up its LTE in Rural America (LRA) program to provide technical support and resources to 21 rural wireless carriers. That support would help the carriers build 4G networks. Verizon benefited by being able to reach more customers in sparsely populated areas. Customers with these plans don't even see roaming indicators on their phones, as it appears that they're on the Verizon network.

But now Verizon is kicking customers off the network in cases when Verizon's roaming costs exceed what customers pay Verizon. Customers are being disconnected for using just a few gigabytes a month, as we reported yesterday.

Today, Verizon said it is extending the deadline to switch providers to December 1. The company is also letting some customers stay on the network—although they must switch to a new service plan.

"If there is no alternative provider in your area, you can switch to the S (2GB), M (4GB), 5GB single-line, or L (8GB) Verizon plan, but you must do so by December 1," Verizon said in a statement released today. These plans range from $35 to $70 a month, plus $20 "line fees" for each line. The 8,500 customers who received disconnection letters have a total of 19,000 lines.

Verizon sells unlimited plans in most of the country but said only those limited options would be available to these customers.

Verizon also reiterated its promise that first responders will be able to keep their Verizon service even though some public safety officials received disconnection notices.

"We have become aware of a very small number of affected customers who may be using their personal phones in their roles as first responders and another small group who may not have another option for wireless service," Verizon said. "After listening to these folks, we are committed to resolving these issues in the best interest of the customers and their communities. We're committed to ensuring first responders in these areas keep their Verizon service."

What counts as an alternative provider?

For customers who have no "alternative provider," there are still unanswered questions. For example, what counts as an alternative provider, and how can customers prove that Verizon is their only option?

When asked these questions, Verizon told Ars:

We are updating our systems to allow customers without other options to stay with Verizon. Once that's complete, which we expect to happen next week, we will contact each customer via SMS with more information about the process. Following that notification, they will need to contact us to ensure service continues after December 1, 2017.

But there may be customers who technically have another option but not one that meets their needs. Coverage in rural areas is often sparse, and the Verizon partnership with rural carriers offered a good deal. The combination of Verizon's mostly nationwide network and the roaming agreements with small carriers in Verizon's LRA program allowed rural customers to be covered both at home and when they travel across the country.

We asked Verizon what would happen if a customer's only other option is a local carrier with a limited network, which could force customers to pay huge roaming fees when they travel. We haven't gotten an answer.

Customers might be able to buy service from the same networks that they were roaming on. But in at least one case that we described yesterday, a network called Wireless Partners in Maine, the local carrier that provides roaming for Verizon does not sell cellular service to consumers directly. "Verizon is their only customer on this network," said Sarah Craighead Dedmon, editor of the Machias Valley News Observer in Machias, Maine.

Wireless Partners said that Verizon's changes are not good enough.

“Verizon’s announcement that it will be delaying terminations until December 1st is a step in the right direction, but leaves important questions unanswered," a Wireless Partners spokesperson said in a statement provided to Ars. "Will Verizon lift the prohibition on new customers? This is critical to the residents in our territory. Will current customers that receive a future termination letter also have the right to purchase an alternative plan with Verizon? What defines an 'alternative provider?' Are there ways current customers can prevent a future termination? How do public safety professionals retain their service? Why does Verizon continue to represent that these networks are part of the Verizon network on their coverage maps and when they market plans to prospective customers, but then claim they are roaming when they kick them off?"

More customers could lose service in the future

In other potentially bad news, Verizon said that the 8,500 customers who received disconnection letters may not be the last to face that hardship:

We will continue to regularly review the viability of accounts of customers who live outside of the Verizon network. Supporting these roaming customers can often be economically challenging, especially supporting those on plans with unlimited data or other high data plans. However, we are continuing to look for ways to support existing roaming customers with LTE service.

Verizon said the changes announced today will ensure that "we're there for those who need us."

"We have a long history of serving rural markets and care about you, your friends, and families in these communities," Verizon said.

UPDATE: US Sen. Jon Tester (D-Mont.) announced (apparently incorrectly) that Verizon will not terminate the service of any customers in his state. "Verizon responded to Tester's demands today and announced that the company will continue to serve Montanans and will not terminate the service of rural customers," Tester's announcement said. "Tester received his assurances from Verizon earlier today that no Montanans will be involuntarily removed from their contract."

But when contacted by Ars, a Verizon spokesperson said that is "not accurate."
https://arstechnica.com/information-...ative-carrier/





The Debate Over Neighborhood Zoning Could Hold Up Fast 5G Wireless For Years To Come

With 300,000 antennas to install in three years, wireless companies, cities, and the feds are bound to clash over where to put them.
Sean Captain

Two bureaucrats tangling over the intricacies of wireless networks may not seem like the stuff of headlines, but this week’s debate between two FCC commissioners could shape the future of how we use our smartphones for decades to come.

Mignon Clyburn and Michael O’Rielly sat awkwardly next to each other on stage at Mobile World Congress Americas, the wireless industry’s big new bash, in San Francisco this week. They were part of a panel discussion along with new Trump-appointed commissioner Brendan Carr, as well as Meredith Attwell Baker, who heads the CTIA wireless trade association that put on the show. The seating arrangement—Clyburn to the left and O’Rielly to the right—fit their roles as vocal proponents of the commission’s political wings.

Having battled over topics like net neutrality and cable boxes, they now groused about the FCC’s latest dull-but-important controversy: the placement of transmitters for the new 5G wireless networks arriving in two or three years. Installing up to 300,000 cellular antennas—double what the U.S. currently has—in so little time is leading to a clash between overwhelmed local zoning officials and impatient industry and Trump administration officials, with Clyburn and O’Rielly fighting for each side.

The single-digit upgrade from “4G” to “5G” belies the massive technological change it will bring and the havoc it may wreak. Not only will 5G be a lot faster, it will also be ubiquitous—ranging from instantaneous, high-bandwidth connections for drones and robo cars to trickles of data from billions of temperature and moisture sensors. That requires scrounging for additional electromagnetic spectrum—pinching some from TV broadcasters, for instance, and also harnessing crummy “millimeter-wave” frequencies that can travel only short distances and don’t always make it around corners. “Many of these bands, just a few years ago, was dog spectrum,” said Clyburn. Making it workable requires placing “small cell” transmitters all over urban landscapes—with a size and density more like Wi-Fi hotspots than big, far-reaching cell towers.

But the bureaucracy is the same, say wireless industry boosters, with as much paperwork and cost for approving a small cell on a lamppost as for raising a 30- or 60-foot pole. Industry execs like to quote the figure that it takes a year to approve a cell that can be installed in an hour. Permit fees run to thousands of dollars per cell, and cells often require permits not only from cities but from Native American tribal authorities—even for locations far from tribal lands. At another talk during the show, Sprint’s VP of government affairs, Charles McKee, said that the company spent $173,000 in tribal review fees alone to install 23 small cells in the parking lot around the Houston Astrodome. Tribal authorities come in under the National Historic Preservation Act, even for a parking lot.

Local governments are trying to squeeze wireless carriers, say critics, by demanding concessions in exchange for permits, such as requiring carriers to install all new streetlights where a small cell goes up, to build a free municipal Wi-Fi network, or to pay 5% of gross revenues earned in the city. Some cities require even these small cells to be installed on poles that masquerade as cacti or trees.

“There are bad actors in the space, and that’s going to require additional action,” said O’Rielly. “We are going to need to preempt those localities that are trying to extract a bounty in terms of profit…from wireless providers and therefore consumers, or that has a process to delay the deployment of technologies.” In July, the city of Tampa, for instance, enacted a 120-day moratorium on siting new 5G cells—a delay that won’t be helped by hurricane rebuilding.

Cities and neighborhood groups feel differently. Even small towers can be ugly, as can crowding cells onto lampposts, buildings, or anything else that sticks out of the ground. Locals need to be involved in setting rules for 5G construction, said Clyburn. “If you leave them out or ignore them you will have problems, be they protests or other bottlenecks,” she said. The delays might not be deliberate. Cities are overwhelmed with the applications to approve all of these new cell sites, she said. “If they don’t have the wherewithal…to do this in a timely manner, then we have a problem,” said Clyburn.

She bristled when O’Rielly used the word “preemption”—federal or state laws that prevent local governments from having a say in these zoning issues. About a dozen states have already passed laws to speed up 5G equipment approvals. The biggest state, California, is wrapping up its own bill. Meanwhile, O’Rielly looked apoplectic when Clyburn called for “an infrastructure consortium where you’d have the municipal associations and the league of cities and counties and [wireless companies] sitting at the table across from each other, talking about what the goals and expectations are and how best to execute it.”

“There is an actual race going against the rest of the world,” said O’Rielly, “and I’m not willing to wait to have conversations for year upon year upon year around this issue.”

It’s tempting to see the 5G zoning battle as one of local communities vs. corporate America. Wireless boosters, though, say they are standing up for poor communities. “The problem is if we don’t cut the cost, where is [5G] going to get deployed?” asked Carr. “Only in the most high-end, urban, affluent communities.”

Technology often seems to evolve in an apolitical vacuum. The transition from 3G to 4G cost a lot of money and changed the country’s technology landscape, but not the physical landscape in most cases. With 5G, the isolated fights that pop up over an individual cell tower site will multiply and merge into a national phenomenon, especially in urban areas where small cells wind up encrusted all over the landscape. Whether they get installed quickly or extra carefully, some constituency will get angry. In fact, the same people may get angry whichever way it goes.
https://www.fastcompany.com/40468468...-years-to-come





Microsoft and Facebook's Big Undersea Cable is Now Complete
Monica Chin

More than 17,000 feet below the ocean's surface, there now lies the "most technologically advanced subsea cable," capable of carrying up to 160 terabits per second (Tbps) of data — beating Google's alternative, the now poorly named "Faster." The cable is the handiwork of Facebook, Microsoft, and Spanish telecommunication company Telxius.

Construction on the cable, which stretches 4,000 miles from Virginia Beach, Virginia to Bilbao, Spain, began in August 2016. Microsoft announced its completion on Thursday, but it won't be operational until early 2018.

Facebook, Microsoft, and Telxius will jointly own the cable, which weighs almost 10.25 million pounds — as much as 34 blue whales. Telxius will serve as the cable's operator and will sell and lease its capacity to outside service providers. Microsoft and Facebook will use the cable to serve their own capacity needs.

Most transatlantic communication cables connect to the U.S. in either New York or New Jersey, but having Marea, as the cable is called (the name means "tide" in Spanish), land in Virginia diversifies connectivity between the U.S. and Europe. Hurricane Sandy, which hit New York and New Jersey in 2012, disrupted connections between North America and Europe for several days.

"The superstorm sparked the realization that another major event could disrupt the vital connectivity lifeline across the Atlantic," Microsoft said in a blog post. "As part of its ongoing efforts to drive innovation and expand capacity of its global network, Microsoft sought options for making transatlantic connections more resilient."

Microsoft will not disclose the amount of its investment, or how much its partners have paid.
http://mashable.com/2017/09/22/micro...k-marea-cable/





Trump’s FCC Will Let Big Telecom Destroy Small Houston ISPs As It Rebuilds After Harvey

The FCC has rules in place to prevent Big Telecom from leaving local competitors in the lurch, but it’s currently scrapping those rules.
Kaleigh Rogers

As Texas starts to dry out from the damage of Hurricane Harvey, another perfect storm is forming. In a strange twist of timing, Big Telecom may walk away from the storm with an unfair advantage that could hurt smaller, local providers in Houston—and drive up costs for consumers in the process.

Across the country, Big Telecom has gradually been retiring old copper lines—used to deliver home phone service as well as broadband—and replacing them with modern infrastructure, like fiber optic cables. When a natural disaster like Harvey wipes out the old wires, it's a good excuse for the provider to do a widespread upgrade. But because the FCC is currently rolling back regulations that require large ISPs—usually called the incumbent provider—to give smaller ISPs access to their infrastructure, these upgrades could leave smaller providers in the lurch.

"Businesses some people rely on for their phone and internet service are going to go away," said Harold Feld, the senior vice president at digital-rights advocacy group Public Knowledge. "It also means that the number of competitors in the market is going to decrease and therefore the prices will go up."

Existing regulations require major ISPs to share basic access infrastructure with smaller competitors, known as competitive local exchange carriers or CLECs. But earlier this year, the FCC began the process of rolling those rules back. It proposed deregulating this specific area back in April. If the deregulation proceeds undeterred, Big Telecom could have a free pass to build new networks in Texas without having to share with any of the local competitors.

"Their networks are really interconnected," said Angie Kronenberg, the chief advocate and general counsel for Incompas, a trade association for CLECs, referring to the relationship between CLECs and incumbent providers in Houston. Incompas is suing the FCC over the move to deregulate the 2015 decision.

Kronenberg told me small, local ISPs have their own infrastructure in some areas but lease from incumbents in others. In Houston and the surrounding area, there are multiple CLECs. They depend on access to Big Telecom's infrastructure, and they not only provide choice for consumers but also help keep costs affordable through competition. If the FCC rolls back the 2015 rule and Big Telecom starts replacing the system wiped out by the storm, these businesses will be SOL.

"This is the part that's too early to tell: how much will incumbents repair versus how much they will replace?" Kronenberg said, adding that if they choose to replace and the FCC continues its deregulation, it will very likely have an impact on price.

"There will be no price controls," Kronenberg said.

Right now, both incumbents and local competitors are just focused on getting their customers back online in the short term. A spokesperson for AT&T told me some wireline infrastructure is still out of service, and it's working with emergency management to get everyone back up and running. The spokesperson wouldn't comment on long-term plans to repair or upgrade the system.

In the months to come, Houston will serve as an important case study in the costs of FCC deregulation colliding with a literal hurricane.
https://motherboard.vice.com/en_us/a...s-after-harvey





Washington DC Braces for Net Neutrality Protests Later this Month

A coalition of activists and consumer groups are banding together to express concerns over an FCC proposal to rewrite the rules governing the internet
Dominic Rushe

Net neutrality advocates are planning two days of protest in Washington DC this month as they fight off plans to defang regulations meant to protect an open internet.

A coalition of activists, consumer groups and writers are calling on supporters to attend the next meeting of the Federal Communications Commission on 26 September in DC. The next day, the protest will move to Capitol Hill, where people will meet legislators to express their concerns about an FCC proposal to rewrite the rules governing the internet.

The FCC has received 22 million comments on “Restoring Internet Freedom”, the regulator’s proposal to dismantle net neutrality rules put in place in 2015. Opponents argue the rule changes, proposed by the FCC’s Republican chairman Ajit Pai, will pave the way for a tiered internet where internet service providers (ISPs) will be free to pick and choose winners online by giving higher speeds to those they favor, or those willing or able to pay more.

The regulator has yet to process the comments, and is reviewing its proposals before a vote expected later this year.

The activist groups are encouraging internet users to meet their lawmakers and tell them how a free and open internet is vital to their lives and their livelihoods.

Pai is a long-term opponent of the current rules, which were brought in under the Obama administration. His proposals have sparked a firestorm of protest that led to the FCC’s comment system crashing under the weight of submissions after comedian John Oliver ran a piece criticising Pai on his show Last Week Tonight. The FCC has claimed it was attacked by hackers but has yet to provide evidence.

“The FCC seems dead set on killing net neutrality, but they have to answer to Congress, and Congress has to answer to us, their constituents,” said Evan Greer, campaign director for Fight for the Future, one of the protest’s organisers.

“With this day of advocacy, we’re harnessing the power of the web to make it possible for ordinary internet users to meet directly with their senators and representatives to tell their stories, and make sure that lawmakers hear from the public, not just lobbyists for AT&T and Verizon,” she said.

Participating organizations in the protest include Fight for the Future, Public Knowledge, EFF, Center for Media Justice, Common Cause, Consumers Union, Free Press and the Writers Guild of America West.
https://www.theguardian.com/technolo...ternet-freedom





FCC Sued For Ignoring FOIA Request Investigating Fraudulent Net Neutrality Comments
Karl Bode

For months now we've noted how somebody is intentionally filling the FCC's net neutrality comment proceeding with bot-generated bogus comments supporting the agency's plan to kill net neutrality protections. Despite these fake comments being easily identifiable, the FCC has made it abundantly clear it intends to do absolutely nothing about it. Similarly, the FCC has told me it refuses to do anything about the fact that someone is using my name to file comments like this one falsely claiming I support killing net neutrality rules (you may have noticed I don't).

While nobody has identified who is polluting the FCC comment system with fake support, it should be fairly obvious who this effort benefits. By undermining the legitimacy of the public FCC comment proceeding (the one opportunity for transparent, public dialogue on this subject), it's easier for ISPs and the FCC to downplay the massive public opposition to killing popular net neutrality rules. After all, most analysis has shown that once you remove form, bot and other automated comments from the proceeding, the vast, vast majority of consumers oppose what the FCC and Trump administration are up to.

Attempts to dig deeper into this mystery haven't gone well. Freelance writer Jason Prechtel filed a Freedom of Information Act (FOIA) request on June 4 asking the FCC for data on the bogus comments, the API keys used, and how the FCC has worked to address the problem. But while the FCC acknowledged the FOIA request, it wound up giving Prechtel the runaround throughout the summer -- stating on June 14 that it would be extending the deadline for responding to his request from July 3 to July 18 -- before ultimately deciding to ignore his request altogether.

As a result, Prechtel has filed a lawsuit against the FCC (pdf), stating the agency is breaking the law by sitting on its hands. From a Medium post written by Prechtel explaining the suit:

"As the agency is legally obliged to respond to my request, and as the underlying questions behind my request still haven’t been answered, I have filed a lawsuit against the FCC for their refusal to conduct a reasonably timely search for the records, and have demanded the release of these records. Even now, over three months after my FOIA request, and even after I’ve filed a lawsuit, this request is still listed as “under agency review”.

If you're playing along at home, this is just one of several lawsuits that have been filed against the agency for its Keystone Cops-esque handling of the network neutrality proceeding to date. The FCC has been sued for obfuscating details on its meetings with major ISPs in regards to net neutrality, and also faces a lawsuit over the agency's apparently completely fabricated DDoS attack it claimed occurred conveniently at the exact same time John Oliver told his viewers to file comments with the agency. Perhaps the more observant will notice a trend at Ajit Pai's FCC?

Again, nobody knows who's behind this effort to pollute the public discourse, and the FCC is making it pretty clear it doesn't want to make it any easier to find out. Having covered the sector for twenty years, I can say this sort of thing is well within the behavioral norms of the wide variety of "non profit," "non-partisan" groups hired by ISPs to pee in the discourse pool. Whoever's to blame, it's pretty clear the FCC is playing a role in not only making it harder to understand what happened, but in undermining the value of the public comment period.

As the FCC moves to formally vote to kill the rules in a month or two, expect Ajit Pai and friends to increasingly use the dysfunction they helped cement to downplay legitimate public opposition to its plan. After that, you can expect all of this dysfunction to play a starring role in the multiple, inevitable lawsuits that will be filed against the agency in the wake of the vote. Again, how was this blistering shitshow a better idea than simply listening to the will of the public and leaving the existing, popular net neutrality rules alone?
https://www.techdirt.com/articles/20...comments.shtml





California Sides With Comcast, Votes To Kill Broadband Privacy Law Favored By EFF
Karl Bode

You'll recall that earlier this year, AT&T, Verizon and Comcast successfully lobbied the GOP and Trump administration to kill consumer broadband privacy protections that were supposed to take effect last March. While big ISPs engaged in breathless hysteria about the "draconian" nature of the rules, the restrictions were quite modest -- simply requiring ISPs be transparent about what user data gets collected and sold. They also made it more difficult for big ISPs to charge users significantly more money just to opt out of private data collection, an idea both AT&T and Comcast have already flirted with.

But in quickly axing the rules, big ISPs -- and the regulators and lawmakers paid to love them -- got a bit more than they bargained for. The ham-fisted rush to kill the protections resulted in more than a dozen states passing a patchwork collection of new state laws aimed at protecting broadband consumers. Among the most notable was California Assemblyman Ed Chau's AB 375. The proposal largely mirrored the FCC's abandoned rules, though it took an even harder stance against ISPs looking to abuse the lack of competition to effectively make privacy a paid, premium option.

The bill quickly received praise from the EFF, which argued that it would be a good template for other states moving forward, lessening the chance of over-reaching, inconsistent, and poorly written state measures. But lobbyists for large ISPs, Facebook, and Google quickly got to work demonizing Chau's proposal too, falsely claiming it would somehow weaken user security and magically increase pop-ups all over the internet. These and other claims were recently picked apart in an EFF blog post:

"The prediction of "recurring pop-ups" is also false because if anything, the bill would "likely result in fewer pop-ups, not to mention fewer intrusive ads during your everyday browser experience," Gillula wrote. "That’s because A.B. 375 will prevent Internet providers from using your data to sell ads they target to you without your consent—which means they’ll be less likely to insert ads into your Web browsing, like some Internet providers have done in the past.."

But the lobbying had its intended effect, and California lawmakers voted to kill the effort in a night vote over the weekend:

"It is extremely disappointing that the California legislature failed to restore broadband privacy rights for residents in this state in response to the Trump Administration and Congressional efforts to roll back consumer protection,” EFF Legislative Counsel Ernesto Falcon said. “Californians will continue to be denied the legal right to say no to their cable or telephone company using their personal data for enhancing already high profits. Perhaps the legislature needs to spend more time talking to the 80% of voters that support the goal of A.B. 375 and less time with Comcast, AT&T, and Google's lobbyists in Sacramento.”

While the proposal can be reintroduced next year, fighting upstream against the collective lobbying firepower of massive ISPs and Silicon Valley giants like Facebook and Google has proven no easy task. And there have been comments from FCC Commissioners suggesting they may try to use FCC authority to hamstring these efforts as well. You see, it's a "states' rights" issue if you try to prevent states from letting ISP lobbyists write protectionist laws hobbling competition, but those concerns magically disappear when states move to actually protect consumers from duopoly harm.

It's worth reiterating that ISPs spent years arguing consumers didn't need added privacy protections because the sector would self-regulate. Of course, Verizon subsequently highlighted the folly of such claims when it was busted modifying user packets to track users around the internet without telling them. AT&T did much the same when it began charging users $400 to $550 more per year to opt out of behavioral advertising. And other, smaller cable companies like CableONE joined the fun when they proclaimed they'd be using consumer financial data to provide worse customer service to customers with bad credit.

The origins of this aggressively bad behavior? The lack of competition in the broadband space. And with the Trump administration looking to effectively gut all oversight of one of the least-competitive and least-liked sectors in American industry, anybody thinking these privacy issues will magically resolve themselves (instead of say, just getting progressively worse) hasn't been paying attention.
https://www.techdirt.com/articles/20...ored-eff.shtml





Altice Rolls Out ‘Economy Internet’ in Connecticut, U.S.
Alexander Soule

A $15-a-month broadband service Altice tested in Norwalk is being expanded across its Optimum territories in Connecticut and nationally, with the service made available to new subscribers who qualify based on their annual income.

The Altice “Economy Internet” service offers download speeds of up to 30 megabits per second and includes a free in-home WiFi router, email service, and no data caps.

To get the new service, households must qualify through the National School Lunch Program (for students) or Supplemental Security Income (for seniors).
http://www.newstimes.com/business/ar...n-12208830.php





Netflix, Microsoft, and Google Just Quietly Changed How the Web Works

The organization that sets standards for the web just failed to beat back a stupid, greedy technology.
Adrianne Jeffries

This week, the World Wide Web Consortium, the non-profit that debates and sets the standards that keep all the web’s browsers and websites compatible, held the most contentious vote in its history.

The proposed standard that was voted on is called Encrypted Media Extensions, or EME. Basically, it standardizes parts of how copyrighted video is delivered within a browser. The most obvious effect of this will be that users will never have to download the Microsoft Silverlight or Adobe Flash add-ons in order to watch a copyright-protected video from an authorized source like Netflix. This transition began in 2012 but is now set in stone.
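
For readers wondering what the standard looks like in practice, the sketch below shows the basic EME flow a browser-based player goes through, written in TypeScript against the W3C-specified APIs. The key-system string and license-server URL here are placeholders, and real players such as Netflix's layer far more logic (and proprietary code) on top of this skeleton.

```typescript
// Minimal sketch of the Encrypted Media Extensions (EME) flow.
// KEY_SYSTEM and LICENSE_URL are placeholders, not real endpoints.
const KEY_SYSTEM = "com.widevine.alpha";            // Widevine, e.g. in Chrome/Firefox
const LICENSE_URL = "https://license.example.com";  // hypothetical license server

async function setUpDrm(video: HTMLVideoElement): Promise<void> {
  // 1. Ask the browser whether it supports the requested key system.
  const access = await navigator.requestMediaKeySystemAccess(KEY_SYSTEM, [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);

  // 2. Create a MediaKeys object and attach it to the <video> element.
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // 3. When the stream signals encrypted content, open a key session.
  video.addEventListener("encrypted", async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();

    // 4. The browser's content decryption module emits a license request;
    //    forward it to the license server and hand the response back.
    session.addEventListener("message", async (msg: MediaKeyMessageEvent) => {
      const response = await fetch(LICENSE_URL, { method: "POST", body: msg.message });
      await session.update(await response.arrayBuffer());
    });

    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```

The practical upshot is that the decryption plumbing now lives in the browser itself rather than in a Silverlight or Flash plug-in; the closed-source piece is the content decryption module the browser ships with, which is the part Mozilla objects to below.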

Opponents, who include net neutrality father Tim Wu and stakeholders like the Ethereum Foundation, say this change will make the web less secure, less open, less accessible for people with hearing and vision impairment, and harder to archive. Proponents, who include large media companies like Netflix, argue it would actually make the web more secure, more open, more accessible, and, okay, more difficult to archive, but let’s not dwell on that. If you boil down the reason why EME was contentious, it’s because some people saw it as a gift to large corporations that would make the web worse for users, and extrapolated from there that the web’s most important organization is now in the pocket of Big Capitalism.

Show Gratitude

The consortium, also known by the awkward acronym W3C, does not normally share the breakdown of its votes. But because this vote was so controversial, it did. Out of the consortium’s 463 members, which include stakeholders from academia to nonprofits to major Silicon Valley corporations, 108 voted yes, 57 objected, 20 abstained, and the rest didn’t participate. The fact that just 185 out of 463 members voted or explicitly abstained may sound like a low turnout, but it was in fact historically high. “We’ve never had such a high percentage in my recollection,” said Jeff Jaffe, the consortium’s CEO.

What is also unusual is that after the vote took place, and the consortium officially endorsed the new standard, one of its members — the Electronic Frontier Foundation, a San Francisco-based digital civil rights group that joined as a full member in 2013 expressly to fight EME — resigned in protest. No member has quit the consortium in protest before, Jaffe said. At least one staff member, Harry Halpin, resigned in protest as well.

“Our mission is to lead the web to its full potential.”
— Jeff Jaffe, CEO of the World Wide Web Consortium

There is no consensus on how bad EME will actually be for users. But what’s potentially more concerning is the perception that the organization that architects the world wide web has been colonized by big business. The World Wide Web Consortium was started at MIT in 1994 by Tim Berners-Lee, the creator of the web, in collaboration with the CERN science center in Geneva with support from DARPA and the European Commission. It has always maintained that it is a “neutral forum.” From early press releases: “The Consortium is neutral forum, and no member has a priori a greater say than another.” “The Consortium is vendor-neutral.” Now, the passage of EME is fueling the perception that the consortium is in the pocket of its large corporate members. The consortium’s press release announcing EME included laudatory statements from the MPAA, the RIAA, and the cable industry. “Thanks for handing the internet to the media corporations,” went a typical response on Twitter. “I sincerely hope that a competitor to your mafia arises and takes control.”

It’s unavoidable that on the internet, the interests of users and corporations will collide. Preserving personal privacy is at odds with serving advertising. Making websites equally fast and accessible no matter what their content is or who owns it — also known as net neutrality — is at odds with Comcast’s bottom line. But the most frequent flashpoint in the users-versus-corporations conflict seems to be copyright.

So many fun things about the internet, like memes and mashups and Let’s Plays and GIFs of the Olympics and videos of Lenny Kravitz’s dick, are subject to removal at any moment by copyright holders, with their creators punished. The whims of copyright holders also outweigh, by default, the need for public awareness of a security flaw in a website or application that, say, millions of people may use. If a security researcher finds a vulnerability, the burden is suddenly on them to figure out how to act on it without ending up in jail for violating copyright law.

One frustrating offshoot of this endless copyright war is a sub-war over digital rights management, or DRM. The term DRM is pretty broad, but it usually refers to a software lock that prevents a song from being copied or video from being downloaded or a printer from using unauthorized ink. If you’re a 90s kid, you may remember the great DRM debate that flamed up around Napster. People were rampantly pirating music, and the music industry wanted to stop them. Someone proposed that the problem could be solved with technology, and companies started plunging money into DRM.

The trouble with DRM is that it’s sort of ineffective. It tends to make things inconvenient for people who legitimately bought a song or movie while failing to stop piracy. Some rights holders, like Ubisoft, have come around to the idea that DRM is counterproductive. Steve Jobs famously wrote about the inanity of DRM in 2007. But other rights holders, like Netflix, are doubling down. The prevailing winds at the consortium concluded that DRM is now a fact of life, and so it would be better to at least make the experience a bit smoother for users. If the consortium didn’t work with companies like Netflix, Berners-Lee wrote in a blog post, those companies would just stop delivering video over the web and force people into their own proprietary apps. The idea that the best stuff on the internet will be hidden behind walls in apps rather than accessible through any browser is the mortal fear for open web lovers; it’s like replacing one library with many stores that each only carry books for one publisher. “It is important to support EME as providing a relatively safe online environment in which to watch a movie, as well as the most convenient,” Berners-Lee wrote, “and one which makes it a part of the interconnected discourse of humanity.” Mozilla, the nonprofit that makes the browser Firefox, similarly held its nose and cooperated on the EME standard. “It doesn’t strike the correct balance between protecting individual people and protecting digital content,” it said in a blog post. “The content providers require that a key part of the system be closed source, something that goes against Mozilla’s fundamental approach. We very much want to see a different system. Unfortunately, Mozilla alone cannot change the industry on DRM at this point.”

So is the World Wide Web Consortium carrying water for corporate interests? Jaffe says absolutely not. He pushed back on the idea that the consortium has become more like an industry group or trade association, a characterization I’ve seen pop up more and more recently. “Our mission is to lead the web to its full potential,” he said. “I think roughly a third of our members are not for profit in one fashion or another. So we are a very broad multi stakeholder group that is concerned about the evolution of the technology for the world wide web.”

It’s true that pretty much anyone can become a member of the World Wide Web Consortium as long as they have a demonstrated stake in web technologies and can pay their dues. Membership fees are assigned according to a rate table that accounts for revenue and country of origin, so that powerful players pay more than small ones. The fees for the U.S., for example, range from $2,250 to $77,000. It’s a noble impulse, but it’s actually regressive. The top tier, $77,000, is for companies making more than $1 billion. In 2016, Google had revenue of about $89 billion. That means that while a small company might pay a palpable percentage of its revenue, Google pays a percentage that is virtually zero.
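
To make the regressive-fee point concrete, here is the back-of-envelope arithmetic as a quick TypeScript sketch. The $2,250 and $77,000 tiers and the roughly $89 billion figure are the ones cited above; the small company's revenue is an assumption chosen purely for illustration.

```typescript
// Rough comparison of W3C dues as a share of annual revenue.
// Tier fees and Google's figure are cited above; the small company's revenue is assumed.
const topTierFeeUSD = 77_000;        // top US tier, companies above $1B in revenue
const smallTierFeeUSD = 2_250;       // smallest US tier
const googleRevenue2016 = 89e9;      // ~$89 billion
const assumedSmallCoRevenue = 5e6;   // hypothetical $5M company

const feeAsPercent = (fee: number, revenue: number) => (fee / revenue) * 100;

console.log(feeAsPercent(topTierFeeUSD, googleRevenue2016).toFixed(5) + "%");        // ~0.00009%
console.log(feeAsPercent(smallTierFeeUSD, assumedSmallCoRevenue).toFixed(3) + "%");  // 0.045%
```

Even at the top tier, the dues amount to a rounding error for a company of Google's size, which is the "virtually zero" share described above.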

Still, Jaffe says the consortium’s membership is populist. Fewer than 100 members are for-profit companies with revenue of $50 million or more per year, Jaffe said. “More than 75 percent of our membership is NGOs, academic institutions, startups, small and medium companies, so I think within our membership there is certainly ample number of companies that are not large commercial companies,” he said. While we don’t know which members voted in favor of EME, because that’s still confidential, the fact that it passed with just 108 votes suggests that a caucus of just the $50 million-plus companies would still be quite powerful. A caucus of $50 million companies plus smaller companies with ambitions to become $50 million companies, even more so.

It’s hard to argue that Goliath didn’t win this last round against David. Corporate interests won not just because they had the votes at the consortium, but because they created conditions in the real world that the consortium had to adapt to in order to remain relevant. YouTube and Netflix dominate the market for videos that people want to watch; if they want to lock up those videos with DRM, they have a lot of leverage to do so. By the time Mozilla rolled over, Google, Microsoft, and Apple had already implemented EME into the other three major browsers. “As a result, the new implementation of DRM will soon become the only way browsers can provide access to DRM-controlled content,” Mozilla wrote. The deal was sealed.

Before the EFF quit, the organization offered a compromise that would have forced rights holders to sign a covenant saying they would not use copyright law to sue people who interacted with their soon-to-be standardized DRM in the course of doing security research or accessibility work. It was a nonstarter; by all accounts, the members representing rights holders were not open to compromise. In isolation, supporting EME in the browser in order to facilitate a stupid, greedy technology is not the end of the web as we know it. But this vote marks a significant milestone in the history of the World Wide Web Consortium: the moment the users fought back and lost.
https://theoutline.com/post/2304/net...-the-web-works





Harvard Study Proves Apple Slows Down Old iPhones to Sell Millions of New Models
Mr Robot

If you were Apple, what tricks would you utilize to increase the sales of your latest product?

If you know corporations, you'd know that, as a rule, they use every trick they can to increase their profit: think how big a boost it would give sales of new iPhones if the old ones became slower.

People have made the anecdotal observation that their Apple products become much slower right before the release of a new model.

Now, a Harvard University study has done what anyone with Google Trends could do and pointed out that Google searches for "iPhone slow" spiked multiple times, each time just before the release of a new iPhone.

The study was performed by Harvard Ph.D. student Laura Trucco. The study also compared the results to searches for "Samsung Galaxy slow," and found that the same spike did not occur before the release of a new Samsung phone.
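
The comparison the study describes is easy to picture with a toy example. The sketch below uses invented numbers on Google Trends' 0-100 interest scale (there is no official Trends API, and these values are not real data) just to show the kind of pre-launch spike ratio being compared.

```typescript
// Hypothetical weekly search-interest values (Google Trends' 0-100 scale) for the
// six weeks leading up to a phone launch. The numbers are invented for illustration only.
const iphoneSlow = [20, 22, 21, 19, 78, 95]; // pronounced spike before an iPhone launch
const galaxySlow = [18, 19, 20, 21, 19, 22]; // essentially flat before a Galaxy launch

// Ratio of interest in the final two weeks (the launch window) to the earlier baseline.
function launchSpike(series: number[]): number {
  const baselineWeeks = series.slice(0, -2);
  const baseline = baselineWeeks.reduce((a, b) => a + b, 0) / baselineWeeks.length;
  const launchWindow = (series[series.length - 2] + series[series.length - 1]) / 2;
  return launchWindow / baseline;
}

console.log(launchSpike(iphoneSlow).toFixed(2)); // "4.22": interest more than quadruples
console.log(launchSpike(galaxySlow).toFixed(2)); // "1.05": no meaningful spike
```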

This isn't the first time the theory has been put forth. Surprisingly, a New York Times writer suggested Apple might configure its new operating systems to work properly only on new devices.

Writer Catherine Rampell said:

When major innovations remain out of reach, and degrading durability threatens to tick off loyal customers, companies like Apple can still take a cue from the fashion industry.

The public has to get wise to the way corporations generally operate. They hire efficiency experts and people who can maximize the profit made from any product they produce. Not every corporation is the same, but the inherent incentive must be understood, or people will continuously be played by the corporations.
http://www.anongroup.org/harvard-study-iphones/





The iPhone X from an Android User’s Perspective

Peeking over the fence
Vlad Savov

It’s been almost a year since the Google Pixel made me put down my iPhone and transformed me from a Google apps user on Apple hardware to a pure Google acolyte. In the grand tug of war between mobile religions, I’m now pulled in the direction of Android, and I can’t express much regret about it. But Apple has just made official its biggest redesign and rethink of the iPhone ever, and so I was definitely curious about the iPhone X and the future it paints for the Apple ecosystem. As it turns out, though, the iPhone X really isn’t a phone designed to draw me back in; it’s more customer service to existing iPhone users than an appeal to new ones.

The Android user hat isn’t the only one I wear, but here are my main iPhone X takeaways from the perspective of someone deeply immersed in the Android realm:

A radical iPhone redesign is a good thing for everyone, no matter what it looks like or who buys it. I think this is an important point that’s all too often disregarded: any sufficiently ambitious company should dread the stagnation of its competitors, which is liable to lead to complacency and a slowdown in progress. When the United States put people on the moon in the 1960s, those efforts were spurred by the threat of the Soviet Union making it there first. Having a strong rival is essential to keeping up the pace of innovation. Google and its myriad Android hardware partners have always had that in Apple’s iPhone, and this major redesign will give them a fresh and different antagonist to measure up against.

The new iPhone X hardware design doesn’t thrill me at all. I know this part is subjective, but having seen the Galaxy S8 and Note 8, the Essential Phone and the LG V30, I am no longer wowed by (almost) bezel-less screens. I’ve now used multiple devices like that and, in all honesty, the absentee bezels are something I forget about very quickly. I don’t feel like I’m using a radically new and different design, and though it’s a little awkward to return to a phone with old-school bezels like the HTC U11, I’ve recently done it and survived the supposed regression. What I don’t fancy about the iPhone’s new look is the extra glossy glass back, punctuated by a chunky, protruding dual-camera module: it’s supposed to be ultra minimalist, yet it has this big eyesore on it. And the same is true of the front, where the top notch makes for a good brand identifier but a questionable design choice.

The under-the-hood upgrades that Apple announced are likely to be significant, including the first GPU designed by Apple itself and a battery life that’s supposedly two hours longer than that of the iPhone 7. The cameras are said to have physically larger sensors too, which may help the iPhone catch up to the rapidly advancing Android competition in the cameraphone race. But does any of that excite my gadget lust or make me wonder whether Apple’s Photos could be as good as Google Photos? No, not yet. The iPhone 7 of last year had one of the most powerful processors ever put inside a mobile device, but I can’t think of a single occasion where I was using an Android phone and wishing I had the extra power of the iPhone.

Apple’s embrace of Qi wireless charging on both the iPhone 8 and iPhone X will not only be significant, it might be the final piece required to make wireless charging a truly mainstream feature. Even if you never buy an iPhone, you should be glad that Apple and Samsung — the two most prolific smartphone makers — have both chosen the same standard. At present, I get a kick out of charging the LG V30 on Samsung’s wireless charging dock, but in the future this sort of cross-compatibility and universality will stretch across both iOS and Android. That’s great news for all, and it lays the foundation for one day having a smartphone that has no ports at all, eschewing cables in favor of wireless data, audio, and power transfer.

Face ID will probably work well, but I don’t see the value in it. Something peculiar has happened in 2017, a year that’s seen both Samsung and Apple abandon their perfectly functional fingerprint sensors embedded in the home button at the front of their flagship phones and replacing it with arcane alternatives. Obviously, the underlying driver is the move to strip away display bezels, but couldn’t both companies have placed a fingerprint sensor in the middle of the back of their devices? Google, LG, Huawei, and countless others have been doing it for years and left no unhappy customers. Instead, Samsung tucked its fingerprint reader in an awkward off-center position and gave us iris scanning to unlock our phones, while Apple’s iPhone X has the world’s most sophisticated (or is that over-engineered?) face authentication system. The best-case scenario for Face ID that I can see is that it matches Touch ID, which already worked very nicely; I can’t get excited for such a sidestep. More worryingly, I expect Android OEMs will go crazy copying Apple’s Face ID, and I expect many of them to do it sloppily, creating the threat of much less secure phones.

Using the same depth-sensing tech and hardware as Face ID, Apple’s animoji system has charmed quite a few people with its ability to motion-capture the user’s face and turn it into animated emoji. I guess I’m too old and / or jaded to find that appealing. Do I get frustrated when silly augmented-reality apps don’t perfectly map their silliness onto my face? Sure, I do, for about 0.5 seconds. Then I move on to doing something more important like watching cat GIFs on the internet. My point is that I don’t think Apple is solving an especially major problem with its animoji, and it’d take some ingenious application of the tech to convince me that I should care about or want it.

The swipe-based iPhone X interface is nothing we haven’t seen before. Whether you want to go as far back as the Palm Pre, or more recently the Nokia N9 or BlackBerry Z10, there have been plenty of attempts at making gesture interfaces work. For various reasons, they’ve all failed to find traction among users, but, like socialism, many people still think that the idea is sound and just hasn’t been properly implemented yet. I remain skeptical. What I’ve seen of the new iPhone UI suggests it’s difficult to intuit and, without the safety valve of a home button, many neophytes might find it bewildering to get around. Until further notice, I’m filing away the iPhone X interface on my list of big questions to be answered about this enigmatic new device.

A major point of distinction I’ve noticed between myself, a person outside the iOS bubble, and my friends and colleagues inside the Apple ecosystem is that we basically can’t think of a new iPhone the same way. I consider it in clinical terms, assessing its specs, value for money, likely durability of the design, and relative advantages to Android alternatives. People who are already entrenched iOS users regard the iPhone X in a more emotional fashion; it’s like they’re about to have a new child rather than a new phone. “I don’t care, I’ve waited three years to upgrade, I’m buying an iPhone X” is one refrain I’ve heard, and it’s thoroughly understandable. If your entire life is tied up in iMessage and other Apple services, your only option for a new phone is another iPhone, and so the iPhone X is a hugely exciting device just by virtue of finally being meaningfully different.

The level of commitment and loyalty that Apple has engendered among its users is exactly what Google is trying to establish with its Pixel line among Android users. I see nothing fanatical or excessive about it, as I think both companies are “locking down” users through the strength of the services and conveniences they provide. But the further we go into the iPhone X and Google Pixel future, the more dividing lines I’m seeing between the two paths. Apple’s new features are intrinsically tied to its new hardware — such as the multi-sensor array required for Face ID and animoji face mapping — while Google is providing free Google Photos storage and the best camera algorithms in the business with its Pixel line. It’s probably because I value the latter company’s offering so highly that I can’t truly get excited about the novelties from the former.

In summary, I’m glad the iPhone X exists, and I’m optimistic about it making positive waves in the wider smartphone market, but I am not myself attracted by it. That’s in part because of the pace of innovation among Android rivals, and in part because Apple is serving a demographic that I’m no longer squarely in the middle of. I have no problem with any of that; I think it’s the sign of a vibrant market that there’s choice and variety. But for now at least, I think I’ll skip the $999 glass iPhone and look forward instead to October 4th and Google’s next Pixels. The difference for me, as a years-long Android devotee, is that an Apple event is fun and exciting just out of sheer tech enthusiasm, but a new Google product launch is thrilling because it has a high chance of being my next phone purchase.
https://www.theverge.com/2017/9/16/1...er-perspective





Apple: iPhones Are Too 'Complex' to Let You Fix Them

Apple wants iPhone repair to be "fairly priced and accessible" but says it needs to control access to the market.
Jason Koebler

Apple's top environmental officer made the company's most extensive statements about its stance on the repairability of Apple hardware Tuesday at TechCrunch Disrupt in San Francisco. The company's message is that, rather than for repairability, it designs its products for "durability." If a repair is needed, it should be "accessible," but only at Apple-authorized repair shops, which are often inconveniently located and aren't given permission to do hard repairs. And Apple says it's working toward sustainability in the supply chain, but its recycling policies are in direct contradiction with those stated goals.

"I don't think you can say repairability equals longevity," Lisa Jackson, Apple's vice president of policy and social initiatives said. "I often say if you're in the repair business, repair seems like the answer. But actually you need to design for the life cycle. And Apple has designed for some time around durability, around the idea we can release the latest and greatest product, your old product still works and has value."

"Our first thought is, 'You don't need to repair this.' When you do, we want the repair to be fairly priced and accessible to you," she added. "To think about these very complex products and say the answer to all our problems is that you should have anybody to repair and have access to the parts is not looking at the whole problem."

I have repeatedly been hard on Apple for its stance on third-party repair. This is because much of what Apple and Jackson are saying doesn't really make sense or isn't true once it's examined. With its "Authorized Service Provider" program, Apple unilaterally sets both the prices and geographic distribution of repairs. It dreams of a world in which it has a repair monopoly—if it believed in competition, it wouldn't lobby against state efforts to require manufacturers to make repair parts and guides available to the public.

If you live in, say, Valentine, Nebraska and want your phone repaired by an "authorized" Apple store or partner and you want same-day service (i.e., you don't want to stay overnight in a hotel while someone fixes your cracked phone screen), you need to travel to Omaha, 250 miles away. This is not an "accessible" repair.

Apple Stores and Apple-authorized service centers are also either unable or, in the case of authorized shops, not allowed to fix many of the problems a user is likely to encounter with a phone or iPad. For instance, authorized service providers aren't allowed to replace an iPhone charging port—which takes most third-party repair companies 10 minutes and costs $30 or so. Instead, these devices must be mailed to a larger service center to be fixed offsite. Apple Stores and authorized service centers also can't do micro soldering, which many third-party shops can do and which is necessary to fix common iPhone 6 "touch disease" defects or an iPad's backlight, among many other potential maladies that can affect a phone or iPad.

"If the program worked well, I would have joined a long time ago," one independent repair shop owner told me earlier this year. "The only thing they allow you to repair are screens and batteries. If there's a broken camera, you have to send it back. Broken charge port, send it back. If it's an iPad, you have to send it back. These are repairs that take minutes to do, and you have to send it out."

Apple certainly isn't wrong in that it's important to make durable devices. But all electronics will break at some point, and the Institute of Electrical and Electronics Engineers says that extending a phone's life from one to four years "decreases its environmental impact by about 40 percent."

Jackson also spoke extensively about the importance of cell phone recycling and part reusability further down the line. Apple has often touted "Liam," its recycling robot, but Liam only comes into play when phones are turned into official Apple recycling channels. A few centralized Liams are no match for a robust hardware reuse program; phones and laptops turned into local recycling centers should be refurbished (by humans) and resold. Unfortunately, Apple actively fights against such a solution.

"In my mind from an environmental standpoint, [the solution] is looking at the resources that it takes to make a product and moving to a circular use of those resources," Jackson said. "They don't pass through the system once and then end up in somebody's landfill or somebody's drawer. There's incentive to you to get those resources back and then work on reusing them, hopefully in product."

This is all well and good in theory. But Apple's policies often prevent this from happening. As I reported earlier this year, Apple requires recyclers who deal with its products to sign "must shred" clauses, meaning that any material that is recycled on behalf of Apple must be destroyed. Apple told state regulators in documents I acquired using a Freedom of Information Act request that these electronics that end up at recyclers are not to be reused.

Jackson and Apple make it seem that, in eschewing third party repair, the company makes up for it in other areas. But its business practices show quite plainly that in areas such as recycling, it doesn't back that talk up. And so if Apple doesn't support independent repair and is specifically making sure its products that go to third-party recyclers aren't being reused, well, then, what does it stand for?
https://motherboard.vice.com/en_us/a...horized-repair





Turning Off Wi-Fi and Bluetooth in iOS 11's Control Center Doesn’t Actually Turn Off Wi-Fi or Bluetooth

And it’s a feature, not a bug.
Lorenzo Franceschi-Bicchierai

Turning off Bluetooth and Wi-Fi when you're not using them on your smartphone has long been standard, common-sense advice. Unfortunately, with the iPhone's new operating system, iOS 11, turning them off is not as easy as it used to be.

Now, toggling Bluetooth and Wi-Fi off from the iPhone's Control Center—the somewhat confusing menu that appears when you swipe up from the bottom of the phone—doesn't actually turn them off completely. While that might sound like a bug, it's what Apple intended in the new operating system. But security researchers warn that users might not realize this and, as a consequence, could leave Bluetooth and Wi-Fi on without noticing.

"It is stupid," Collin Mulliner, a security researcher who's studied Bluetooth for years, told Motherboard in a Twitter chat. "It is not clear for the user."

To be clear, and to be fair, this behavior is exactly what Apple wants. In its own documentation, the company says that "in iOS 11 and later, when you toggle the Wi-Fi or Bluetooth buttons in Control Center, your device will immediately disconnect from Wi-Fi and Bluetooth accessories. Both Wi-Fi and Bluetooth will continue to be available." That is because Apple wants the iPhone to be able to continue using AirDrop, AirPlay, Apple Pencil, Apple Watch, Location Services, and other features, according to the documentation.

Motherboard tested this behavior on an iPhone with iOS 11 installed and verified that Bluetooth and Wi-Fi remain on in the settings after turning them off in the Control Center, as some users have started to notice.

Andrea Barisani, a security researcher and one of the first people to notice this change, said in a Twitter direct message that the new user interface is not obvious at all and makes the user experience more "uncomfortable."

Turning off Bluetooth and Wi-Fi reduces your exposure to potential attacks on hardware, firmware, and software, so "it's good practice," Barisani told me. Just last week, security researchers revealed the existence of a series of bugs in the way some operating systems implement Bluetooth that allowed hackers to take over victims' devices as long as Bluetooth was on—without needing to trick the user into clicking a malicious link or doing anything at all.

It's worth mentioning that both Bluetooth and Wi-Fi will become active again at 5 AM local time after you toggle them off in the Control Center, according to Apple's documentation. It's unclear why that is, but just so you know.

Apple did not immediately respond to a request for comment.

The takeaway is that if you want to really and completely turn off Bluetooth and Wi-Fi on iOS 11, you can't do it from the Control Center anymore; you'll have to do it through the Settings app.
https://motherboard.vice.com/en_us/a...h-apple-ios-11





Google Experiment Tests Top 5 Browsers, Finds Safari Riddled With Security Bugs
Catalin Cimpanu

The Project Zero team at Google has created a new tool for testing browser DOM engines and has unleashed it on today's top five browsers, finding the most bugs in Apple's Safari.

The tool — named Domato — is a fuzzer, a security testing toolkit that feeds a software application with random data and analyzes the output for abnormalities.

Google engineer Ivan Fratric created Domato with the goal of fuzzing DOM engines, the browser components that read HTML code and organize it into the DOM (Document Object Model), which is then "painted" and displayed inside the browser window that human users view on their screens.
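
For readers curious what this kind of grammar-driven generation looks like in practice, here is a toy Python sketch that builds random but structurally valid HTML from a handful of rules. The grammar, file names and recursion cap are invented for illustration and are not Domato's actual format; a real harness would load each generated sample in the browser under test while a monitor watches for crashes or memory errors.

import random

# Toy grammar for generating random-but-valid-looking HTML, in the spirit of
# grammar-based DOM fuzzing. These rules are illustrative assumptions only.
GRAMMAR = {
    "document": ["<html><body>{elements}</body></html>"],
    "elements": ["{element}", "{element}{elements}"],
    "element": [
        "<div id='{ident}'>{elements}</div>",
        "<span style='{style}'>{text}</span>",
        "<table><tr><td>{text}</td></tr></table>",
    ],
    "style": ["color:red", "display:none", "position:absolute"],
    "ident": ["a", "b", "c"],
    "text": ["x", "hello", ""],
}

def generate(symbol="document", depth=0):
    """Recursively expand a grammar symbol into a string of HTML."""
    if depth > 6:               # cap recursion so output stays finite
        return ""
    template = random.choice(GRAMMAR[symbol])
    out = ""
    i = 0
    while i < len(template):
        if template[i] == "{":
            j = template.index("}", i)
            out += generate(template[i + 1:j], depth + 1)
            i = j + 1
        else:
            out += template[i]
            i += 1
    return out

if __name__ == "__main__":
    # Each test case would be fed to the DOM engine under test.
    for n in range(3):
        with open("testcase_%d.html" % n, "w") as f:
            f.write(generate())
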
Google: DOM engine bugs should be a priority

Fratric says he focused on DOM engines because it's "a rare case that a vendor will publish a security update that doesn’t contain fixes for at least several DOM engine bugs," showing how prevalent they are today.

He also argues that while Flash bugs provide a cross-browser attack surface, once Flash reaches end-of-life (in 2020), attackers will focus their efforts on DOM engines, the browser's biggest attack surface.

With Domato, he wants to help browser vendors find and patch as many security bugs in their respective DOM engines as possible before it is too late.
Google test finds 17 security bugs in Safari's DOM engine

To prove Domato's capabilities, Fratric took today's top five browsers — Chrome, Firefox, Internet Explorer, Edge, and Safari — and subjected them to 100 million fuzz tests with Domato.

Results showed that Safari had by far the worst DOM engine, with 17 new bugs discovered after Fratric's test. Second was Edge with 6, then IE and Firefox with 4, and last was Chrome with only 2 new issues.

Non-security bugs were ignored, and Fratric also pointed out that if Microsoft hadn't added MemGC (a use-after-free exploit mitigation) to IE and Edge, those browsers would have fared much worse.

Google said it contacted each browser vendor and reported the newly found bugs, and also provided copies of the Domato engine so each vendor can perform more extensive tests.

Fratric has also open-sourced the Domato source code on GitHub and hopes that others adapt it to work on other applications, not just browser DOM engines.

Domato is just the latest fuzzing tool released by Google engineers, who appear to be in love with this technique when it comes to discovering security bugs. Previous tools include OSS-Fuzz and syzkaller.
https://www.bleepingcomputer.com/new...security-bugs/





Hackers Using iCloud's Find My iPhone Feature to Remotely Lock Macs and Demand Ransom Payments
Juli Clover

Over the last day or two, several Mac users appear to have been locked out of their machines after hackers signed into their iCloud accounts and initiated a remote lock using Find My iPhone.

With access to an iCloud user's username and password, Find My iPhone on iCloud.com can be used to "lock" a Mac with a passcode even with two-factor authentication turned on, and that's what's going on here.

Apple allows users to access Find My iPhone without requiring two-factor authentication in case a person's only trusted device has gone missing.

2-factor authentication not required to access Find My iPhone and a user's list of devices.

Affected users who have had their iCloud accounts hacked are receiving messages demanding money for the passcode to unlock a locked Mac device.

Y'all my MacBook been locked and hacked. Someone help me @apple @AppleSupport pic.twitter.com/BE110TMgSv
— Jovan (@bunandsomesauce) September 16, 2017

The usernames and passwords of the iCloud accounts affected by this "hack" were likely found through various site data breaches and have not been acquired through a breach of Apple's servers.

Impacted users likely used the same email addresses, account names, and passwords for multiple accounts, allowing people with malicious intent to figure out their iCloud details.

It's easy to lock a Mac with a passcode in Find My iPhone if you have someone's Apple ID and password.

To prevent an issue like this, Apple users should change their Apple ID passwords, enable two-factor authentication, and never use the same password twice. Products like 1Password, LastPass, and even Apple's own iCloud Keychain are ideal ways to generate and store new passwords for each and every website.
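
As a minimal illustration of the "unique password per site" advice, the Python sketch below uses the standard library's secrets module to generate a fresh random password per account. The length, character set and example site names are arbitrary assumptions; a password manager such as those named above does this, plus storage, for you.

import secrets
import string

def generate_password(length=20):
    """Return a random password drawn from letters, digits and punctuation.

    Uses the cryptographically secure secrets module rather than random.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Generate a distinct password per account, as the advice above recommends.
    for site in ("icloud.example", "shop.example", "forum.example"):
        print(site, generate_password())
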

So a hacker gained access to my iCloud account (despite two-factor authorization) while I was asleep this morning.
— Jason Caffoe (@jcaffoe) September 20, 2017

Users who have had their Macs locked will need to get in contact with Apple Support for assistance with removing the Find My iPhone lock.
https://www.macrumors.com/2017/09/20...mote-mac-lock/





4K iTunes Content Limited to Streaming Only, No Downloads
Juli Clover

Apple has updated its iTunes Store on iOS devices and the Apple TV with plenty of 4K movies ahead of the launch of the Apple TV 4K, but as made clear in a recent support document, 4K content from Apple can be streamed, but not downloaded directly on a device.

According to Apple, customers can download a local copy of an HD movie, and on occasion, HD movies that support HDR and Dolby Vision, but 4K movies are not available for download and thus can't be watched without an internet connection.

You can download a local copy of an HD movie, and you might be able to download HDR and Dolby Vision versions, but you can't download a 4K version.

That means customers who have had their previously purchased iTunes movies upgraded from HD to 4K at no cost can stream those movies in 4K, but can only download the HD versions. Newly purchased 4K titles are likewise limited to streaming.

It's not clear why Apple is not allowing customers to download 4K content onto their devices, but it could potentially be a licensing issue. Apple is providing 4K content at the same price as HD content, though movie studios were rumored to want to charge more. It's also possible it's a local storage issue, as 4K movies have large file sizes.

To stream 4K content to the new Apple TV 4K, Apple recommends a minimum speed of 25 Mbps, according to the support document. If an internet connection isn't fast enough, Apple will downscale the video quality.

In addition to the download restriction, one other major negative surfaced today -- the 4K Apple TV does not support 4K content from YouTube at this time. YouTube streams its 4K content using a VP9 video format, a codec the Apple TV does not support. The 4K Apple TV is limited to H.264, HEVC (H.265), and MP4.

Netflix and 4K content from other streaming services is supported, however, and Apple has promised 4K content from Amazon Prime Video will be available when the app launches later this year.

The first Apple TV 4K orders will begin arriving to customers on Friday, September 22, the official launch date of the device.
https://www.macrumors.com/2017/09/21...treaming-only/





Apple Blocking Ads that Follow Users Around Web is 'Sabotage', Says Industry

New iOS 11 and macOS High Sierra will stop ads following Safari users, prompting open letter claiming Apple is destroying internet’s economic model
Alex Hern

For the second time in as many years, internet advertisers are facing unprecedented disruption to their business model thanks to a new feature in a forthcoming Apple software update.

iOS 11, the latest version of Apple’s operating system for mobile devices, will hit users’ phones and tablets on Tuesday. It will include a new default feature for the Safari web browser dubbed “intelligent tracking prevention”, which prevents certain websites from tracking users around the net, in effect blocking those annoying ads that follow you everywhere you visit.

The tracking prevention system will also arrive on Apple’s computers 25 September, as part of the High Sierra update to macOS. Safari is used by 14.9% of all internet users, according to data from StatCounter.

Six major advertising consortia have already written an open letter to Apple expressing their “deep concern” over the way the change is implemented, and asking the company “to rethink its plan to … risk disrupting the valuable digital advertising ecosystem that funds much of today’s digital content and services”.

Tracking of users around the internet has become crucial to the inner workings of many advertising networks. By using cookies, small text files placed on a computer which were originally created to let sites mark who was logged in, advertisers can build a detailed picture of the browsing history of members of the public, and use that to more accurately profile and target adverts to the right individuals.

Many of these cookies, known as “third-party” cookies because they aren’t controlled by the site that loads them, can be blocked by browsers already. But advertisers also use “first-party” cookies, loaded by a site the user does visit but updated as they move around the net. Blocking those breaks many other aspects of the internet that users expect to work, such as the ability to log into sites using Facebook or Twitter passwords.

To tackle this, the new Safari feature uses a “machine learning model”, Apple says, to identify which first-party cookies are actually desired by users, and which are placed by advertisers. If the latter, the cookie gets blocked from third-party use after a day, and purged completely from the device after a month, drastically limiting the ability of advertisers to keep track of where on the web Safari users visit.
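
A rough Python sketch of that retention timeline is below, purely for illustration: the one-day and 30-day cutoffs come from the paragraph above, while measuring them from the user's last interaction with the domain, and the stubbed-out tracker classifier, are assumptions made for the example rather than a description of Apple's actual implementation.

from datetime import datetime, timedelta

# Once a domain is classified as a tracker, its cookies lose third-party
# access after one day and are purged entirely after 30 days (per the
# description above). The classifier itself is stubbed out here.
THIRD_PARTY_CUTOFF = timedelta(days=1)
PURGE_CUTOFF = timedelta(days=30)

def classified_as_tracker(domain):
    # Placeholder for the on-device model that flags tracking domains.
    return domain.endswith(".ads.example")

def cookie_policy(domain, last_interaction, now):
    """Return what happens to this domain's cookies at time `now`."""
    if not classified_as_tracker(domain):
        return "keep"
    idle = now - last_interaction
    if idle >= PURGE_CUTOFF:
        return "purge"
    if idle >= THIRD_PARTY_CUTOFF:
        return "block third-party use"
    return "keep"

if __name__ == "__main__":
    now = datetime(2017, 9, 23)
    # Three days without interaction: still stored, but no third-party use.
    print(cookie_policy("tracker.ads.example", datetime(2017, 9, 20), now))
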

It is this algorithmic approach which spurred the six US advertising bodies, including the Interactive Advertising Bureau and the Association of National Advertisers, to write to Apple. In their letter, published by AdWeek, the advertisers argue: “The infrastructure of the modern internet depends on consistent and generally applicable standards for cookies, so digital companies can innovate to build content, services and advertising that are personalised for users and remember their visits.

“Apple’s Safari move breaks those standards and replaces them with an amorphous set of shifting rules that will hurt the user experience and sabotage the economic model for the internet.”

Apple responded to the letter saying: “Ad tracking technology has become so pervasive that it is possible for ad tracking companies to recreate the majority of a person’s web browsing history. This information is collected without permission and is used for ad re-targeting, which is how ads follow people around the internet.”

Apple has shown little concern for advertisers’ needs in the past. In 2015, it led that year’s update for iOS with a feature that allowed widespread mobile ad blocking on the platform for the first time. The move arguably kicked off an arms race that led major media companies to increase their use of subscription models, and ceded an ever-increasing portion of the digital advertising market to Facebook and Google, two companies whose models are more resilient to adblocking than many smaller publishers.

Google has also made a move on the adblocking market, testing a built-in adblocker for its Chrome browser, which is used by 54.9% of all internet users according to StatCounter. The feature, which is expected to hit the final release of the browser sometime this year, blocks what the company calls “intrusive ads”: autoplaying video and audio, popovers which block content, or interstitial ads that take up the entire screen.
Unsurprisingly, Google’s own advertising products are not deemed intrusive.
https://www.theguardian.com/technolo...afari-internet





P&G Cuts More Than $100 Million in ‘Largely Ineffective’ Digital Ads

Consumer product giant steers clear of ‘bot’ traffic and objectionable content
Alexandra Bruell and Sharon Terlep

Procter & Gamble Co. said that its move to cut more than $100 million in digital marketing spend in the June quarter had little impact on its business, proving that those digital ads were largely ineffective.

Almost all of the consumer product giant’s advertising cuts in the period came from digital, finance chief Jon Moeller said on its earnings call Thursday. The company targeted ads that could wind up on sites with fake traffic from software known as “bots,” or those with objectionable content.

“What it reflected was a choice to cut spending from a digital standpoint where it was ineffective, where either we were serving bots as opposed to human beings or where the placement of ads was not facilitating the equity of our brands,” he said.

Chief Executive David Taylor said in an interview that the digital spending cuts are part of a bigger push by the company to more quickly halt spending on items -- from ad campaigns to product development programs -- that aren’t working.

“We got some data that said either it was in a bad place or it was not effective,” Mr. Taylor said of the digital cuts. “And we shut it down and said, ‘We’re not going to follow a formula of how much you spend or share of voice. We want every dollar to add value for the consumer or add value for our stakeholders.’”

After cutting back on certain digital ads, “we didn’t see a reduction in the growth rate,” said Mr. Moeller during the call. “What that tells me is that the spending we cut was largely ineffective.”

P&G also said it reduced overhead, agency fee and ad-production costs in the quarter.

P&G, whose brands include Bounty, Crest, Tide and Pampers, spent $2.45 billion on U.S. advertising, not including spending on some digital platforms, according to Kantar Media. Long the biggest advertiser in the world, its pronouncements on trends in ad spending are watched closely.

The company about a year ago said that it would move away from ads on Facebook that target specific consumers, after finding that ultra-niche targeting compromises reach and has limited effectiveness. P&G indicated it wouldn’t pull back on overall Facebook spending.

It’s unclear whether P&G has shifted more spending to other media, including television, as it tweaks its digital spending approach. TV networks have been making an aggressive case that marketers have over-allocated budgets to the dark alleys of digital, and should move ad money back into TV.

The cuts echo marketing executives’ mounting concerns around the efficacy of digital advertising and the growing perception that they are wasting money on digital ads that never reach their intended audience.

P&G, which is facing a proxy fight with activist investor Nelson Peltz, reported a 2% increase in organic revenue in the quarter and full year ended June 30. The company posted a higher profit in the most recent quarter despite a slump in consumer spending.

Mr. Peltz’s Trian Fund Management LP criticized P&G’s cutback on digital spending. P&G’s improved earnings “came as a result of reducing advertising, specifically digital, a tactic we believe will damage the value of the company’s brands if continued in the long term,” the firm said in a statement.

It’s unclear what impact the digital cuts have on P&G’s overall marketing spend.

P&G said it’s committed to advertising that delivers tangible results for its brands.

Personal care brand Always, for example, has seen “a significant increase” in awareness and equity scores since its “Like a Girl” campaign launched a few years ago, said Mr. Taylor on the call. The campaign shed light on gender bias, challenging what it meant to do something “like a girl.”

Mr. Taylor on the call talked about the importance of “having a superior product” that has “a point of view” as more consumers use social media to share their opinions.

Always is among the many brands that have taken on a larger social cause or purpose in their marketing in recent years.

P&G is among the packaged-goods giants tweaking their marketing spending and strategy as they face larger business challenges. Rival Unilever is also undergoing a marketing reorganization, including drastically cutting the number of agencies it works with.

Spending cuts are hurting the ad agencies that rely on business from the big consumer-goods spenders. Interpublic Group of Cos., which owns McCann Worldgroup and IPG Mediabrands, said during its latest earnings call this week that spending cuts by consumer packaged goods clients reduced its revenue in the second quarter by almost 1%.
https://www.wsj.com/articles/p-g-cut...ads-1501191104





Google, Bing, Yahoo! Data Hoarding is Like Homeopathy. It Doesn't Work – New Study Claims

Boffins find search quality unaffected
Thomas Claburn

Data, it has been argued, is the new oil – the fuel for the information economy – but its importance to search engines may be overstated.

In a paper released on Monday through the National Bureau of Economic Research, Lesley Chiou, an associate professor at Occidental College, and Catherine Tucker, a professor at the MIT Sloan School of Management, both in the US, argue that retaining search log data doesn't do much for search quality.

Data retention has implications in the debate over Europe's right to be forgotten, the authors suggest, because retained data undermines that right. It's also relevant to US policy discussions about privacy regulations.

A decade ago, Google changed its search data retention policy for server logs from as long as it wants, to... as long as it wants, with a caveat: the data is identifiable only for the first 18‑24 months, after which it gets anonymized.

It was an issue other search engine providers like Microsoft and Yahoo! had to confront, too.

By 2008, Google had settled on the removal of the last 8 bits of the IP address after nine months, and on more substantive anonymization after 18 months.
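
To make concrete what removing the last 8 bits of an IPv4 address amounts to, here is a minimal Python sketch; it simply zeroes the final octet and is an illustration of the idea, not Google's actual log-scrubbing pipeline.

import ipaddress

def anonymize_ipv4(addr):
    """Zero the last 8 bits of an IPv4 address, i.e. drop the final octet."""
    ip = ipaddress.IPv4Address(addr)
    return str(ipaddress.IPv4Address(int(ip) & 0xFFFFFF00))

print(anonymize_ipv4("203.0.113.87"))   # -> 203.0.113.0
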

At the time, the company said one of its reasons for keeping search logs was "to improve our search algorithms for the benefit of users."

There are other reasons to retain data, such as legal compliance and anti-spam efforts.

But it can be beneficial to avoid keeping too much data around. Data retention turns a company into a magnet for legal requests and represents a liability in the event of hacking. Storage infrastructure also has a cost.

To determine whether retention policies affected the accuracy of search results, Chiou and Tucker used data from metrics biz Hitwise to assess web traffic being driven by search sites.

They looked at Microsoft Bing and Yahoo! Search during a period when Bing changed its search data retention period from 18 months to 6 months and when Yahoo! changed its retention period from 13 months to 3 months, as well as when Yahoo! had second thoughts and shifted to an 18‑month retention period.

According to Chiou and Tucker, data retention periods didn't affect the flow of traffic from search engines to downstream websites.

"Our findings suggest that long periods of data storage do not confer advantages in search quality, which is an often-cited benefit of data retention by companies," their paper states.

Asked via email whether these findings suggest that Google has overstated the value of search log data, Chiou told The Register, "Our study examined retention data policies for Yahoo! and Bing and did not study Google, as Google did not undergo any changes in its retention policy at the time. Our paper does not find evidence that Yahoo!'s and Bing's change conferred an advantage."

Chiou and Tucker observe that the supposed cost of privacy laws to consumers and to companies may be lower than perceived. They also contend that their findings weaken the claim that data retention affects search market dominance, which could make data retention less relevant in antitrust discussions of Google.
https://www.theregister.co.uk/2017/0...search_better/





Franklin Foer's 'World Without Mind' Argues that Silicon Valley Will Lead Us to Our Doom
Steven Zeitchik

To many Americans, large technology firms embody much of what’s good about the modern world. Google holds the key to new depths of knowledge. Amazon is the white-knight savior of impulse shopping. Facebook builds the connective tissue to old friends and colleagues.

Franklin Foer‎ has a different perspective. In his new book, "World Without Mind,” the veteran journalist lays out a more ominous‎ view of where Big Tech would like to take us — in many ways, already has taken us.

Investigating the practices of these digital gatekeepers, he has crafted an anti-Silicon Valley manifesto that, while occasionally slipping into alarmism and get-off-my-lawn-ism, makes a cogently scary case against the influence of U.S. tech firms (but not, crucially, technology itself). Silicon Valley, he argues, may say it wants to improve the world. But its true endgame is the advancement of an ideological agenda. And it's a terrifying one.

By introducing addictive new features, the book says, these companies have made us hopelessly dependent. Once hooked, consumers are robbed of choice, milked for profit, deprived of privacy and made the subjects of stealth social engineering experiments. “We are,” Foer writes, “the screws and rivets in their grand design.”

Those sound like some grandiose claims. Foer supports them — to a point.

The author previously wrote another globalist study through a particular lens, the entertaining and insightful sports social history "How Soccer Explains the World.” He also served as editor of a revamped (until it wasn’t) New Republic.

It was that latter experience that fuels this book — and, clearly, Foer’s pessimism. The prestigious New Republic was bought in 2012 by Facebook co-founder Chris Hughes, who hired Foer in a fit of shared rosy thinking about long-form journalism. But Hughes would a few years later come to embrace Silicon Valley’s principles of efficiency and data, a pivot that ultimately drove out Foer and many longtime writers. That opened up the author’s eyes.

Foer lays out in elaborate detail how the data-driven science of web traffic can hold good journalism hostage, as he says it did at TNR and continues to do elsewhere. He also goes company by company, digital behemoth by digital behemoth, presenting the motivations, methods and mind-sets he says present a threat to individuality.

In some of the more surprising and futuristic sections, he argues that Google's expansions have less to do with new businesses than with a sweeping artificial intelligence-driven ideology meant to reduce human autonomy. (Anyone who has ever found their brains unable to process directions without the help of Google Maps has begun to get a small taste of what will be, in Foer's estimation, a much larger meal.‎)

Or take Amazon, he says, which has subjugated book publishing to its rule by controlling many parts of the distribution chain. The book argues that the company has consolidated so much power that even upstanding journalists, worried about their own books, become afraid to criticize it. (Monopolies form the core of the threat, according to the author, with each of these tech giants dividing up control of different aspects of modern life like a chef carving a roasted chicken.)

The author saves some of his most provocative rhetoric for Facebook. Calling its M.O. a “paternalistic nudging,” he describes a company that treats humans as a giant data set, noting how Facebook employees can run “experiments” on the service's tens of millions of users. The Mark Zuckerberg-led firm, he says, furnishes the illusion of free will and individual identity. But what really compels it is the achievement of certain social outcomes. By manipulating the news feeds of its massive user base, Facebook seeks to do everything from getting preferred political candidates elected (by subtly motivating the Americans who would vote for them) to controlling collective emotions (by adding or removing positive adjectives in feeds). The point is not demonstrated conclusively, but Foer offers a number of smoking guns.

Foer could hardly be called a Luddite: He admits purchasing and owning myriad digital devices over the years and readily acknowledges the improvements they've afforded. But such conveniences mask a dirtier agenda, he argues.‎

“[It’s] chilling to hear [co-founder Larry Page] contemplate how Google will someday employ more than one million people,” Foer writes as he describes the company’s effort to blend humans with machines and dilute the human will. “That’s not just a boast about dominating an industry where he faces no true rivals, it’s a boast about something far vaster, a statement of Google’s intent to impose its values and theological convictions on the world.”

Or, as Foer says of all the companies’ efforts to decode people like a string of data: “They have built their empires by pulverizing privacy; they will further ensconce themselves by pushing boundaries, by taking even more invasive steps that build toward an even more complete portrait of us.”

In its march to Wall Street and pop-cultural dominance, Big Tech has certainly had its prophets of doom — the you-are-not-a-gadget-ism of Jaron Lanier, to take one example. But it has rarely had one like Foer, as much journalist and historian as social critic, who dived into the world a researcher and emerged a partisan. Foer draws on numerous historical tech innovations, from Descartes’ automatons to Western Union’s cozy relationship with the Associated Press, to offer the early templates for modern Big Tech practices.

The narrative also traces the roots of technological innovation in Northern California — particularly 1960s prophet Stewart Brand, who long before Steve Jobs was planting the hippie seeds from which all this has sprung. It is here, he asserts, that countercultural ideals of improvement began morphing into the egoistic conviction that these were the people best placed to decide how to enact them.

Foer also has a knack for finding the aptly revealing quote from a Silicon Valley executive; this is a book interested in petard-hoisting.

“World Without Mind” becomes a little too preoccupied with journalism and creativity, particularly in its latter sections. The effect of certain technology on media can be profound and frightening. But it is hardly the sum of the changes the digital world has wrought. Little time is spent on Wikipedia or crowdfunding, for example, enhancements that would seem to bring less downside. Or coming innovations such as self-driving cars and virtual reality, whose full effects remain unmeasurable but certainly offer their share of upside. Apple gets scant coverage too. Taken in total, it can make Foer seem like a pessimistic cherry-picker.

Some may also find his unifying theories‎ a little too grand. A Google creating a human-challenging AI and an Amazon shipping books at a discount may not be united in conspiracy; their respective consequences may also not be equally significant.‎

But he mostly and persistently, with the zealotry of the companies he derides, builds a strong philosophical case. Like an occupying power dividing up territory, he asserts, Big Tech has imposed its will on the resident population with neither our input nor our permission. These firms have a program: to make the world less private, less individual, less creative, less human.

Are these companies merely the latest wave of capitalist enterprises, slightly drunk on power, yet not fundamentally that different from many of their nontech counterparts? Or is the combination of vast wealth, ambition, know-how and ideological certitude an insidious force — capable, with our love and permission, of bending us to their will?

Foer makes his position clear. Readers may be less certain, but they're certainly left with a lot to fear.

“World Without Mind: The Existential Threat of Big Tech”

Franklin Foer

Penguin Press: 272 pp., $27
http://www.sandiegouniontribune.com/...912-story.html





Facebook’s War On Free Will

How technology is making our minds redundant.
Franklin Foer

All the values that Silicon Valley professes are the values of the 60s. The big tech companies present themselves as platforms for personal liberation. Everyone has the right to speak their mind on social media, to fulfil their intellectual and democratic potential, to express their individuality. Where television had been a passive medium that rendered citizens inert, Facebook is participatory and empowering. It allows users to read widely, think for themselves and form their own opinions.

We can’t entirely dismiss this rhetoric. There are parts of the world, even in the US, where Facebook emboldens citizens and enables them to organise themselves in opposition to power. But we shouldn’t accept Facebook’s self-conception as sincere, either. Facebook is a carefully managed top-down system, not a robust public square. It mimics some of the patterns of conversation, but that’s a surface trait.

In reality, Facebook is a tangle of rules and procedures for sorting information, rules devised by the corporation for the ultimate benefit of the corporation. Facebook is always surveilling users, always auditing them, using them as lab rats in its behavioural experiments. While it creates the impression that it offers choice, in truth Facebook paternalistically nudges users in the direction it deems best for them, which also happens to be the direction that gets them thoroughly addicted. It’s a phoniness that is most obvious in the compressed, historic career of Facebook’s mastermind.

Mark Zuckerberg is a good boy, but he wanted to be bad, or maybe just a little bit naughty. The heroes of his adolescence were the original hackers. These weren’t malevolent data thieves or cyberterrorists. Zuckerberg’s hacker heroes were disrespectful of authority. They were technically virtuosic, infinitely resourceful nerd cowboys, unbound by conventional thinking. In the labs of the Massachusetts Institute of Technology (MIT) during the 60s and 70s, they broke any rule that interfered with building the stuff of early computing, such marvels as the first video games and word processors. With their free time, they played epic pranks, which happened to draw further attention to their own cleverness – installing a living cow on the roof of a Cambridge dorm; launching a weather balloon, which miraculously emerged from beneath the turf, emblazoned with “MIT”, in the middle of a Harvard-Yale football game.

The hackers’ archenemies were the bureaucrats who ran universities, corporations and governments. Bureaucrats talked about making the world more efficient, just like the hackers. But they were really small-minded paper-pushers who fiercely guarded the information they held, even when that information yearned to be shared. When hackers clearly engineered better ways of doing things – a box that enabled free long-distance calls, an instruction that might improve an operating system – the bureaucrats stood in their way, wagging an unbending finger. The hackers took aesthetic and comic pleasure in outwitting the men in suits.

When Zuckerberg arrived at Harvard in the fall of 2002, the heyday of the hackers had long passed. They were older guys now, the stuff of good tales, some stuck in twilight struggles against The Man. But Zuckerberg wanted to hack, too, and with that old-time indifference to norms. In high school he picked the lock that prevented outsiders from fiddling with AOL’s code and added his own improvements to its instant messaging program. As a college sophomore he hatched a site called Facemash – with the high-minded purpose of determining the hottest kid on campus. Zuckerberg asked users to compare images of two students and then determine the better-looking of the two. The winner of each pairing advanced to the next round of his hormonal tournament. To cobble this site together, Zuckerberg needed photos. He purloined those from the servers of the various Harvard houses. “One thing is certain,” he wrote on a blog as he put the finishing touches on his creation, “and it’s that I’m a jerk for making this site. Oh well.”

His brief experimentation with rebellion ended with his apologising to a Harvard disciplinary panel, as well as to campus women’s groups, and mulling strategies to redeem his soiled reputation. In the years since, he has shown that defiance really wasn’t his natural inclination. His distrust of authority was such that he sought out Don Graham, then the venerable chairman of the Washington Post company, as his mentor. After he started Facebook, he shadowed various giants of corporate America so that he could study their managerial styles up close.

Still, Zuckerberg’s juvenile fascination with hackers never died – or rather, he carried it forward into his new, more mature incarnation. When he finally had a corporate campus of his own, he procured a vanity address for it: One Hacker Way. He designed a plaza with the word “HACK” inlaid into the concrete. In the centre of his office park, he created an open meeting space called Hacker Square. This is, of course, the venue where his employees join for all-night Hackathons. As he told a group of would-be entrepreneurs, “We’ve got this whole ethos that we want to build a hacker culture.”

Plenty of companies have similarly appropriated hacker culture – hackers are the ur-disrupters – but none have gone as far as Facebook. By the time Zuckerberg began extolling the virtues of hacking, he had stripped the name of most of its original meaning and distilled it into a managerial philosophy that contains barely a hint of rebelliousness. Hackers, he told one interviewer, were “just this group of computer scientists who were trying to quickly prototype and see what was possible. That’s what I try to encourage our engineers to do here.” To hack is to be a good worker, a responsible Facebook citizen – a microcosm of the way in which the company has taken the language of radical individualism and deployed it in the service of conformism.

Zuckerberg claimed to have distilled that hacker spirit into a motivational motto: “Move fast and break things.” The truth is that Facebook moved faster than Zuckerberg could ever have imagined. His company was, as we all know, a dorm-room lark, a thing he ginned up in a Red Bull–induced fit of sleeplessness. As his creation grew, it needed to justify its new scale to its investors, to its users, to the world. It needed to grow up fast. Over the span of its short life, the company has caromed from self-description to self-description. It has called itself a tool, a utility and a platform. It has talked about openness and connectedness. And in all these attempts at defining itself, it has managed to clarify its intentions.

Though Facebook will occasionally talk about the transparency of governments and corporations, what it really wants to advance is the transparency of individuals – or what it has called, at various moments, “radical transparency” or “ultimate transparency”. The theory holds that the sunshine of sharing our intimate details will disinfect the moral mess of our lives. With the looming threat that our embarrassing information will be broadcast, we’ll behave better. And perhaps the ubiquity of incriminating photos and damning revelations will prod us to become more tolerant of one another’s sins. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly,” Zuckerberg has said. “Having two identities for yourself is an example of a lack of integrity.”

The point is that Facebook has a strong, paternalistic view on what’s best for you, and it’s trying to transport you there. “To get people to this point where there’s more openness – that’s a big challenge. But I think we’ll do it,” Zuckerberg has said. He has reason to believe that he will achieve that goal. With its size, Facebook has amassed outsized powers. “In a lot of ways Facebook is more like a government than a traditional company,” Zuckerberg has said. “We have this large community of people, and more than other technology companies we’re really setting policies.”

Without knowing it, Zuckerberg is the heir to a long political tradition. Over the last 200 years, the west has been unable to shake an abiding fantasy, a dream sequence in which we throw out the bum politicians and replace them with engineers – rule by slide rule. The French were the first to entertain this notion in the bloody, world-churning aftermath of their revolution. A coterie of the country’s most influential philosophers (notably, Henri de Saint-Simon and Auguste Comte) were genuinely torn about the course of the country. They hated all the old ancient bastions of parasitic power – the feudal lords, the priests and the warriors – but they also feared the chaos of the mob. To split the difference, they proposed a form of technocracy – engineers and assorted technicians would rule with beneficent disinterestedness. Engineers would strip the old order of its power, while governing in the spirit of science. They would impose rationality and order.

This dream has captivated intellectuals ever since, especially Americans. The great sociologist Thorstein Veblen was obsessed with installing engineers in power and, in 1921, wrote a book making his case. His vision briefly became a reality. In the aftermath of the first world war, American elites were aghast at all the irrational impulses unleashed by that conflict – the xenophobia, the racism, the urge to lynch and riot. And when the realities of economic life had grown so complicated, how could politicians possibly manage them? Americans of all persuasions began yearning for the salvific ascendance of the most famous engineer of his time: Herbert Hoover. In 1920, Franklin D Roosevelt – who would, of course, go on to replace him in 1932 – organised a movement to draft Hoover for the presidency.

The Hoover experiment, in the end, hardly realised the happy fantasies about the Engineer King. A very different version of this dream, however, has come to fruition, in the form of the CEOs of the big tech companies. We’re not ruled by engineers, not yet, but they have become the dominant force in American life – the highest, most influential tier of our elite.

There’s another way to describe this historical progression. Automation has come in waves. During the industrial revolution, machinery replaced manual workers. At first, machines required human operators. Over time, machines came to function with hardly any human intervention. For centuries, engineers automated physical labour; our new engineering elite has automated thought. They have perfected technologies that take over intellectual processes, that render the brain redundant. Or, as the former Google and Yahoo executive Marissa Mayer once argued, “You have to make words less human and more a piece of the machine.” Indeed, we have begun to outsource our intellectual work to companies that suggest what we should learn, the topics we should consider, and the items we ought to buy. These companies can justify their incursions into our lives with the very arguments that Saint-Simon and Comte articulated: they are supplying us with efficiency; they are imposing order on human life.

Nobody better articulates the modern faith in engineering’s power to transform society than Zuckerberg. He told a group of software developers, “You know, I’m an engineer, and I think a key part of the engineering mindset is this hope and this belief that you can take any system that’s out there and make it much, much better than it is today. Anything, whether it’s hardware or software, a company, a developer ecosystem – you can take anything and make it much, much better.” The world will improve, if only Zuckerberg’s reason can prevail – and it will.

The precise source of Facebook’s power is algorithms. That’s a concept repeated dutifully in nearly every story about the tech giants, yet it remains fuzzy at best to users of those sites. From the moment of the algorithm’s invention, it was possible to see its power, its revolutionary potential. The algorithm was developed in order to automate thinking, to remove difficult decisions from the hands of humans, to settle contentious debates.

The essence of the algorithm is entirely uncomplicated. Textbooks compare algorithms to recipes – a series of precise steps that can be followed mindlessly. This is different from equations, which have one correct result. Algorithms merely capture the process for solving a problem and say nothing about where those steps ultimately lead.

These recipes are the crucial building blocks of software. Programmers can’t simply order a computer to, say, search the internet. They must give the computer a set of specific instructions for accomplishing that task. These instructions must take the messy human activity of looking for information and transpose that into an orderly process that can be expressed in code. First do this … then do that. The process of translation, from concept to procedure to code, is inherently reductive. Complex processes must be subdivided into a series of binary choices. There’s no equation to suggest a dress to wear, but an algorithm could easily be written for that – it will work its way through a series of either/or questions (morning or night, winter or summer, sun or rain), with each choice pushing to the next.
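
A toy version of that dress-picking algorithm, written here in Python purely for illustration (the questions and answers are invented), shows how a chain of either/or choices, rather than any equation, produces the result.

def suggest_dress(time_of_day, season, weather):
    """Toy either/or dress-picking algorithm as described above.

    Each binary question narrows the options until an answer falls out.
    """
    if season == "winter":
        if weather == "rain":
            return "wool dress with a raincoat"
        return "wool dress with a coat"
    # summer
    if weather == "rain":
        return "light dress with an umbrella"
    if time_of_day == "night":
        return "evening dress"
    return "sundress"

print(suggest_dress("morning", "summer", "sun"))   # -> "sundress"
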

For the first decades of computing, the term “algorithm” wasn’t much mentioned. But as computer science departments began sprouting across campuses in the 60s, the term acquired a new cachet. Its vogue was the product of status anxiety. Programmers, especially in the academy, were anxious to show that they weren’t mere technicians. They began to describe their work as algorithmic, in part because it tied them to one of the greatest of all mathematicians – the Persian polymath Muhammad ibn Musa al-Khwarizmi, or as he was known in Latin, Algoritmi. During the 12th century, translations of al-Khwarizmi introduced Arabic numerals to the west; his treatises pioneered algebra and trigonometry. By describing the algorithm as the fundamental element of programming, the computer scientists were attaching themselves to a grand history. It was a savvy piece of name-dropping: See, we’re not arriviste, we’re working with abstractions and theories, just like the mathematicians!

There was sleight of hand in this self-portrayal. The algorithm may be the essence of computer science – but it’s not precisely a scientific concept. An algorithm is a system, like plumbing or a military chain of command. It takes knowhow, calculation and creativity to make a system work properly. But some systems, like some armies, are much more reliable than others. A system is a human artefact, not a mathematical truism. The origins of the algorithm are unmistakably human, but human fallibility isn’t a quality that we associate with it. When algorithms reject a loan application or set the price for an airline flight, they seem impersonal and unbending. The algorithm is supposed to be devoid of bias, intuition, emotion or forgiveness.

Silicon Valley’s algorithmic enthusiasts were immodest about describing the revolutionary potential of their objects of affection. Algorithms were always interesting and valuable, but advances in computing made them infinitely more powerful. The big change was the cost of computing: it collapsed, just as the machines themselves sped up and were tied into a global network. Computers could stockpile massive piles of unsorted data – and algorithms could attack this data to find patterns and connections that would escape human analysts. In the hands of Google and Facebook, these algorithms grew ever more powerful. As they went about their searches, they accumulated more and more data. Their machines assimilated all the lessons of past searches, using these learnings to more precisely deliver the desired results.

For the entirety of human existence, the creation of knowledge was a slog of trial and error. Humans would dream up theories of how the world worked, then would examine the evidence to see whether their hypotheses survived or crashed upon their exposure to reality. Algorithms upend the scientific method – the patterns emerge from the data, from correlations, unguided by hypotheses. They remove humans from the whole process of inquiry. Writing in Wired, Chris Anderson, then editor-in-chief, argued: “We can stop looking for models. We can analyse the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”

On one level, this is undeniable. Algorithms can translate languages without understanding words, simply by uncovering the patterns that undergird the construction of sentences. They can find coincidences that humans might never even think to seek. Walmart’s algorithms found that people desperately buy strawberry Pop-Tarts as they prepare for massive storms.

Still, even as an algorithm mindlessly implements its procedures – and even as it learns to see new patterns in the data – it reflects the minds of its creators, the motives of its trainers. Amazon and Netflix use algorithms to make recommendations about books and films. (One-third of purchases on Amazon come from these recommendations.) These algorithms seek to understand our tastes, and the tastes of like-minded consumers of culture. Yet the algorithms make fundamentally different recommendations. Amazon steers you to the sorts of books that you’ve seen before. Netflix directs users to the unfamiliar. There’s a business reason for this difference. Blockbuster movies cost Netflix more to stream. Greater profit arrives when you decide to watch more obscure fare. Computer scientists have an aphorism that describes how algorithms relentlessly hunt for patterns: they talk about torturing the data until it confesses. Yet this metaphor contains unexamined implications. Data, like victims of torture, tells its interrogator what it wants to hear.

Like economics, computer science has its preferred models and implicit assumptions about the world. When programmers are taught algorithmic thinking, they are told to venerate efficiency as a paramount consideration. This is perfectly understandable. An algorithm with an ungainly number of steps will gum up the machinery, and a molasses-like server is a useless one. But efficiency is also a value. When we speed things up, we’re necessarily cutting corners; we’re generalising.

Algorithms can be gorgeous expressions of logical thinking, not to mention a source of ease and wonder. They can track down copies of obscure 19th-century tomes in a few milliseconds; they put us in touch with long-lost elementary school friends; they enable retailers to deliver packages to our doors in a flash. Very soon, they will guide self-driving cars and pinpoint cancers growing in our innards. But to do all these things, algorithms are constantly taking our measure. They make decisions about us and on our behalf. The problem is that when we outsource thinking to machines, we are really outsourcing thinking to the organisations that run the machines.

Mark Zuckerberg disingenuously poses as a friendly critic of algorithms. That’s how he implicitly contrasts Facebook with his rivals across the way at Google. Over in Larry Page’s shop, the algorithm is king – a cold, pulseless ruler. There’s not a trace of life force in its recommendations, and very little apparent understanding of the person keying a query into its engine. Facebook, in his flattering self-portrait, is a respite from this increasingly automated, atomistic world. “Every product you use is better off with your friends,” he says.

What he is referring to is Facebook’s news feed. Here’s a brief explanation for the sliver of humanity who have apparently resisted Facebook: the news feed provides a reverse chronological index of all the status updates, articles and photos that your friends have posted to Facebook. The news feed is meant to be fun, but also geared to solve one of the essential problems of modernity – our inability to sift through the ever-growing, always-looming mounds of information. Who better, the theory goes, to recommend what we should read and watch than our friends? Zuckerberg has boasted that the News Feed turned Facebook into a “personalised newspaper”.

Unfortunately, our friends can do only so much to winnow things for us. Turns out, they like to share a lot. If we just read their musings and followed links to articles, we might be only a little less overwhelmed than before, or perhaps even deeper underwater. So Facebook makes its own choices about what should be read. The company’s algorithms sort the thousands of things a Facebook user could possibly see down to a smaller batch of choice items. And then within those few dozen items, it decides what we might like to read first.

Algorithms are, by definition, invisibilia. But we can usually sense their presence – that somewhere in the distance, we’re interacting with a machine. That’s what makes Facebook’s algorithm so powerful. Many users – 60%, according to the best research – are completely unaware of its existence. But even if they know of its influence, it wouldn’t really matter. Facebook’s algorithm couldn’t be more opaque. It has grown into an almost unknowable tangle of sprawl. The algorithm interprets more than 100,000 “signals” to make its decisions about what users see. Some of these signals apply to all Facebook users; some reflect users’ particular habits and the habits of their friends. Perhaps Facebook no longer fully understands its own tangle of algorithms – the code, all 60m lines of it, is a palimpsest, where engineers add layer upon layer of new commands.
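
To make the idea of "signals" a little more concrete, here is a deliberately tiny sketch, in Python, of how signal-weighted ranking can work in principle. The signals, weights and numbers are invented for illustration only; nothing here reflects Facebook's actual code or model, which is not public.

from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how often the viewer interacts with this friend (invented signal)
    predicted_clicks: float  # a model's estimate that the viewer will engage (invented signal)
    is_video: bool
    flagged_as_hoax: bool

WEIGHTS = {"author_affinity": 3.0, "predicted_clicks": 5.0, "video_boost": 1.5}

def score(post: Post) -> float:
    # Combine a handful of signals into a single ranking score.
    s = (WEIGHTS["author_affinity"] * post.author_affinity
         + WEIGHTS["predicted_clicks"] * post.predicted_clicks)
    if post.is_video:
        s *= WEIGHTS["video_boost"]   # "promote video rather than text"
    if post.flagged_as_hoax:
        s *= 0.1                      # "banish what it deems to be hoaxes"
    return s

candidate_posts = [
    Post(0.8, 0.6, False, False),
    Post(0.2, 0.9, True, False),
    Post(0.5, 0.7, False, True),
]
feed = sorted(candidate_posts, key=score, reverse=True)  # highest-scoring items shown first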

Pondering the abstraction of this algorithm, imagine one of those earliest computers with its nervously blinking lights and long rows of dials. To tweak the algorithm, the engineers turn the knob a click or two. The engineers are constantly making small adjustments here and there, so that the machine performs to their satisfaction. With even the gentlest caress of the metaphorical dial, Facebook changes what its users see and read. It can make our friends’ photos more or less ubiquitous; it can punish posts filled with self-congratulatory musings and banish what it deems to be hoaxes; it can promote video rather than text; it can favour articles from the likes of the New York Times or BuzzFeed, if it so desires. Or if we want to be melodramatic about it, we could say Facebook is constantly tinkering with how its users view the world – always tinkering with the quality of news and opinion that it allows to break through the din, adjusting the quality of political and cultural discourse in order to hold the attention of users for a few more beats.

But how do the engineers know which dial to twist and how hard? There’s a whole discipline, data science, to guide the writing and revision of algorithms. Facebook has a team, poached from academia, to conduct experiments on users. It’s a statistician’s sexiest dream – some of the largest data sets in human history, the ability to run trials on mathematically meaningful cohorts. When Cameron Marlow, the former head of Facebook’s data science team, described the opportunity, he began twitching with ecstatic joy. “For the first time,” Marlow said, “we have a microscope that not only lets us examine social behaviour at a very fine level that we’ve never been able to see before, but allows us to run experiments that millions of users are exposed to.”

Facebook likes to boast about the fact of its experimentation more than the details of the actual experiments themselves. But there are examples that have escaped the confines of its laboratories. We know, for example, that Facebook sought to discover whether emotions are contagious. To conduct this trial, Facebook attempted to manipulate the mental state of its users. For one group, Facebook excised the positive words from the posts in the news feed; for another group, it removed the negative words. Each group, it concluded, wrote posts that echoed the mood of the posts it had reworded. This study was roundly condemned as invasive, but it is not so unusual. As one member of Facebook’s data science team confessed: “Anyone on that team could run a test. They’re always trying to alter people’s behaviour.”

There’s no doubting the emotional and psychological power possessed by Facebook – or, at least, Facebook doesn’t doubt it. It has bragged about how it increased voter turnout (and organ donation) by subtly amping up the social pressures that compel virtuous behaviour. Facebook has even touted the results from these experiments in peer-reviewed journals: “It is possible that more of the 0.60% growth in turnout between 2006 and 2010 might have been caused by a single message on Facebook,” said one study published in Nature in 2012. No other company has made claims like this about its ability to shape democracy – and for good reason. It’s too much power to entrust to a corporation.

The many Facebook experiments add up. The company believes that it has unlocked social psychology and acquired a deeper understanding of its users than they possess of themselves. Facebook can predict users’ race, sexual orientation, relationship status and drug use on the basis of their “likes” alone. It’s Zuckerberg’s fantasy that this data might be analysed to uncover the mother of all revelations, “a fundamental mathematical law underlying human social relationships that governs the balance of who and what we all care about”. That is, of course, a goal in the distance. In the meantime, Facebook will keep probing – constantly testing to see what we crave and what we ignore, a never-ending campaign to improve Facebook’s capacity to give us the things that we want and things we don’t even know we want. Whether the information is true or concocted, authoritative reporting or conspiratorial opinion, doesn’t really seem to matter much to Facebook. The crowd gets what it wants and deserves.

The automation of thinking: we’re in the earliest days of this revolution, of course. But we can see where it’s heading. Algorithms have retired many of the bureaucratic, clerical duties once performed by humans – and they will soon begin to replace more creative tasks. At Netflix, algorithms suggest the genres of movies to commission. Some news wires use algorithms to write stories about crime, baseball games and earthquakes – the most rote journalistic tasks. Algorithms have produced fine art and composed symphonic music, or at least approximations of them.

It’s a terrifying trajectory, especially for those of us in these lines of work. If algorithms can replicate the process of creativity, then there’s little reason to nurture human creativity. Why bother with the tortuous, inefficient process of writing or painting if a computer can produce something seemingly as good and in a painless flash? Why nurture the overinflated market for high culture when it could be so abundant and cheap? No human endeavour has resisted automation, so why should creative endeavours be any different?

The engineering mindset has little patience for the fetishisation of words and images, for the mystique of art, for moral complexity or emotional expression. It views humans as data, components of systems, abstractions. That’s why Facebook has so few qualms about performing rampant experiments on its users. The whole effort is to make human beings predictable – to anticipate their behaviour, which makes them easier to manipulate. With this sort of cold-blooded thinking, so divorced from the contingency and mystery of human life, it’s easy to see how long-standing values begin to seem like an annoyance – why a concept such as privacy would carry so little weight in the engineer’s calculus, why the inefficiencies of publishing and journalism seem so imminently disruptable.

Facebook would never put it this way, but algorithms are meant to erode free will, to relieve humans of the burden of choosing, to nudge them in the right direction. Algorithms fuel a sense of omnipotence, the condescending belief that our behaviour can be altered, without our even being aware of the hand guiding us, in a superior direction. That’s always been a danger of the engineering mindset, as it moves beyond its roots in building inanimate stuff and begins to design a more perfect social world. We are the screws and rivets in the grand design.
https://www.theguardian.com/technolo...r-on-free-will





Facebook to Turn Over Thousands of Russian Ads to Congress, Reversing Decision
Craig Timberg, Carol D. Leonnig and Elizabeth Dwoskin

Facebook on Thursday announced it would turn over to Congress copies of more than 3,000 politically themed advertisements bought through Russian accounts during the 2016 U.S. presidential campaign, reversing a decision that had frustrated lawmakers.

The company has been struggling for months to address the steadily mounting evidence that Russians manipulated the social media platform in their bid to tip the presidential election in favor of Republican Donald Trump.

Democratic lawmakers in recent days had demanded that Facebook be more open about what it knows and to dig more deeply into its troves of data to analyze the propaganda effort, which the company has acknowledged involved at least 470 fake accounts and pages created by a shadowy Russian company that spent more than $100,000 targeting U.S. voters. Lawmakers particularly wanted copies of the ads bought through the fake accounts, some of which Facebook officials showed to Hill investigators and then took away, making further study impossible. The company said sharing the ads would compromise the privacy of users.

Facebook chief executive Mark Zuckerberg announced a reversal of that decision Thursday, saying that the company believed it could share the ads with Congress without compromising user privacy. The company already had shared at least some of the same information with special counsel Robert S. Mueller III.

“I care deeply about the democratic process and protecting its integrity,” Zuckerberg said on Facebook Live, a video streaming service provided by the company. “Facebook’s mission is all about giving people a voice and bringing people closer together. Those are deeply democratic values, and we’re proud of them. I don’t want anyone to use our tools to undermine democracy. That’s not what we stand for.”

The company has been slow to respond to signs, dating back to November, that Russians used Facebook and other technology platforms to deliver propaganda and manipulate voter sentiment. As evidence has grown, including from Facebook’s own internal investigations, lawmakers have pushed the company and others to search more deeply and more quickly for answers — both to determine what happened in 2016 and to head off a repeat in future elections.

Hill investigators are still seeking a longer, more-detailed version of an investigative report into election meddling on the platform that Facebook concluded in April. A 13-page final version was released publicly that month but without many of the details included in earlier drafts, which were several times longer, say people familiar with the investigation. The public report made no explicit mention of Russia, nor did it discuss the possibility that the propaganda may have included messages delivered through advertising — the core of Facebook’s multi-billion-dollar business.

The steps Zuckerberg announced Thursday — which included efforts to improve the review of political ads and enhance transparency about who buys them — drew some praise from lawmakers who recently had expressed frustration with the company.

“This is an iconic company in many ways, but they really rely on the trust of their users. I think the steps they took today were important and necessary. But there are still a lot of questions,” said Sen. Mark R. Warner (D-Va.), the highest-ranking Democrat on the Senate Intelligence Committee.

Warner said he believes the full impact of the 470 fake Russian accounts and pages remains unknown and likely has been played down by Facebook. He said more than 3,000 ads could have reached tens of thousands of users through their Facebook “friends,” causing a dramatic influence on an election narrowly won by Trump in several states. Warner said Facebook still needs to do more intense investigation into other ways that its platform was manipulated.

“Americans ought to be able to see the content of ads that are used for and against candidates,” Warner said. “Americans both need to know what happened in the election of 2016, and have confidence going forward that if they see an ad it isn’t sponsored by a foreign government.”

His House Intelligence Committee counterpart, Rep. Adam B. Schiff (D-Calif.), said in a statement: “The data Facebook will now turn over to the Committee should help us better understand what happened, beyond the preliminary briefings we already received. It will be important for the Committee to scrutinize how rigorous Facebook’s internal investigation has been, to test its conclusions and to understand why it took as long as it did to discover the Russian sponsored advertisements and what else may yet be uncovered.”

Republicans on the committee did not have any immediate comment.

The halting response from Facebook has resulted from a combination of pressures on Zuckerberg, who has been reluctant to impose restrictions on users’ free speech but has also come to accept that stronger measures are necessary against abuse, say people familiar with his thinking.

“What we’re seeing is Mark’s idealism coming up against the hard realization that they were hacked in this way that nobody was expecting,” said Tim O’Reilly, chief executive of Silicon Valley publisher O’Reilly Media. “He is acknowledging that online media enables targeting in ways that are impossible for broadcast media to do – so the tools of disclosure need to actually be more transparent.”

Facebook is not alone in drawing the attention of investigators. U.S. intelligence agencies have portrayed a broad propaganda campaign by the Russians, and numerous independent researchers have detailed evidence of propaganda flowing through Google, Twitter and other tech platforms. Lawmakers have called for their full cooperation and made clear that tech executives should be prepared to testify before investigative committees on Capitol Hill.

But Facebook has drawn particular attention in the weeks since it announced, on Sept. 6, that the Internet Research Agency, a notorious troll farm based in St. Petersburg, Russia, had purchased political ads through Facebook.

Zuckerberg, in his remarks, also vowed to continue investigating and cooperating with federal authorities, including on unanswered questions involving the possible involvement of other Russian groups and those in former Soviet states. Facebook will begin requiring that political ads make clear what accounts have bought them and also what other ads the account is running elsewhere on the social media platform, Zuckerberg said.

The company also plans to expand its team working on election integrity and build better connections with electoral officials in many nations.

“We are in a new world,” Zuckerberg said. “It is a new challenge for internet communities to deal with nation states attempting to subvert elections. But if that’s what we must do, we are committed to rising to the occasion.”

Tom Hamburger and Dana Priest contributed to this report.
https://www.washingtonpost.com/busin...4c2_story.html





Hackers Hid Backdoor In CCleaner Security App With 2 Billion Downloads -- 2.3 Million Infected
Thomas Fox-Brewster

Users of Avast-owned security application CCleaner for Windows have been advised to update their software immediately, after researchers discovered criminal hackers had installed a backdoor in the tool. The tainted application allows for download of further malware, be it ransomware or keyloggers, with fears millions are affected. According to Avast's own figures, 2.27 million ran the affected software, though the company said users should not panic.

The affected app, CCleaner, is a maintenance and file clean-up software run by a subsidiary of anti-virus giant Avast. It has 2 billion downloads and claims to be getting 5 million extra a week, making the threat particularly severe, researchers at Cisco Talos warned. Comparing it to the NotPetya ransomware outbreak, which spread after a Ukrainian accounting app was infected, the researchers discovered the threat on September 13 after CCleaner 5.33 caused Talos systems to flag malicious activity.

Further investigation found the CCleaner download server was hosting the backdoored app as far back as September 11. Talos warned in a blog Monday that the affected version was released on August 15, but on September 12 an untainted version 5.34 was released. For weeks, then, the malware was spreading inside supposedly legitimate security software.

The malware would send encrypted information about the infected computer - the name of the computer, installed software and running processes - back to the hackers' server. The hackers also used what's known as a domain generation algorithm (DGA); whenever the crooks' server went down, the DGA could create new domains to receive and send stolen data. Use of DGAs shows some sophistication on the part of the attackers.
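
As a rough illustration of what a domain generation algorithm does, here is a minimal Python sketch. It is a generic example, not the algorithm used in the CCleaner attack: the seed, hashing scheme and domain format are all invented for demonstration.

import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5) -> list:
    # Both the malware and its operators can derive the same candidate
    # domains offline, so the operators only need to register one of them
    # to re-establish a command-and-control channel.
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(material).hexdigest()
        domains.append(digest[:12] + ".com")   # pseudo-random hostname
    return domains

print(generate_domains("example-campaign", date.today()))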

Downplaying the threat?

CCleaner's owner, Avast-owned Piriform, has sought to ease concerns. Paul Yung, vice president of product at Piriform, wrote in a post Monday: "Based on further analysis, we found that the 5.33.6162 version of CCleaner and the 1.07.3191 version of CCleaner Cloud was illegally modified before it was released to the public, and we started an investigation process.

"The threat has now been resolved in the sense that the rogue server is down, other potential servers are out of the control of the attacker.

"Users of CCleaner Cloud version 1.07.3191 have received an automatic update. In other words, to the best of our knowledge, we were able to disarm the threat before it was able to do any harm."

Not all are convinced by the claims of Piriform, acquired by Avast in July. "I have a feeling they are downplaying it indeed," said Martijn Grooten, editor of security publication Virus Bulletin. Of Piriform's claim that it had no evidence of much wrongdoing by the hacker, Grooten added: "As I read the Cisco blog, there was a backdoor that could have been used for other purposes.

"This is pretty severe. Of course, it may be that they really only stole ... 'non-sensitive data' ... but it could be useful in follow-up targeted attacks against specific users."

In its blog, Talos' researchers concluded: "This is a prime example of the extent that attackers are willing to go through in their attempt to distribute malware to organizations and individuals around the world. By exploiting the trust relationship between software vendors and the users of their software, attackers can benefit from users' inherent trust in the files and web servers used to distribute updates."

Avast CTO: No need to panic

Avast chief technology officer Ondrej Vlcek said there was, however, little reason to panic. He told Forbes the company used its Avast security tool to scan machines on which the affected CCleaner app was installed (in 30 per cent of Avast installs, CCleaner was also resident on the PC). That led to the conclusion that the attackers hadn't launched the second phase of their attack to cause more harm to victims.

"2.27 million is certainly a large number, so we're not downplaying in any way. It's a serious incident. But based on all the knowledge, we don't think there's any reason for users to panic," Vlcek added. "To the best of our knowledge, the second-stage payload never activated... It was prep for something bigger, but it was stopped before the attacker got the chance." He said Cisco Talos wasn't the first to notify Avast of the issues, another unnamed third party was.

It's unclear just who was behind the attacks. Yung said the company wouldn't speculate on how the attack happened or possible perpetrators. For now, any concerned users should head to the Piriform website to download the latest software.
https://www.forbes.com/sites/thomasb.../#d0bbf87316a8





CCleaner Malware Outbreak is Much Worse than it First Appeared

Microsoft, Cisco, and VMWare among those infected with additional mystery payload.
Dan Goodin

The recent CCleaner malware outbreak is much worse than it initially appeared, according to newly unearthed evidence. That evidence shows that the CCleaner malware infected at least 20 computers from a carefully selected list of high-profile technology companies with a mysterious payload.

Previously, researchers found no evidence that any of the computers infected by the booby-trapped version of the widely used CCleaner utility had received a second-stage payload the backdoor was capable of delivering. The new evidence—culled from data left on a command-and-control server during the last four days attackers operated it—shows otherwise. Of 700,000 infected PCs, 20 of them, belonging to highly targeted companies, received the second stage, according to an analysis published Wednesday by Cisco Systems' Talos Group.

Because the CCleaner backdoor was active for 31 days, the total number of infected computers is "likely at least in the order of hundreds," researchers from Avast, the antivirus company that acquired CCleaner in July, said in their own analysis published Thursday.

From September 12 to September 16, the highly advanced second stage was reserved for computers inside 20 companies or Web properties, including Cisco, Microsoft, Gmail, VMware, Akamai, Sony, and Samsung. The 20 computers that installed the payload were from eight of those targeted organizations, Avast said, without identifying which ones. Again, because the data covers only a small fraction of the time the backdoor was active, both Avast and Talos believe the true number of targets and victims was much bigger.

More fileless malware

The second stage appears to use a completely different control network. The complex code is heavily obfuscated and uses anti-debugging and anti-emulation tricks to conceal its inner workings. Craig Williams, a senior technology leader and global outreach manager at Talos, said the code contains a "fileless" third stage that's injected into computer memory without ever being written to disk, a feature that further makes analysis difficult. Researchers are in the process of reverse engineering the payload to understand precisely what it does on infected networks.

"When you look at this software package, it's very well developed," Williams told Ars. "This is someone who spent a lot of money with a lot of developers perfecting it. It's clear that whoever made this has used it before and is likely going to use it again."

Stage one of the malware collected a wide assortment of information from infected computers, including a list of all installed programs, all running processes, the operating-system version, hardware information, whether the user had administrative rights, and the hostname and domain name associated with the system. Combined, the information would allow attackers not only to further infect computers belonging to a small set of targeted organizations, but it would also ensure the later-stage payload is stable and undetectable.

Now that it's known the CCleaner backdoor actively installed a payload that went undetected for more than a month, Williams renewed his advice that people who installed the 32-bit version of CCleaner 5.33.6162 or CCleaner Cloud 1.07.3191 reformat their hard drives. He said simply removing the stage-one infection is insufficient given the proof now available that the second stage can survive and remain stealthy.

The group behind the attack remains unknown. Talos was able to confirm an observation, first made by AV provider Kaspersky Lab, that some of the code in the CCleaner backdoor overlaps with a backdoor used by a hacking group known both as APT 17 and Group 72. Researchers have tied this group to people in China. Talos also noticed that the command server set the time zone to one in the People's Republic of China. Williams warned, however, that attackers may have deliberately left the evidence behind as a "false flag" intended to mislead investigators about the true origin of the attack.

The CCleaner campaign is at least the third in two months to work by attacking developers of legitimate software used and trusted by a large or influential base of users. The NotPetya ransomware worm in July was seeded after attackers infected M.E.Doc, a developer of a tax-accounting application that's widely used in Ukraine. The attackers then caused the company's update mechanism to spread the ransomware. Last month, network-management software used by more than 100 banks worldwide was infected with a powerful backdoor after the tool developer, NetSarang, was hacked. Such supply-chain infections are concerning, because they work against people who do nothing more than install legitimate updates from trusted vendors.

The picture coming into focus now looks serious. Attackers gained control of the digital signing certificate and infrastructure used to distribute a software utility downloaded more than 2 billion times. They maintained that control with almost absolute stealth for 31 days, and, during just four days of that span, they infected 700,000 computers. Of the 700,000 infected PCs—again, believed to be a fraction of the total number of compromises during the campaign—a highly curated number of them received an advanced second-stage payload that researchers still don't understand. It's almost inevitable that more shoes will drop in this unfolding story.
https://arstechnica.com/information-...irst-appeared/





Steam Inventory Helper Reportedly Spies On Users

If you installed the Steam Inventory Helper on your computer, you may want to uninstall it as soon as possible: recent reports suggest this extension, used to buy and sell digital goods on Steam, is spying on its users.

Redditor Wartab made a thorough analysis of the tool and reached the following conclusions:

• The spyware code tracks your every move starting from the moment you visit a website until you leave. It also tracks where you are coming from on the site.
• Steam Inventory Helper tracks your clicks, including when you are moving your mouse and when you are focusing on an input.
• When you click a link, it sends the link’s URL to a background script.
• Fortunately, the code does not monitor what you type.

This spyware appears to collect data about gamers for promotional purposes.

Steam spyware

Here’s what Wartab wrote on Reddit:

I have just analyzed the current code of Steam Inventory Helper. Step by step what it does:

On every single page you visit, SIH executes code at document_start (meaning as soon as the page is opened). It even executes on your about:blank page and in all sub-frames on the currently visited site! The code executed is js/common/frame.js […]

What this script does is very nasty. First of all, it monitors EVERY SINGLE HTTP request you make. […] It will then send to their own server a summary of this HTTP request if some condition is met (promoteButter?). […]

Bottom line is: they are monitoring what sites you visit and may be sending a lot of your online activity to their own server. I couldn’t figure out when they do it, yet, but it seems to be for promotional stuff. More importantly, in the future, even if what they do now is legit, you will not be informed about any changes to their permissions, because it basically already has every permission it can get in that regard. Therefore I strongly suggest uninstalling and reporting this extension.

Steam has yet to issue any comment on this matter.

Users fear Steam could go into full tracking mode

This entire debacle has unsurprisingly let many users down and put them on edge, fearing that Steam could include something harmless to obtain the permission at first and then enable a full-on tracking mode in a later update. However, it’s not likely Steam would ever resort to such a move considering the negative reaction to the spyware that was just discovered.

As a result of this news, many users have decided to uninstall many other Steam-related apps and extensions, fearing these programs might be spying on them as well. All in all, this revelation deals a devastating blow to Steam’s reputation.

Back to the user data privacy debate

There have been many revelations concerning the breaching of user data lately, like reports revealing that Netgear routers collect analytics data, Windows 10 Enterprise ignores user privacy settings, and the infamous advanced NSA backdoor infection that affected tens of thousands of Windows computers.

How can users prevent companies and other organizations from collecting data on their digital behavior and preferences? It seems that in the era of no privacy, VPN software could be the answer. However, any tool claiming to protect your online privacy isn’t 100% bulletproof and ultimately, there is no perfect solution to this quandary. With an increasing number of users becoming more aware and subsequently concerned about their data privacy, we’re sure developers will focus more on creating software that helps them defend the integrity of user privacy.

Just like the antivirus industry boomed after malware started to invade the internet, we’re sure companies that develop user data privacy software will become very successful in the future. No matter where you fall — consumer, developer, or hacker — the battle to keep user data private has only just begun.
http://windowsreport.com/steam-spyware/





Security Researchers Warn that GO Keyboard is Spying on Millions of Android Users
Mark Wycislik-Wilson

Security researchers from Adguard have issued a warning that the popular GO Keyboard app is spying on users. Produced by Chinese developers GOMO Dev Team, GO Keyboard was found to be transmitting personal information about users back to remote servers, as well as "using a prohibited technique to download dangerous executable code."

Adguard made the discovery while conducting research into the traffic consumption and unwanted behavior of various Android keyboards. The AdGuard for Android app makes it possible to see exactly what traffic an app is generating, and it showed that GO Keyboard was making worrying connections, making use of trackers, and sharing personal information.

Adguard notes that there are two versions of the keyboard in Google Play which it claims have more than 200 million users in total. GO Keyboard - Emoji keyboard, Swipe input, GIFs has a user rating of 4.5 stars; the very similarly-named GO Keyboard - Emoticon keyboard, Free Theme, GIF has a rating of 4.4 stars. Both versions of the app are still being updated.

Within the app description, the developers say:

PRIVACY and security
We will never collect your personal info including credit card information. In fact, we cares for privacy of what you type and who you type! [sic]

But Adguard points out that this is contradicted by the company's privacy policy. In addition to this, GO Keyboard shares personal information right after installation, communicates with dozens of tracking servers, and has access to sensitive data on the phone. Adguard concedes that this is fairly typical for modern apps, but goes on to say that the app violates Google Play policies.

In the Malicious Behavior section of the Developer Policy Center, Google says that "apps that steal a user’s authentication information (such as usernames or passwords) or that mimic other apps or websites to trick users into disclosing personal or authentication information" are not permitted.

This is activity, Adguard says, that GO Keyboard engages in:

Without explicit user consent, the GO keyboard reports to its servers your Google account email in addition to language, IMSI, location, network type, screen size, Android version and build, device model, etc.

Google's policies also ban the practice of downloading "executable code, such as dex files or native code, from a source other than Google Play." Again, Adguard found that this is exactly what GO Keyboard is doing -- downloading and executing code from a remote server. Adguard notes that:

Some of the downloaded plugins are marked as Adware or PUP by multiple AV engines.

Adguard has reported its findings to Google, and says that the permissions used by the app are extra cause for concern:

What's important, given the apps' extensive permissions, remote code execution introduces severe security and privacy risks. At any time the server owner may decide to change the app behavior and not just steal your email address, but do literally whatever he or she wants. Remember, it's a keyboard, and every important bit of information you enter goes through it!

We informed Google of these violations and are waiting for their reaction. Whatever their decision is, we find this behavior unacceptable and dangerous. Having 200+ Million users does not make an app trustworthy. Do not blindly trust mobile apps and always check their privacy policy and what permissions do they require before the installation.
https://betanews.com/2017/09/21/go-k...pying-warning/





Western Digital Ships 12 TB WD Gold HDD: 8 Platters and Helium
Anton Shilov

Western Digital has begun to ship its WD Gold HDD with 12 TB capacity to partners and large retailers. The 3.5” drive relies on the same platform as the HGST Ultrastar He12 launched this year, and will initially be available to select customers of the company. The WD Gold 12 TB is designed for enterprise workloads and has all the performance and reliability enhancements that we have come to expect, but the availability at retail should make it accessible to wider audiences.

From a hardware point of view, the WD Gold 12 TB is similar to the HGST Ultrastar He12 12 TB hard drive: both are based on the fourth-generation HelioSeal technology that uses eight perpendicular magnetic recording platters with a 1.5 TB capacity for each platter. The internal architecture of both HDDs was redesigned compared to predecessors to accommodate the eighth platter. Since the WD Gold and the Ultrastar He12 are aimed at nearline enterprise environments, they are equipped with various sensors and technologies to protect themselves against vibration and as a result, guarantee sustained performance. For example, the WD Gold and the Ultrastar He12 attach their spindles both to the top and the bottom of the drives. In addition the HDDs feature a special technology that increases the accuracy of head positioning in high-vibration environments to improve performance, integrity, and reliability. Finally, both product families support TLER (time-limited error recovery) rebuild assist mode to speed up RAID recovery time.

Since the WD Gold 12 TB and the HGST Ultrastar He12 are similar internally and feature the same 7200 RPM spindle speed, they also have similar performance — the manufacturer puts them both at 255 MB/s sustained transfer rate and 4.16 ms average latency. The main difference between the WD Gold and the HGST Ultrastar He12 are the enterprise options for the latter: there are models with the SAS 12 Gb/s interface and there are models with SED support and Instant Secure Erase feature.

Western Digital aims its WD Gold and HGST Ultrastar He-series drives at operators of cloud and exascale data centers that demand maximum capacity. The 12 TB HDDs can increase the total storage capacity for a single rack from 2400 TB to 2880 TB, replacing 10 TB drives with 12 TB drives, which can be a major benefit for companies that need to maximize their storage capacity per watt and per square meter. Whereas the HGST-branded drives are made available primarily through B2B channels, the WD Gold drives are sold through both B2B and B2C channels and thus can be purchased by wider audiences. For example, boutique PC makers, as well as DIY enthusiasts, may start using the WD Gold 12 TB for their high-end builds, something they could not do with the HGST drives. These HDDs may be considered overkill for desktops, but since WD’s desktop offerings top out at 6 TB, the WD Gold (and a perhaps inevitable future WD Red Pro 12 TB) is WD’s closest rival to Seagate’s BarraCuda Pro drives.

The WD Gold HDD is currently available directly from Western Digital for $521.99 as well as from multiple retailers, including Newegg for $539.99. While over $500 for a hard drive is expensive, it is actually less than Western Digital charged for its WD Gold 8 TB about 1.5 years ago ($595) and considerably less than the initial price of the WD Gold 10 TB drive last April.
http://www.anandtech.com/show/11842/...latters-helium





Rolling Stone, Once a Counterculture Bible, Will Be Put Up for Sale
Sydney Ember

From a loft in San Francisco in 1967, a 21-year-old named Jann S. Wenner started a magazine that would become the counterculture bible for baby boomers. Rolling Stone defined cool, cultivated literary icons and produced star-making covers that were such coveted real estate they inspired a song.

But the headwinds buffeting the publishing industry, and some costly strategic missteps, have steadily taken a financial toll on Rolling Stone, and a botched story three years ago about an unproven gang rape at the University of Virginia badly bruised the magazine’s journalistic reputation.

And so, after a half-century reign that propelled him into the realm of the rock stars and celebrities who graced his covers, Mr. Wenner is putting his company’s controlling stake in Rolling Stone up for sale, relinquishing his hold on a publication he has led since its founding.

Mr. Wenner had long tried to remain an independent publisher in a business favoring size and breadth. But he acknowledged in an interview last week that the magazine he had nurtured would face a difficult, uncertain future on its own.

“I love my job, I enjoy it, I’ve enjoyed it for a long time,” said Mr. Wenner, 71. But letting go, he added, was “just the smart thing to do.”

The sale plans were devised by Mr. Wenner’s 27-year-old son, Gus, who has aggressively pared down the assets of Rolling Stone’s parent company, Wenner Media, in response to financial pressures. The Wenners recently sold the company’s other two magazines, Us Weekly and Men’s Journal. And last year, they sold a 49 percent stake in Rolling Stone to BandLab Technologies, a music technology company based in Singapore.

Both Jann and Gus Wenner, the president and chief operating officer of Wenner Media, said they intended to stay on at Rolling Stone. But they said they also recognized that the decision could ultimately be up to the new owner.

Still, the potential sale of Rolling Stone — on the eve of its 50th anniversary, no less — underscores how inhospitable the media landscape has become as print advertising and circulation have dried up.

“There’s a level of ambition that we can’t achieve alone,” Gus Wenner said last week in an interview at the magazine’s headquarters in Midtown Manhattan. “So we are being proactive and want to get ahead of the curve.”

“Publishing is a completely different industry than what it was,” he added. “The trends go in one direction, and we are very aware of that.”

The Wenners’ decision is also another clear sign that the days of celebrity editors are coming to a close. Earlier this month, Graydon Carter, the editor of Vanity Fair and a socialite and star in his own right, announced he planned to leave the magazine after 25 years. Robbie Myers, the longtime editor of Elle, Nancy Gibbs of Time magazine and Cindi Leive of Glamour also said last week that they were stepping down.

Anthony DeCurtis, a veteran music critic and a longtime Rolling Stone contributing editor, said he never thought Jann Wenner would sell Rolling Stone.

“That sense of the magazine editor’s hands on the magazine — that’s what’s going to get lost here,” he said. “I don’t know who’s going to be able to step in and do that anymore.”

Wenner Media has hired bankers to explore its sale, but the process is just beginning. BandLab’s stake in the company could also complicate matters. Neither Jann nor Gus Wenner would name any potential buyers, but one possible suitor is American Media Inc., the magazine publisher led by David J. Pecker that has already taken Us Weekly and Men’s Journal off Wenner Media’s hands.

The Wenners said that they expected a range of opportunities, and Jann Wenner said he hoped to find a buyer that understood Rolling Stone’s mission and that had “lots of money.”

“Rolling Stone has played such a role in the history of our times, socially and politically and culturally,” he said. “We want to retain that position.”

Jann Wenner tried his hand at other magazines over the decades, including the outdoor lifestyle magazine Outside and Family Life. But it was Rolling Stone that helped guide, and define, a generation.

“Who lives through the ’60s, ’70s, ’80s and ’90s and cannot be somehow wistful at this moment?” said Terry McDonell, a former top editor at Rolling Stone who also ran other Wenner magazines.

Rolling Stone filled its pages with pieces that ran in the thousands of words by standard bearers of the counterculture, including Hunter S. Thompson — whose “Fear and Loathing in Las Vegas” was published in the magazine in two parts — and Tom Wolfe. It started the career of the celebrity photographer Annie Leibovitz, who for many years delivered electrifying cover images, including an iconic photograph in 1981 of a naked John Lennon curled in a fetal position with Yoko Ono.

Music coverage in all of its forms — news, interviews, reviews — was the core of Rolling Stone, but its influence also stretched into pop culture, entertainment and politics. A bastion of liberal ideology, the magazine became a required stop for Democratic presidential candidates — Mr. Wenner has personally interviewed several, including Bill Clinton and Barack Obama — and it has pulled no punches in its appraisal of Republicans. In 2006, Rolling Stone suggested George W. Bush was the “worst president in history.” More recently, the magazine featured Justin Trudeau, the prime minister of Canada, on its cover with the headline, “Why Can’t He Be Our President?”

The magazine also published widely acclaimed political stories, including one in 2009 on Goldman Sachs by the writer Matt Taibbi, who famously described the company as “a great vampire squid wrapped around the face of humanity.” The next year, the magazine ran a piece with the headline, “The Runaway General,” that ended the career of Gen. Stanley A. McChrystal.

But that was perhaps the last Rolling Stone cover piece that gained significant journalistic acclaim. And the magazine’s reputation as a tastemaker for the music world had long since eroded, as Mr. Wenner clung to the past with covers that featured artists from his generation, even as younger artists emerged. Artists like Paul McCartney, Bruce Springsteen and Bob Dylan have continued to secure cover spots in recent years.

Rolling Stone suffered a devastating blow to its reputation when it retracted a debunked 2014 article about a gang rape at the University of Virginia. A damning report on the story by the Columbia Graduate School of Journalism cited fundamental journalistic failures. The article prompted three libel lawsuits against Rolling Stone, one of which led to a highly publicized trial last year that culminated with a federal jury awarding the plaintiff $3 million in damages.

The financial picture had also been bleak. In 2001, Jann Wenner sold a 50 percent stake in Us Weekly to the Walt Disney Company for $40 million, then borrowed $300 million five years later to buy back the stake. The deal saddled the company with debt for more than a decade, preventing it from investing as much as it might have in its magazines.

At the same time, Rolling Stone’s print advertising revenue and newsstand sales fell. And as readers increasingly embraced the web for their news and entertainment, Mr. Wenner remained skeptical, with a stubbornness that hamstrung his company.

Wenner Media was already a small magazine publisher. But the sale of Us Weekly and Men’s Journal, which together brought in roughly three-quarters of Wenner Media’s revenue, has left it further diminished.

Regardless, the sale of Rolling Stone would be Jann Wenner’s denouement, capping his unlikely rise from dope-smoking Berkeley dropout to silver-haired media mogul. An admirer of John Lennon and publishing mavens like William Randolph Hearst, Mr. Wenner — who invested $7,500 of borrowed money to start Rolling Stone along with his mentor, Ralph J. Gleason — was at turns idealist and desperado, crafting his magazine into a guide for the counterculture epoch while also gallivanting with superstars. He once boasted that he had turned down a $500 million offer for Rolling Stone, more than he could ever dream of getting for the magazine today. (BandLab invested $40 million to acquire its 49-percent stake in the magazine last year.)

Though he said he still cared deeply about Rolling Stone, Mr. Wenner has placed the magazine’s fate firmly in Gus’s hands, and he appears content to let someone else determine its path forward.

“I think it’s time for young people to run it,” he said.

Sitting in his second-floor office surrounded by a collection of rock ’n’ roll artifacts, Gus Wenner expressed hope that a new owner would provide the resources Rolling Stone needed to evolve and survive.

“It’s what we need to do as a business,” he said. “It’s what we need to do to grow the brand.”

Then, as only someone who had spent his life around rock ’n’ roll could, he gestured confidently to a tome of Bob Dylan lyrics on his desk. “If you’re not busy being born,” Mr. Wenner said, “then you’re busy dying.”

Ben Sisario contributed reporting.
https://www.nytimes.com/2017/09/17/b...zine-sale.html





One Year After Bricking Third-Party Ink With Update, HP Is Back on Its Bullshit
Sam Rutherford

The astronomical prices printer makers charge for cartridges have long been a favorite subject of internet comedians (with more than one noting that printer ink is now more valuable than gold), so it came as a bit of a surprise when HP actually made some concessions after pushing out an update that bricked unofficial ink cartridges last year. Unfortunately, it seems HP has now returned to its iron-fisted ways, once again locking down the use of third-party ink with a software update.

Last September, HP decided to push out Keurig-like DRM, preventing customers from using certain printers after inserting third-party ink cartridges until they replaced them with official tanks from HP. After more than 10,000 people joined the Electronic Frontier Foundation to complain about it, HP eventually backed down, apologizing for not properly “communicating about the authentication procedure to customers” and issuing an optional update to remove the “security” feature.

But according to ghacks.net, a new firmware update for HP Officejet printers released yesterday appears to be identical to the reviled DRM update released exactly one year ago. When you try to use third-party ink after installing the new/old firmware, you apparently run into an error that says “One or more cartridges appear to be damaged. Remove them and replace with new cartridges.” Depending on how many cartridges your specific printer uses, it may be possible to insert one or two without getting an error. But it seems when all of the ink cartridge slots are filled up, the warning message will be displayed again.

The new firmware reportedly affects printers from HP’s OfficeJet 6800 series, OfficeJet Pro 6200 series, OfficeJet Pro X 450 series, OfficeJet Pro 8600 series and more. We have reached out to HP for comment and will update this article if and when we hear back.

However, not all hope is lost, as there is a way to disable the Dynamic Security feature that prevents third-party ink cartridges from being used. If you go to HP’s support page here, you can find your printer model and download a different version of your printer’s firmware that doesn’t have issues with third-party ink. After you install the alternate firmware, you’ll then want to block any new firmware from being installed to prevent this situation from happening again in the future.

Or you could say fuck it, and get a printer from another company like Canon or Epson, which feature printers with refillable reservoirs instead of ridiculous replacement cartridges.
https://gizmodo.com/one-year-after-b...-hp-1809073739





Artificial Intelligence Just Made Guessing Your Password a Whole Lot Easier
Matthew Hutson

Last week, the credit reporting agency Equifax announced that malicious hackers had leaked the personal information of 143 million people in their system. That’s reason for concern, of course, but if a hacker wants to access your online data by simply guessing your password, you’re probably toast in less than an hour. Now, there’s more bad news: Scientists have harnessed the power of artificial intelligence (AI) to create a program that, combined with existing tools, figured out more than a quarter of the passwords from a set of more than 43 million LinkedIn profiles. Yet the researchers say the technology may also be used to beat baddies at their own game.

The work could help average users and companies measure the strength of passwords, says Thomas Ristenpart, a computer scientist who studies computer security at Cornell Tech in New York City but was not involved with the study. “The new technique could also potentially be used to generate decoy passwords to help detect breaches.”

The strongest password guessing programs, John the Ripper and hashCat, use several techniques. One is simple brute force, in which they randomly try lots of combinations of characters until they get the right one. But other approaches involve extrapolating from previously leaked passwords and probability methods to guess each character in a password based on what came before. On some sites, these programs have guessed more than 90% of passwords. But they’ve required many years of manual coding to build up their plans of attack.
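
For illustration, here is a toy Python sketch of the two strategies described above: exhaustive brute force, and a simple character-level model trained on previously leaked passwords. It is far cruder than John the Ripper or hashCat, and the sample "leak" is invented; it is shown only to make the ideas concrete.

import itertools
from collections import Counter, defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def brute_force(max_len: int = 4):
    # Enumerate every combination of characters up to max_len.
    for length in range(1, max_len + 1):
        for combo in itertools.product(ALPHABET, repeat=length):
            yield "".join(combo)

def train_markov(leaked_passwords):
    # Count which character tends to follow which in leaked passwords.
    model = defaultdict(Counter)
    for pw in leaked_passwords:
        for prev, nxt in zip("^" + pw, pw):   # "^" marks the start of a password
            model[prev][nxt] += 1
    return model

def markov_guess(model, length: int = 8) -> str:
    # Build one guess by always picking the most likely next character.
    guess, prev = "", "^"
    for _ in range(length):
        if not model[prev]:
            break
        nxt = model[prev].most_common(1)[0][0]
        guess += nxt
        prev = nxt
    return guess

leaked = ["password1", "dragon123", "letmein99"]   # stand-in for a real leak
print(next(brute_force()), markov_guess(train_markov(leaked)))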

The new study aimed to speed this up by applying deep learning, a brain-inspired approach at the cutting edge of AI. Researchers at Stevens Institute of Technology in Hoboken, New Jersey, started with a so-called generative adversarial network, or GAN, which comprises two artificial neural networks. A “generator” attempts to produce artificial outputs (like images) that resemble real examples (actual photos), while a “discriminator” tries to detect real from fake. They help refine each other until the generator becomes a skilled counterfeiter.

Giuseppe Ateniese, a computer scientist at Stevens and paper co-author, compares the generator and discriminator to a police sketch artist and an eyewitness, respectively; the sketch artist is trying to produce something that can pass as an accurate portrait of the criminal. GANs have been used to make realistic images, but have not been applied much to text.
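
The adversarial setup can be sketched in a few lines of PyTorch. The toy example below trains on random 1-D data rather than password text and is not PassGAN; it only shows the generator/discriminator loop the researchers describe, with all network sizes and data invented for illustration.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 8

# Generator: turns random noise into candidate "fake" samples.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs a probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = torch.randn(1024, data_dim) * 0.5 + 2.0   # stand-in for "real examples"

for step in range(1000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_data[torch.randint(0, 1024, (64,))]
    fake = generator(torch.randn(64, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator say "real".
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()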

The Stevens team created a GAN it called PassGAN and compared it with two versions of hashCat and one version of John the Ripper. The scientists fed each tool tens of millions of leaked passwords from a gaming site called RockYou, and asked them to generate hundreds of millions of new passwords on their own. Then they counted how many of these new passwords matched a set of leaked passwords from LinkedIn, as a measure of how successful they’d be at cracking them.
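
The evaluation itself is conceptually simple: count what fraction of the held-out LinkedIn passwords appear among a tool's generated guesses. A minimal Python sketch, with hypothetical file names standing in for the researchers' actual data sets, might look like this.

def coverage(generated_path: str, target_path: str) -> float:
    # Fraction of the target (held-out) passwords that appear among the guesses.
    with open(target_path, encoding="utf-8", errors="ignore") as f:
        targets = set(line.strip() for line in f)
    matched = set()
    with open(generated_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            guess = line.strip()
            if guess in targets:
                matched.add(guess)
    return len(matched) / len(targets)

# Hypothetical usage: coverage("passgan_guesses.txt", "linkedin_leak.txt") -> e.g. 0.12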

On its own, PassGAN generated 12% of the passwords in the LinkedIn set, whereas its three competitors generated between 6% and 23%. But the best performance came from combining PassGAN and hashCat. Together, they were able to crack 27% of passwords in the LinkedIn set, the researchers reported this month in a draft paper posted on arXiv. Even failed passwords from PassGAN seemed pretty realistic: saddracula, santazone, coolarse18.

Using GANs to help guess passwords is “novel,” says Martin Arjovsky, a computer scientist who studies the technology at New York University in New York City. The paper “confirms that there are clear, important problems where applying simple machine learning solutions can bring a crucial advantage,” he says.

Still, Ristenpart says, “It’s unclear to me if one needs the heavy machinery of GANs to achieve such gains.” Perhaps even simpler machine learning techniques could have assisted hashCat just as much, he says. (Arjovsky concurs.) Indeed, an efficient neural net produced by Carnegie Mellon University in Pittsburgh, Pennsylvania, recently showed promise, and Ateniese plans to compare it directly with PassGAN before submitting his paper for peer review.

Ateniese says that though in this pilot demonstration PassGAN gave hashCat an assist, he’s “certain” that future iterations could surpass hashCat. That’s in part because hashCat uses fixed rules and was unable to produce more than 650 million passwords on its own. PassGAN, which invents its own rules, can create passwords indefinitely. “It’s generating millions of passwords as we speak,” he says. Ateniese also says PassGAN will improve with more layers in the neural networks and training on many more leaked passwords.

He compares PassGAN to AlphaGo, the Google DeepMind program that recently beat a human champion at the board game Go using deep learning algorithms. “AlphaGo was devising new strategies that experts had never seen before,” Ateniese says. “So I personally believe that if you give enough data to PassGAN, it will be able to come up with rules that humans cannot think about.”

And if you’re worried about your own security, experts suggest ways to create strong passwords—such as by making them long (but still easy to remember)—and using two-step authentication.
https://www.sciencemag.org/news/2017...ole-lot-easier





Wikileaks Releases Documents it Claims Detail Russia Mass Surveillance Apparatus
Natasha Lomas

Wikileaks has released a new cache of documents which it claims detail surveillance apparatus used by the Russian state to spy on Internet and mobile users. It’s the first time the organization has leaked (what it claims is) material directly pertaining to the Russian state.

As ever, nothing is straightforward when it comes to Wikileaks. And founder Julian Assange continues to face charges that his ‘radical transparency’ organization is a front for Kremlin agents (charges that stepped up after Wikileaks released a massive trove of hacked emails from the DNC last year at a key moment in the U.S. presidential election).

So it’s entirely possible Wikileaks/Assange is here trying to deflect from such charges by finally dumping something on Russia.

Safe to say the Twitter arguments are already breaking out (e.g. see this tweet comment thread).

And it’s not possible at this point to verify the veracity and/or value of the documents Wikileaks is releasing here.

Spy Files Russia

In its summary of the cache of mostly Russian-language documents, Wikileaks claims they show how a long-established Russian company that supplies software to telcos is also installing, under state mandate, infrastructure that enables Russian state agencies to tap into, search and spy on citizens’ digital activity. That suggests a state-funded mass surveillance program similar to those run by the NSA in the U.S. or GCHQ in the U.K. (both of which were detailed in the 2013 Snowden disclosures).

RELEASE: Spy Files #Russia https://t.co/CJMQVrNXef #SORM #FSB pic.twitter.com/QZPKY0HEWx

— WikiLeaks (@wikileaks) September 19, 2017

The documents which Wikileaks has published (there are just 34 “base documents” in this leak) relate to a St. Petersburg-based company, called Peter-Service, which it claims is a contractor for Russian state surveillance. The company was set up in 1992 to provide billing solutions before going on to become a major supplier of software to the mobile telecoms industry.

Wikileaks writes:

The technologies developed and deployed by PETER-SERVICE today go far beyond the classical billing process and extend into the realms of surveillance and control. Although compliance to the strict surveillance laws is mandatory in Russia, rather than being forced to comply PETER-SERVICE appears to be quite actively pursuing partnership and commercial opportunities with the state intelligence apparatus.

As a matter of fact PETER-SERVICE is uniquely placed as a surveillance partner due to the remarkable visibility their products provide into the data of Russian subscribers of mobile operators, which expose to PETER-SERVICE valuable metadata, including phone and message records, device identifiers (IMEI, MAC addresses), network identifiers (IP addresses), cell tower information and much more. This enriched and aggregated metadata is of course of interest to Russian authorities, whose access became a core component of the system architecture.

One of Wikileaks’ initially stated media partners for the release, the Italian newspaper La Repubblica, (which has since been removed from the media partners’ list and replaced with a different Italian publication’s name — so, er, working with Assange must surely be a lol a minute… ) reports that the documents cover “an extended timespan from 2007 to June 2015”, and describes the contents as “extremely technical”.

It also has a few caveats, noting the documents do not mention Russia’s spy agency, the FSB, but rather “speak only of state agencies”, a formula it asserts “certainly includes law enforcement, who use metadata for legal interception”.

It also says the documents do “not clarify what other state apparatus accesses those data through the solution of the St. Petersburg company”.

Wikileaks says that under Russian law operators must maintain a Data Retention System (DRS), which can store data for up to three years. La Repubblica reports that Peter-Service’s DRS stores telephone traffic data and “allows Russian state agencies to query the database of all stored data in search of information”, which it specifies can include the calls made by a given operator’s customer, the payment systems used, and the cell phone numbers a user has called.

“The manuals published by WikiLeaks contain the images of interfaces that allow you to search within these huge data fields, so access is simple and intuitive,” it adds.

According to Wikileaks, Peter-Service’s DRS solution can handle 500,000,000 connections per day in a single cluster, and the claimed average search time for subscriber-related records from a single day is ten seconds. “State intelligence authorities use the Protocol 538 adapter built into the DRS to access stored information,” it adds.

Peter-Service has also apparently developed a tool called TDM (Traffic Data Mart) — which allows the database to be queried to determine “where users’ data traffic is stored in order to understand visited sites, forums, social media”, as well as how much time is spent on a certain site and the electronic device used to access it.

Wikileaks describes TDM as “a system that records and monitors IP traffic for all mobile devices registered with the operator”, and says it maintains a list of categorized domain names — “which cover all areas of interest for the state. These categories include blacklisted sites, criminal sites, blogs, webmail, weapons, botnet, narcotics, betting, aggression, racism, terrorism and many more”.

“Based on the collected information the system allows the creation of reports for subscriber devices (identified by IMEI/TAC, brand, model) for a specified time range: Top categories by volume, top sites by volume, top sites by time spent, protocol usage (browsing, mail, telephony, bittorrent) and traffic/time distribution,” it adds.
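Purely as an illustration of the kind of per-device reporting described above (the record fields and numbers below are invented for the example, not taken from the leaked documents), such reports amount to grouping flow metadata and summing a column:

from collections import defaultdict

# Hypothetical flow-metadata records for one subscriber device.
records = [
    {"device": "IMEI-1", "site": "forum.example.ru", "category": "blogs",   "bytes": 120000, "seconds": 300},
    {"device": "IMEI-1", "site": "mail.example.ru",  "category": "webmail", "bytes":  40000, "seconds":  90},
    {"device": "IMEI-1", "site": "forum.example.ru", "category": "blogs",   "bytes":  80000, "seconds": 150},
]

def top_by(records, group_key, value_key, n=5):
    """Sum value_key per group_key and return the n largest groups."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec[group_key]] += rec[value_key]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_by(records, "category", "bytes"))   # top categories by traffic volume
print(top_by(records, "site", "seconds"))     # top sites by time spent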

Wikileaks points to a 2013 Peter-Service slideshow presentation (which it says also appears to be publicly available on the company’s website) that it claims is targeted not at telco customers but at state entities such as Russia’s FSB and Interior Ministry, despite the document apparently being in the public domain. In it, the company focuses on a new product called DPI*GRID, a hardware device for Deep Packet Inspection that takes the form of “black boxes” apparently able to handle 10Gb/s of traffic per unit.

“The national providers are aggregating Internet traffic in their infrastructure and are redirecting/duplicating the full stream to DPI*GRID units,” writes Wikileaks. “The units inspect and analyse traffic (the presentation does not describe that process in much detail); the resulting metadata and extracted information are collected in a database for further investigation. A similar, yet smaller solution called MDH/DRS is available for regional providers who send aggregated IP traffic via a 10Gb/s connection to MDH for processing.”

Wikileaks also makes a point of noting that the presentation was written “just a few months after Edward Snowden disclosed the NSA mass surveillance program and its cooperation with private U.S. IT-corporations such as Google and Facebook”.

“Drawing specifically on the NSA Prism program, the presentation offers law enforcement, intelligence and other interested parties, to join an alliance in order to establish equivalent data-mining operations in Russia,” it adds — sticking its boot firmly back into U.S. government mass surveillance programs.
https://techcrunch.com/2017/09/19/wi...nce-apparatus/





Distrustful U.S. Allies Force Spy Agency to Back Down in Encryption Fight
Joseph Menn

An international group of cryptography experts has forced the U.S. National Security Agency to back down over two data encryption techniques it wanted set as global industry standards, reflecting deep mistrust among close U.S. allies.

In interviews and emails seen by Reuters, academic and industry experts from countries including Germany, Japan and Israel worried that the U.S. electronic spy agency was pushing the new techniques not because they were good encryption tools, but because it knew how to break them.

The NSA has now agreed to drop all but the most powerful versions of the techniques - those least likely to be vulnerable to hacks - to address the concerns.

The dispute, which has played out in a series of closed-door meetings around the world over the past three years and has not been previously reported, turns on whether the International Organization for Standardization (ISO) should approve two NSA data encryption techniques, known as Simon and Speck.

The U.S. delegation to the ISO on encryption issues includes a handful of NSA officials, though it is controlled by an American standards body, the American National Standards Institute (ANSI).

The presence of the NSA officials and former NSA contractor Edward Snowden’s revelations about the agency’s penetration of global electronic systems have made a number of delegates suspicious of the U.S. delegation’s motives, according to interviews with a dozen current and former delegates.

A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to “insert vulnerabilities into commercial encryption systems.”

More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a “back door” into coded transmissions, according to the interviews and emails and other documents seen by Reuters.

“I don’t trust the designers,” Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden’s papers. “There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards.”

The NSA, which does not confirm the authenticity of any Snowden documents, told Reuters it developed the new encryption tools to protect sensitive U.S. government computer and communications equipment without requiring a lot of computer processing power.

NSA officials said via email they want commercial technology companies that sell to the government to use the techniques, and that is more likely to happen when they have been designated a global standard by the ISO.

Asked if it could beat Simon and Speck encryption, the NSA officials said: “We firmly believe they are secure.”

THE CASE OF THE DUAL ELLIPTIC CURVE

ISO, an independent organization with delegations from 162 member countries, sets standards on everything from medical packaging to road signs. Its working groups can spend years picking best practices and technologies for an ISO seal of approval.

As the fight over Simon and Speck played out, the ISO twice voted to delay the multi-stage process of approving them.

In oral and written comments, opponents cited the lack of peer-reviewed publication by the creators, the absence of industry adoption or a clear need for the new ciphers, and the partial success of academics in showing their weaknesses.

Some ISO delegates said much of their skepticism stemmed from the 2000s, when NSA experts invented a component for encryption called Dual Elliptic Curve and got it adopted as a global standard.

ISO’s approval of Dual EC was considered a success inside the agency, according to documents passed by Snowden to the founders of the online news site The Intercept, which made them available to Reuters. The documents said the agency guided the Dual EC proposal through four ISO meetings until it emerged as a standard.

In 2007, mathematicians in private industry showed that Dual EC could hide a back door, theoretically enabling the NSA to eavesdrop without detection. After the Snowden leaks, Reuters reported that the U.S. government had paid security company RSA $10 million to include Dual EC in a software development kit that was used by programmers around the world.

The ISO and other standards groups subsequently retracted their endorsements of Dual EC. The NSA declined to discuss it.

In the case of Simon and Speck, the NSA says the formulas are needed for defensive purposes. But the official who led the now-disbanded NSA division responsible for defense, known as the Information Assurance Directorate, said his unit did not develop Simon and Speck.

“There are probably some legitimate questions around whether these ciphers are actually needed,” said Curtis Dukes, who retired earlier this year. Similar encryption techniques already exist, and the need for new ones is theoretical, he said.

ANSI, the body that leads the U.S. delegation to the ISO, said it had simply forwarded the NSA proposals to the organization and had not endorsed them.

FROM JAIPUR TO HAMILTON

When the United States first introduced Simon and Speck as a proposed ISO standard in 2014, experts from several countries expressed reservations, said Shin’ichiro Matsuo, the head of the Japanese encryption delegation.

Some delegates had no objection. Chris Mitchell, a member of the British delegation, said he supported Simon and Speck, noting that “no one has succeeded in breaking the algorithms.” He acknowledged, though, that after the Dual EC revelations, “trust, particularly for U.S. government participants in standardization, is now non-existent.”

At a meeting in Jaipur, India, in October 2015, NSA officials in the American delegation pushed back against critics, questioning their expertise, witnesses said.

A German delegate at the Jaipur talks, Christian Wenzel-Benner, subsequently sent an email seeking support from dozens of cryptographers. He wrote that all seven German experts were “very concerned” about Simon and Speck.

“How can we expect companies and citizens to use security algorithms from ISO standards if those algorithms come from a source that has compromised security-related ISO standards just a few years ago?” Wenzel-Benner asked.

Such views helped delay Simon and Speck again, delegates said. But the Americans kept pushing, and at an October 2016 meeting in Abu Dhabi, a majority of individual delegates approved the techniques, moving them up to a country-by-country vote.

There, the proposal fell one vote short of the required two-thirds majority.

Finally, at a March 2017 meeting in Hamilton, New Zealand, the Americans distributed a 22-page explanation of the ciphers’ design and a summary of attempts to break them - the sort of paper that formed part of what delegates had been seeking since 2014.

Simon and Speck, aimed respectively at hardware and software, each have robust versions and more “lightweight” variants. The Americans agreed in Hamilton to compromise and dropped the most lightweight versions.
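For context on what is actually being standardized, the sketch below implements Speck128/128 (a 128-bit block handled as two 64-bit words, a 128-bit key, 32 rounds) following the designers’ published specification. It is included only to show the simple add-rotate-xor structure at the heart of the dispute; it is not a vetted or constant-time implementation and should not be used to protect anything:

MASK = (1 << 64) - 1            # operate on 64-bit words
ROUNDS = 32                     # round count for Speck128/128
ALPHA, BETA = 8, 3              # rotation amounts for the 128-bit block size

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x, y, k):
    """One round: x = ((x >>> 8) + y) xor k ; y = (y <<< 3) xor x."""
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def expand_key(k0, l0):
    """Key schedule: the round function itself, keyed with the round index.
    k0 and l0 are the two 64-bit key words; k0 is used directly as the first round key."""
    round_keys = [k0]
    k, l = k0, l0
    for i in range(ROUNDS - 1):
        l, k = speck_round(l, k, i)
        round_keys.append(k)
    return round_keys

def encrypt_block(x, y, k0, l0):
    """Encrypt one 128-bit block, given and returned as two 64-bit words."""
    for k in expand_key(k0, l0):
        x, y = speck_round(x, y, k)
    return x, y

print([hex(w) for w in encrypt_block(0x0123456789abcdef, 0xfedcba9876543210, 0x1, 0x2)])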

Opponents saw that as a major if partial victory, and it paved the way to compromise. In another nation-by-nation poll last month, the sturdiest versions advanced to the final stage of the approval process, again by a single vote, with Japan, Germany and Israel remaining opposed. A final vote takes place in February.

Reporting by Joseph Menn; Editing by Jonathan Weber and Ross Colvin
https://www.reuters.com/article/us-c...-idUSKCN1BW0GV





D.C. Court Rules Tracking Phones without a Warrant is Unconstitutional
Kathryn Watson

Law enforcement use of one tracking tool, the cell-site simulator, to track a suspect's phone without a warrant violates the Constitution, the D.C. Court of Appeals said Thursday in a landmark ruling for privacy and Fourth Amendment rights as they pertain to policing tactics.

The ruling could have broad implications for law enforcement's use of cell-site simulators, which local police and federal agencies can use to mimic a cell phone tower so that a nearby phone connects to the device instead of to its regular network.

In a decision that reversed the decision of the Superior Court of the District of Columbia and overturned the conviction of a robbery and sexual assault suspect, the D.C. Court of Appeals determined the use of the cell-site simulator "to locate a person through his or her cellphone invades the person's actual, legitimate and reasonable expectation of privacy in his or her location information and is a search."

The Fourth Amendment guarantees, "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

D.C. Metropolitan Police's use of such cell-site simulator technology to nab suspect Prince Jones in 2013 "violated the Fourth Amendment," the court ruled Thursday, deciding against the U.S. government.

"We thus conclude that under ordinary circumstances, the use of a cell-site simulator to locate a person through his or her cellphone invades the person's actual, legitimate and reasonable expectation of privacy in his or her location information and is a search," the court ruling said. "The government's argument to the contrary is unpersuasive."

A December 2016 report from the House Oversight and Government Reform Committee found U.S. taxpayers spent $95 million on 434 cell-site simulator devices between 2010 and 2014, with the price tag for a single device hovering around $500,000.

"While law enforcement agencies should be able to utilize technology as a tool to help officers be safe and accomplish their missions, absent proper oversight and safeguards, the domestic use of cell-site simulators may well infringe upon the constitutional rights of citizens to be free from unreasonable searches and seizures, as well as the right to free association," the report said.

Under former Attorney General Eric Holder — under some pressure from Congress — the Department of Justice in 2015 issued a policy that federal authorities could only use cell-site simulators with a warrant. But that policy was never inked into law, and policies can change. Attorney General Jeff Sessions' tough-on-crime stance has worried some privacy advocates as to how he might use tools like cell-site simulators.
https://www.cbsnews.com/news/d-c-cou...hone-tracking/





Senators Hear Emotional Testimony On Controversial Sex-Trafficking Bill
Harper Neidig

The Senate Commerce Committee on Tuesday took up a controversial online sex-trafficking bill, hearing testimony from victims' families who urged lawmakers to act.

The hearing room was silent as Yvonne Ambrose tearfully told the panel about how her daughter, Desiree Robinson, was trafficked online and later raped and murdered.

“If there were stricter rules in place for posting on these websites then my child would still be with me today,” Ambrose said.

At issue is the Stop Enabling Sex Traffickers Act (SESTA), championed by Sens. Rob Portman (R-Ohio) and Richard Blumenthal (D-Conn.), who have clashed with Silicon Valley over the bill.

The legislation would alter Section 230 of the Communications Decency Act, which protects web publishers from being sued for content posted by third parties on their sites. The bill would strip those protections away from websites that promote sex trafficking.

Internet companies worry the bill could leave them unfairly liable for content posted by their users. But they are fighting an uphill battle to win over lawmakers.

Law enforcement groups and victims' rights advocates are forcefully painting the bill as a necessary step to crack down on sex trafficking.

Portman, who was added to the witness list just hours before the hearing began, testified that the bill is narrowly crafted to only target websites that are knowingly enabling sex trafficking. He insisted legitimate sites like Google and Facebook would not be affected.

“They have to be proven to have knowingly facilitated, supported or assisted in online sex trafficking to be liable in the first place,” Portman told the committee Tuesday. “Because the standard is so high, our bill protects good tech actors and targets rogue online actors like Backpage."

Backpage is a website for classified ads, similar to Craigslist, that has for years been accused of facilitating prostitution and underage sex trafficking.

Web companies insist they are going to great lengths to fight sex trafficking and that the bill would be counterproductive for those efforts.

“SESTA is a well-intentioned response to a terrible situation,” said Abigail Slater, general counsel for the Internet Association, a trade group representing most major Silicon Valley companies.

But she added: “We are concerned that SESTA opens up liability for frivolous lawsuits that do little for victims of sex trafficking."

The Internet Association has been leading opposition against the bill.

The industry group was backed up at the hearing by Sen. Ron Wyden (D-Ore.), one of the original architects of the Communications Decency Act, who told the committee that amending the law is not the way to fight sex trafficking.

“Absolutely nothing in the 230 statute protects against violating federal criminal law,” Wyden said.

None of the committee members has come out against the bill, but a handful have indicated they are open to revising it to address the tech industry's concerns.
http://thehill.com/policy/technology...silicon-valley





WhatsApp Reportedly Refused to Build a Backdoor for the UK Government

It’s not the first time the UK Government has tried to access messages
Thuy Ong

Messaging service WhatsApp rejected a UK Government request to create a way to access encrypted messages earlier this year, reports Sky News, citing an anonymous security source. The British Government reportedly asked WhatsApp in a meeting this summer to produce a technical solution that would allow access, known as a backdoor. Sky News reports that 80 percent of investigations into terrorism and serious crime are affected by encryption.

"It is crucially important that we can access their communications — and when we can't, it can provide a black hole for investigators," the source said. Extremists are known to use encryption apps like WhatsApp and Telegram to communicate, and an inability to access those messages has been a constant source of frustration for law enforcement.

Sky News reports that UK intelligence officials believe that reaching a compromise with these tech companies is possible and remain hopeful that encrypted messages can be accessed with a warrant. But major tech companies have strongly opposed building backdoors because doing so would undermine the security of their services, so a compromise doesn’t look likely.

In a statement on its website, WhatsApp says that “we carefully review, validate, and respond to law enforcement requests based on applicable law and policy, and we prioritize responses to emergency requests.” Apps like WhatsApp use end-to-end encryption, which encrypts each message on the sender’s device so that only the intended recipient’s device can decrypt it. As a result, WhatsApp can only hand over metadata such as the account name and email address; it cannot see the actual messages being sent.
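As a conceptual illustration of why that is, the sketch below uses the NaCl “box” construction via the PyNaCl library. This is emphatically not WhatsApp’s actual protocol (which is built on the Signal protocol, with per-message key ratcheting); it only demonstrates that a relaying server never holds a key capable of decrypting the message:

from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; only public keys ever leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 6")    # this is all a relaying server sees,
                                                  # plus routing metadata (who, whom, when)

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))          # b'meet at 6'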

It’s not the first time the UK Government has tried to access messages sent on the platform. Earlier this year UK home secretary Amber Rudd said it was "completely unacceptable" that intelligence services could not read WhatsApp messages sent and received by Khalid Masood, the perpetrator of a terrorist attack at Westminster in London in March.

Past incidents like the San Bernardino attack saw an exhausting court case against Apple over access to a suspect’s phone, one that ended only when the FBI found a workaround to access the data.

Creating a backdoor would be problematic, as some in the tech community have already pointed out. Apple’s Tim Cook has previously said that weakening encryption would hurt the public, while terrorists would just find new ways to communicate.
https://www.theverge.com/2017/9/20/1...ypted-messages





Before Wisconsin, Foxconn Vowed Big Spending in Brazil. Few Jobs Have Come.
David Barboza

Before the Taiwanese manufacturing giant Foxconn pledged to spend $10 billion and create 13,000 jobs in Wisconsin, the company made a similar promise in Brazil.

At a news conference in Brazil, Foxconn officials unveiled plans to invest billions of dollars and build one of the world’s biggest manufacturing hubs in the state of São Paulo. The government had high expectations that the project would yield 100,000 jobs.

Six years later, Brazil is still waiting for most of those jobs to materialize.

“The area where Foxconn said it would build a plant is totally abandoned,” said Guilherme Gazzola, the mayor of Itu, one of the cities that hoped to benefit from the project. “They haven’t even expressed an interest in meeting us.”

Foxconn’s experience in Brazil and other parts of the world illustrates how difficult it has been for it to replicate its enormously successful Chinese manufacturing model elsewhere.

In China, Foxconn has built vast factories backed by large government subsidies. Its operations — assembling iPhones for Apple, Kindles for Amazon and PlayStations for Sony — employ legions of young assembly-line workers who often toil 60 hours a week for about $2.50 an hour. Labor protests in China are rare, or quashed swiftly.

But the model does not translate easily to other countries, where Foxconn must navigate different social, political and labor conditions.

In Brazil, Foxconn’s plans unraveled quickly. The administration that had wooed the company was soon swept from power amid corruption allegations and an impeachment vote. Some of the tax breaks that had been promised were reduced or abandoned, as economic growth and consumer spending slumped.

Today, Foxconn employs only about 2,800 workers in Brazil.

Foxconn does the “big song and dance, bringing out the Chinese dragon dancers, ribbon cuttings, toasts and signature of the usual boilerplate agreements,” said Alberto Moel, an investor and adviser to early-stage tech companies who until recently was a technology analyst at the research firm Sanford C. Bernstein. “Then, when it gets down to brass tacks, something way smaller materializes.”

Foxconn said in a statement that it was committed to investing billions of dollars in building facilities outside China. But the company also said it had been forced to adapt to changing conditions in markets like Brazil, where the economy had stagnated.

“This and the changing needs of our customers that our proposed investments were designed to serve have resulted in scaled down operations in the country at this time,” the company said in its statement.

With regard to the Wisconsin project, Foxconn has said it plans to build one of the world’s largest manufacturing campuses in the southeastern part of the state. The company expects the buildings that will make up the campus to total 20 million square feet — about three times the size of the Pentagon — and to help transform the region into a major production center for flat-panel display screens.

Speaker Paul D. Ryan, Republican of Wisconsin, called the Foxconn deal a “game changer” that could help spur a manufacturing revival in the Midwest. At the White House in July, President Trump hailed the agreement as a great one for American manufacturing, American workers and “everybody who believes in the concept, in the label, Made in the U.S.A.” Gov. Scott Walker of Wisconsin officially approved the deal on Monday.

Foxconn has good reason to diversify its manufacturing operations. About 95 percent of the company’s 1.1 million employees work in China. Building a large work force elsewhere could reduce the company’s reliance on a single locale, lowering its risk if countries imposed tariffs or other trade barriers on Chinese exports.

“The closer they get to big markets like the U.S. or Brazil, the less they have to worry about import taxes or other barriers,” said Gary Gereffi, director of the Center on Globalization, Governance, & Competitiveness at Duke University. “Getting outside of China to supply these markets is like jumping over any potential tariff wall.”

But exporting Foxconn’s Chinese strategy is virtually impossible.

The global supply chain for electronics remains firmly rooted in Asia, where advantages like low-cost labor and an abundance of skilled engineers have been crucial to the region’s development as a manufacturing base.

What makes Foxconn’s Chinese operations really hum are the extraordinary level of government subsidies and support, and the sheer scale of those operations. Local governments often finance and build the company’s factories, manage its dormitories and recruit tens of thousands of workers. Some government officials have gone door to door in small counties to recruit workers.

The government aid can reach into the billions of dollars.

Foxconn began to shift large-scale production operations beyond China in about 2009, when it opened plants elsewhere in Asia, including Vietnam and India. The company now has factories in the Czech Republic, Hungary and Slovakia, and a large plant in Mexico that employs 18,000 workers.

When several countries began to require that some components be made locally as a way of encouraging production at home, Foxconn stepped up its efforts to build outside China. And company executives essentially followed the same playbook they had used inside China.

Foxconn’s chairman, Terry Gou, met with high-ranking leaders, including Brazil’s president at the time, Dilma Rousseff, and Prime Minister Narendra Modi of India. Mr. Gou made pledges; won tax breaks and government concessions; and announced plans to spend billions of dollars to create tens of thousands of jobs in multiple countries. Brazil called one of the planned Foxconn sites the “City of the Future.”

Then reality set in.

Labor strikes in India and Vietnam prompted Foxconn’s operations in those countries to be shut down temporarily. Political and economic turmoil in Brazil led the authorities there to scale back some of the tax breaks they had offered the company. A plan to invest $1 billion in the construction of a plant in Jakarta, Indonesia, collapsed, partly because Foxconn could not develop the supply chain it had hoped to, according to analysts and government officials.

Foxconn’s plans also fizzled in Pennsylvania. In 2013, the company, which has a small office in Harrisburg, said it intended to build a $30 million factory in the state that could employ 500 workers. The plant has yet to be built.

Pennsylvania officials declined to comment on why the factory had not been built, but said that they had not given up hope. (Foxconn also did not comment.)

“We do not believe Pennsylvania is out of the running for any particular project,” David Smith, a spokesman for the Pennsylvania Department of Community and Economic Development in Harrisburg, said about Foxconn’s commitment in the state.

For Foxconn, the move to Wisconsin offers political benefits.

On the campaign trail, Mr. Trump skewered China over what he deemed its unfair trade practices. He vowed to force Apple to make its products in the United States and said his administration might impose a border tax on imports, raising the prospect of a trade war.

After the election, Foxconn joined a parade of global companies bearing promises.

Jack Ma, the executive chairman of the Chinese internet giant Alibaba, arrived at Trump Tower in New York and pledged to create one million jobs in America. Masayoshi Son, the founder of SoftBank of Japan, said his company would invest $50 billion in the United States. And at around the same time, Foxconn said it was planning to build production facilities in the United States.

The Trump administration helped start some of the talks between Foxconn and officials in Wisconsin, including teams led by Mr. Ryan and Mr. Walker. Negotiations began in June and an agreement was reached a month later, with Wisconsin pledging $3 billion in tax breaks and other subsidies over a 15-year period.

Democrats in the state questioned whether the price tag was justified and whether the jobs would materialize. A state analysis, by the nonpartisan Legislative Fiscal Bureau, found that taxpayers would not recoup the state’s investment until at least 2042.

Wisconsin lawmakers pushed it through nonetheless, and when Mr. Walker approved the deal on Monday, he called it “a truly transformational step for our state.”

Vinod Sreeharsha contributed reporting from Brazil, and Joe Cochrane from Indonesia.
https://www.nytimes.com/2017/09/20/b...wisconsin.html

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

September 16th, September 9th, September 2nd, August 26th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing