P2P-Zone  

Old 16-03-16, 07:18 AM   #1
JackSpratts
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - March 19th, '16

Since 2002

"Today’s AG Opinion further strengthens the consensus that copyright enforcement measures must be balanced with fundamental rights." – EuroISPA Intermediary Liability committee chair Malcolm Hutty


"No one should have a key that turns a billion locks." – Tim Cook






































March 19th, 2016




Spotify Reaches Settlement With Publishers in Licensing Dispute
Ben Sisario

Spotify will pay more than $20 million to music publishers to settle a long-running and complex dispute over licensing, according to an agreement announced on Thursday between the streaming service and the National Music Publishers’ Association, a trade group.

Exact terms of the deal were not disclosed. But according to several people involved with the settlement on both sides, who spoke on the condition of anonymity to discuss confidential financial terms, Spotify will pay publishers between $16 million and $25 million in royalties that are already owed but unpaid — the exact amount, these people said, is still undetermined — as well as a $5 million penalty. In exchange, the publishers will refrain from filing copyright infringement claims against Spotify.

The settlement concerns mechanical licensing rights, which refer to a copyright holder’s control over the ability to reproduce a musical work. The rule goes back to the days of player-piano rolls, but in the digital era mechanical rights have joined the tangle of licensing deals that streaming services need to operate legally.

Over the last year, it emerged that Spotify — which has long trumpeted itself to the music industry as a law-abiding partner — had failed to properly obtain the mechanical licenses for large numbers of songs.

The lack of mechanical licenses looms as a major liability for streaming services, and the publishers’ association estimates that as much as 25 percent of the activity on these platforms is unlicensed. Several musicians filed class-action suits seeking as much as $200 million in damages for copyright infringement.

Spotify countered that it lacked the data to sort out which publishers had legitimate claims over songs, or even how to locate all the parties, because no central and authoritative database existed covering all music rights. Late last year the company committed to developing an administrative system to solve the problem, but by then its settlement talks with the publishers were already underway.

The gaps between what Spotify, Apple Music and others offer have been getting bigger and more complicated as artists have wielded more power in withholding their music from one outlet or another.

“I am thrilled that through this agreement, both independent and major publishers and songwriters will be able to get what is owed to them,” David M. Israelite, the president of the publishers’ association, said in a statement.

Jonathan Prince, a spokesman for Spotify, added, “As we have said many times, we have always been committed to paying songwriters and publishers every penny.”

According to the terms of the deal, participating publishers will use an online portal to register their claims over songs, and then receive their share of royalties owed as well as their portion of the bonus fund. Any unclaimed money would be divided according to the publishers’ market share.
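
As a rough illustration of how a market-share split of unclaimed money works, here is a tiny sketch in Python; the publisher names, shares and pool size are invented and are not figures from the settlement:

```python
# Hypothetical pro-rata split of an unclaimed pool by publisher market share.
# All names and numbers are invented; the settlement's real figures were not disclosed.
unclaimed_pool = 1_000_000.00  # dollars left unclaimed after the claim period

market_share = {          # each publisher's fraction of the overall market
    "Publisher A": 0.40,
    "Publisher B": 0.35,
    "Publisher C": 0.25,
}

for publisher, share in market_share.items():
    print(f"{publisher}: ${unclaimed_pool * share:,.2f}")
```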

The deal is likely to serve as a template for settlements with other services. In recent months, infringement suits have been filed against Rhapsody, Google, Tidal and others.

While the deal takes some pressure off Spotify, it has already been criticized as details of the settlement talks — ostensibly private and confidential — were leaked widely in recent weeks.

“The N.M.P.A. settlement does not address the problem and it does not fix it,” said Jeff Price, the chief executive of Audiam, a company that specializes in tracking down unpaid royalties online and has tussled with Spotify in the past over publishing data.

The settlement will most likely reduce the size of the class-action suits filed against Spotify. Representatives of several publishers said that they expected many publishers, large and small, to participate in the settlement. Mr. Price said that Audiam would pursue its own settlement talks on behalf of its clients.

The low penalty from the National Music Publishers’ Association settlement, Mr. Price said, “rewards bad behavior.”
http://www.nytimes.com/2016/03/18/bu...g-dispute.html





EU Court: Public Wi-Fi Owners Cannot Be Liable For Piracy On Unsecured Hotspots
Steve McCaskill

Advocate General of the European Court of Justice says it is unreasonable to expect all public Wi-Fi to be secured

Operators of free public Wi-Fi networks, such as those found in shops, hotels and bars, are not liable for any copyright infringement committed by users of these networks, according to a preliminary ruling by an Advocate General of the European Court of Justice.

The ruling follows a case of alleged copyright infringement between Tobias McFadden, owner of a lighting and sound equipment retailer in Germany, and Sony Music Entertainment Germany.

Sony alleged that a network operated by McFadden was used to illegally download music to which it owned the rights, arguing that the hotspot should have been secured.

However, Advocate General Maciej Szpunar said this is not reasonable.

Free Wi-Fi protection

“Although an injunction may be issued against that operator in order to bring the infringement to an end, it is not possible to require termination or password protection of the Internet connection or the examination of all communications transmitted through it,” said the ruling.

“In today’s Opinion, Advocate General Maciej Szpunar takes the view that that limitation of liability also applies to a person such as Mr Mc Fadden who, as an adjunct to his principal economic activity, operates a Wi-Fi network with an Internet connection that is accessible to the public free of charge.”

Szpunar’s decision is not binding and judges will now debate the final outcome. Regardless, EuroISPA, which represents the interests of Internet providers on the continent, has welcomed his decision.

The association says that if the ruling is confirmed, the ongoing expansion of public Wi-Fi can continue unhindered.

“Today’s AG Opinion further strengthens the consensus that copyright enforcement measures must be balanced with fundamental rights,” said EuroISPA Intermediary Liability committee chair Malcolm Hutty. “It says that restricting the availability of Wi-Fi access would be a disadvantage for society as a whole, that cannot be justified by benefits to copyright holders. I agree: the economic future of Europe depends on the widespread availability of Internet access, wherever you go, whenever you need it.”

There have been a number of moves in the past to make small business owners password-protect their networks, and at one point it appeared as though Brits could be fined for not securing their home routers. However, such calls have diminished in recent years.
http://www.techweekeurope.co.uk/netw...-piracy-188084





The Cord Cutting The Pay TV Sector Keeps Saying Isn't Happening -- Keeps Happening
Karl Bode

Several cable operators managed to eke out some modest subscriber gains in the fourth quarter of last year, prompting some renewed claims by the industry that cord cutting was "on the ropes" or was otherwise an unfair hallucination of the media. After all, Comcast saw a net gain of 89,000 pay TV users during the fourth quarter. Time Warner Cable similarly saw its best year since 2006 with a net gain of 54,000 TV subscribers. Charter also saw a net gain in the fourth quarter of 29,000 video subscribers. For some of these companies, this was the best performance they've seen since 2006.

As such, cord cutting is clearly yesterday's news, right?

Not so much. Full analysis of the fourth-quarter and full-year numbers by analysts like Leichtman Research indicates that while cable operators might have had a solid fourth quarter, most of them still saw a net loss on the year. The pay TV sector as a whole lost 385,000 subscribers in 2015, according to Leichtman.

And Leichtman, a firm that has traditionally downplayed cord cutting, appears to be conservative in its analysis. Full-year analysis by SNL Kagan indicates the pay TV sector as a whole lost 1.1 million subscribers last year, four times the drop seen the year before:

"SNL Kagan estimates the combined cable, DBS and telecommunications (telco) sectors lost more than 1 million video customers in 2015. The 12-month decline was more than 4x the 2014 decline, and marked the third consecutive overall annual drop for the industry. That said, the waning months of 2015 carried signs of stabilization after steep losses for the better part of the year. The industry dipped by only 15,000 total customers in the period ended Dec. 31, 2015, essentially matching the losses of fourth quarter 2014."

So why did cable have a better-than-usual fourth quarter? Charter, Time Warner Cable and Comcast have all been deploying faster speeds and new cable set top boxes, which appear to be luring back some customers that had previously fled to satellite and telco TV alternatives. So what we're seeing is a lateral move of some customers between different types of legacy TV (deck chairs, Titanic, etc.), while cord cutting continues unabated in the background. Cable companies have also ramped up promotions in which they offer pay TV and broadband for significantly less than a customer can buy broadband alone -- meaning many of these tallied customers may not even want (or use) television service.

And it's worth noting these numbers may be even worse than they appear. For one thing, companies like Dish and Comcast have started including streaming video customer numbers in their legacy TV numbers to try and soothe investor worries that the legacy cable cash cow has caught a nasty case of pneumonia. You'll also note that as the housing recovery accelerates, broadband subscribers aren't growing in parallel, meaning there are millions of new homeowners and renters who aren't signing up for traditional television service.

So, no, despite some analysis you'll read, cord cutting isn't "on the ropes," "overhyped," or the rogue opinion of a few mean old bloggers and journalists. It's a continued, very real consumer response to an industry that simply refuses to seriously compete on price.
https://www.techdirt.com/articles/20...appening.shtml





Comcast Offers Gigabit Cable for $70 a Month with Contract, $140 Without

Uploads are only 35Mbps, and there's a data cap unless you sign the contract.
Jon Brodkin

Comcast has begun selling its new gigabit cable service in parts of Atlanta, and the company is heavily pushing customers toward three-year contracts as it tries to fend off a challenge from Google Fiber.

Customers will be able to buy the Internet service for $70 a month and not face any data caps if they sign a three-year deal that has an early termination fee. Without a contract, customers would have to pay $139.95 a month and face a 300GB-per-month data cap.

Customers on the no-contract option can upgrade to unlimited data for an extra $35 a month. Thus, Comcast's gigabit Internet service with unlimited data costs $70 per month with a contract and about $175 without. (A DSLReports article described the data cap details today, and Comcast confirmed them to Ars.)
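
To make the price comparison concrete, here is a small back-of-the-envelope sketch using only the figures quoted above (taxes, fees and promotional pricing are ignored):

```python
# Back-of-the-envelope comparison of the Comcast gigabit prices quoted above.
# Published figures only; taxes, fees and promotions are ignored.
contract_price = 70.00        # per month, 3-year contract, no data cap
no_contract_price = 139.95    # per month, no contract, 300GB/month cap
unlimited_addon = 35.00       # per month, lifts the cap on the no-contract plan

no_contract_unlimited = no_contract_price + unlimited_addon
print(f"No-contract unlimited: ${no_contract_unlimited:.2f}/month")   # about $175

months = 36
print(f"3-year total with contract:    ${contract_price * months:,.2f}")
print(f"3-year total without contract: ${no_contract_unlimited * months:,.2f}")
```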

While there are no activation or installation fees for the gigabit cable service, the early termination fee for contract customers is $350 "but would drop monthly on a sliding scale for the duration of the three-year contract," Comcast said.

Comcast is using DOCSIS 3.1 technology to deliver gigabit download speeds over cable, but the upload speed is only 35Mbps, Comcast told Ars. By contrast, Comcast's fiber-to-the-home service offers 2Gbps in both directions but is also a lot more expensive.

Comcast's DOCSIS 3.1 deployment comes shortly after Google Fiber went live in apartments and condos in Atlanta. Comcast has also been distributing flyers urging customers not to "fall for the hype" of Google Fiber.

Google Fiber charges $70 per month for symmetrical gigabit Internet. A $100 installation fee is waived if customers sign a one-year commitment. Google is also offering Atlanta customers 100Mbps service for $50 a month.

Comcast has deployed residential fiber in a bunch of cities, but Atlanta is the first to get Comcast's gigabit cable. The company's announcement today described the Atlanta deployment as a trial in "a number of neighborhoods," with customers being encouraged to offer feedback about the service. With Comcast planning to deploy DOCSIS 3.1 throughout its US territory, the feedback "will be used to help ensure future market rollouts deliver the best possible customer experience," the company said.

Pricing could also be different when gigabit cable launches in other cities. "Once this advanced consumer trial is complete, Comcast plans to roll Gigabit service out at additional price points in other markets to gauge consumer interest in Gigabit speeds," Comcast said.
http://arstechnica.com/business/2016...year-contract/





AT&T, Comcast Kill Local Gigabit Expansion Plans in Tennessee
Karl Bode

For some time now, municipal broadband operator EPB Broadband (see our user reviews) has been saying that a state law written by AT&T and Comcast lobbyists has prevented the organization from expanding its gigabit broadband offerings (and ten gigabit broadband offerings) throughout Tennessee. These state laws currently exist in more than twenty states, and prohibit towns from deploying their own broadband -- or often even striking public/private partnerships -- even in cases of obvious market failure.

A proposal that would have lifted this statewide restriction in Tennessee was recently shot down thanks to AT&T and Comcast lobbying.

Even a new compromise proposal (which would have simply let EPB expand slightly in the same county where it is headquartered, as well as one adjoining county) was shot down, after 27 broadband industry lobbyists -- most of them belonging to AT&T and Comcast -- fought in unison to kill the proposal.

The proposal was shot down by a 5-3 vote, with Rep. Patsy Hazlewood, a former AT&T executive, being one of the votes against.

"It's a testament to the power of lobbying against this bill and not listening to our electorate," said Rep. Kevin Brooks, who spearheaded the original, full proposal. "We have thousands of petitions that were signed [and placed] in everybody s office. And the voice of the people today was not heard. And that's unfortunate."

Last year the FCC voted to dismantle broadband protectionist bills in both Tennessee and North Carolina, though those efforts remain bogged down in court. ISP-loyal lawmakers in the states have argued that the FCC's attempt to shoot down these laws violates their states' rights, though letting Comcast and AT&T write awful state telecom law doesn't appear to generate the same disdain.
https://www.dslreports.com/shownews/...nnessee-136503





TP-Link Blocks Open Source Router Firmware to Comply with New FCC Rule

Rules for limiting interference could prevent use of DD-WRT and OpenWRT.
Jon Brodkin

Networking hardware vendor TP-Link says it will prevent the loading of open source firmware on routers it sells in the United States in order to comply with new Federal Communications Commission requirements.

The FCC wants to limit interference with other devices by preventing user modifications that cause radios to operate outside their licensed RF (radio frequency) parameters. The FCC says it doesn't intend to ban the use of third-party firmware such as DD-WRT and OpenWRT; in theory, router makers can still allow loading of open source firmware as long as they also deploy controls that prevent devices from operating outside their allowed frequencies, types of modulation, power levels, and so on.

But open source users feared that hardware makers would lock third-party firmware out entirely, since that would be the easiest way to comply with the FCC requirements. The decision by TP-Link—described by the company in this FAQ—shows that those fears were justified. (Thanks to Electronic Frontier Foundation Staff Attorney Nate Cardozo for bringing it to our attention.)

TP-Link's FAQ acknowledges that the company is "limiting the functionality of its routers."

"The FCC requires all manufacturers to prevent user[s] from having any direct ability to change RF parameters (frequency limits, output power, country codes, etc.)," TP-Link says.

TP-Link says that it distributes devices with country-specific firmware and that "devices sold in the United States will have firmware and wireless settings that ensure compliance with local laws and regulations related to transmission power."

TP-Link says the change will go into effect for routers produced on and after June 2, 2016, a date set by the FCC in guidance issued in November. "All devices partially or completely approved under the old rules cannot be marketed starting June 2, 2016 unless they meet the requirements of the new rules in all the bands of operation," the FCC guidance says. The new rules are from June 2014, but the requirements were phased in gradually.

TP-Link says the changes it is making mean that "users are not able to flash the current generation of open-source, third-party firmware." The company added one caveat, saying that it is "excited to see the creative ways members of the open-source community update the new firmware to meet their needs." But TP-Link did not say exactly what open source firmware makers must do to use their firmware on new routers, and it said the company "does not offer any guarantees or technical support for customers attempting to flash any third-party firmware to their devices."

TP-Link's FAQ appears to be the most explicit public statement a router maker has made about how it will comply with the new FCC requirements. "This is the clearest statement, but it's what we've been hearing for a while off the record. Part of the problem is that the router manufacturers won't talk on the record," Cardozo told Ars.

TP-Link's FAQ points out that "the regulation affects all manufacturers marketing routers in the US."

Eric Schultz, a free and open source software advocate who is involved with the Save Wi-Fi coalition, is not optimistic about open source developers being able to rewrite their software. "As for whether the open source community can meet the requirements, not without moving the entire radio controlling software to a separate processor like on a cell phone," Schultz told Ars today. "Doing so eliminates legal ways a user can use the radio such as most mesh networking research and use, ham radio usage such as in disaster recovery, Wi-Fi protocol experimentation, moving to different countries with different rules and, given the near-total insecurity of IOT, vital security research on the radio software."

We've asked the FCC for comment and will provide an update if we get one.

The FCC began instituting its changes after the FAA discovered "illegally modified equipment interfering with terrestrial doppler weather radar (TDWR) at airports," Public Knowledge Senior VP Harold Feld has said.

In addition to the requirement taking effect in June, the FCC has proposed new rules that would further clarify how router makers should treat user modifications. Final rules haven't been issued, but the initial proposal says that hardware makers should “implement well-defined measures to ensure that certified equipment is not capable of operating with RF-controlling software for which it has not been approved."

Cisco argues that open source software could be consistent with the FCC's goals. "There is nothing in the Commission's existing or proposed rules that would limit or eliminate the ability of a developer to use Open Source software, including software that controls radio emissions," Cisco said in an FCC filing in November.

But this would require a more locked-down approach than one in which users can modify the firmware, Cisco said. "The ability to review source code is not inherently incompatible with the notion of locking the integrity of a product against modification or tampering," Cisco wrote. "It is perfectly possible for a product to have source code that is capable of review by the public while that same code is secured inside the device against change by the end-users."
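
To make Cisco's point concrete, here is a minimal, hypothetical sketch (in Python, using the widely available cryptography package) of the signed-firmware approach it describes: the code that goes into an image can be public and auditable, while the device itself only accepts images carrying the vendor's signature. The key handling and the accept_image helper are invented for illustration and are not any vendor's actual mechanism.

```python
# Minimal sketch of signed-firmware verification: source code can be public,
# but the device only flashes images signed by the vendor's private key.
# Illustrative only -- not any vendor's actual implementation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side: sign the release build of the (possibly open-source) firmware.
signing_key = Ed25519PrivateKey.generate()
firmware_image = b"...compiled firmware image..."
signature = signing_key.sign(firmware_image)

# Device side: only the vendor's public key is baked into the bootloader.
vendor_public_key = signing_key.public_key()

def accept_image(image: bytes, sig: bytes) -> bool:
    """Return True only if the image carries a valid vendor signature."""
    try:
        vendor_public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

print(accept_image(firmware_image, signature))           # True: vendor build
print(accept_image(firmware_image + b"mod", signature))   # False: modified image rejected
```

Under a scheme like this, anyone can read or even rebuild the firmware, but only vendor-signed builds will flash, which is precisely the lock-out that open source firmware users object to.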

"TP-Link has not blocked the firmwares in any useful way," Gottschall told Ars. "Just the firmware header has been a little bit changed and a region code has been added. This has been introduced in September 2015. DD-WRT for instance does still provide compatible images... in fact it's no lock."

But as we noted earlier, TP-Link's FAQ says the new regulation does not apply to routers produced before June 2016, so the company may be planning further restrictions.
http://arstechnica.com/information-t...-new-fcc-rule/





Why Apple’s Co-Founder Loves the Amazon Echo
Michael Newberg

Apple co-founder Steve Wozniak is fired up about a hot new tech gadget — and it isn't an Apple product.

"I'm excited right now about the Amazon Echo, oddly enough," Wozniak told CNBC this week. "I think it's the next big platform for the near future if I'm right."

Wozniak, who is hosting Silicon Valley's first-ever Comic Con starting March 18 in San Jose, California, sat down with CNBC to talk about the event and the ways technology and culture influence one another. In a wide-ranging interview on everything from virtual reality's mass-market potential to Apple's battle with the FBI, Wozniak revealed that Amazon's voice-activated personal assistant has become a key fixture in his home.

"It's just become such a wonderful part of our life, not having to lift anything up and speak to things, and just speak to it anywhere across a room," he said. "That is such a luxury and freedom."

Amazon describes the Echo, which launched to the public last June, as "hands-free and always on." The device, which sells for $179.99, can be used to get "information, music, news and weather from across the room," according to Amazon.

Wozniak's passion for the Echo, and for Apple's virtual assistant Siri, centers on his frustration over the constantly changing nature of apps. "I fell in love with speaking because I hate to memorize … because you know how apps change, they change it this year and things you memorize don't work the same."

Rather than navigating a confusing app landscape, Wozniak said that when he has a question or needs to complete a task, he finds it easier to just speak it out loud to his Echo.

"You can just say, 'Get me an Uber,' and it does ... you can say, 'Get me some paper towels,' and if it knows your last order on Amazon, it orders it right there," he said. Wozniak said he ordered his second Echo by telling his first Echo to "order me an Echo."

Wozniak isn't the only person excited about the device. In December, Amazon said the Echo was the best-selling item across all $100-plus products on Black Friday.

When asked whether he's an investor in Amazon, Wozniak said, "I don't do stocks," adding, "I just like technology when it's useful for people."
http://www.cnbc.com/2016/03/11/why-a...azon-echo.html





In the Apple Case, a Debate Over Data Hits Home
Michael D. Shear, David E. Sanger and Katie Benner

Three years ago, reeling from Edward J. Snowden’s disclosure of the government’s vast surveillance programs and uncertain how to respond, President Obama said he welcomed a vigorous public debate about the wrenching trade-offs between safeguarding personal privacy and tracking down potential terrorists.

“It’s healthy for our democracy,” he told reporters at the time. “I think it’s a sign of maturity.”

But the national debate touched off this winter by the confrontation between the Justice Department and Apple over smartphone security is not exactly the one Mr. Obama had in mind.

Mr. Snowden’s revelations produced modest changes and a heightened suspicion of the government’s activities in cyberspace. Because the issue now centers on a device most Americans carry in their pockets, it is concrete and personal in a way that surveillance by the National Security Agency never was.

The trade-offs seem particularly stark because they have been framed around a simple question: Should Apple help the F.B.I. hack into an iPhone used by a gunman in the massacre last December in San Bernardino, Calif.?

Law enforcement officials have been adamant they must be able to monitor the communications of criminals. They received a vote of confidence from Mr. Obama on Friday, when he said the “absolutist” position taken by companies like Apple is wrong. But the pushback has been enormous.

In the month since a judge ordered Apple to comply with the F.B.I., the debate has jumped from the tech blogs to the front pages of daily newspapers and nightly newscasts. Supporters of the company’s position have held rallies nationwide. Late-night comedians have lampooned government snoopers. Timothy D. Cook, the usually publicity-shy Apple chief executive, pleaded his case on “60 Minutes” last December. On Twitter, “#encryption” fills the screen with impassioned debate on both sides.

“Discussing the case with my friends has become a touchy subject,” said Matthew Montoya, 19, a computer science major at the University of Texas, El Paso. “We’re a political bunch with views from all across the spectrum.”

Like many of her friends, Emi Kane, a community organizer in Oakland, Calif., recently found herself arguing via Facebook with a family friend about the case. Ms. Kane thought Apple was right to refuse to hack the phone; her friend, a waitress in Delaware, said she was disgusted by Apple’s lack of patriotism.

After exchanging several terse messages, they agreed to disagree. “It was a hard conversation,” Ms. Kane said.

The novelist Russell Banks, who signed a letter to Attorney General Loretta Lynch on behalf of Apple, said he had spoken with more than a dozen people about the case just in the last week.

“It’s not just people in the tech industry talking about this,” Mr. Banks, the author of “Affliction” and “The Sweet Hereafter,” said. “It’s citizens like myself.”

That may be because the Apple case involves a device whose least interesting feature is the phone itself. It is a minicomputer stuffed with every detail of a person’s life: photos of children, credit card purchases, texts with spouses (and nonspouses), and records of physical movements.

Mr. Obama warned Friday against “fetishizing our phones above every other value.” After avoiding taking a position for months, he finally came down on the side of law enforcement, saying that using technology to prevent legal searches of smartphones was the equivalent of preventing the police from searching a house for evidence of child pornography.

“That can’t be the right answer,” he said at the South by Southwest festival in Texas, even as he professed deep appreciation for civil liberties and predicted both sides would find a way to cooperate. “I’m confident this is something that we can solve.”

But polls suggest the public is nowhere near as certain as Mr. Obama. In surveys, Americans are deeply divided about the legal struggle between the government and one of the nation’s most iconic companies. The polls show that Americans remain anxious about both the threat of terrorist attacks and the possible theft of personal digital information.

A Wall Street Journal/NBC News survey released last week found that 42 percent of Americans believed Apple should cooperate with law enforcement officials to help them gain access to the locked phone, while 47 percent said Apple should not cooperate. Asked to weigh the need to monitor terrorists against the threat of violating privacy rights, the country was almost equally split, the survey found.

That finding may have seemed unlikely in the wake of terrorist attacks last year in Paris and San Bernardino. In December, eight in 10 people said in a New York Times/CBS News survey that it was somewhat or very likely that there would be a terrorist attack in the United States in the coming months. A CNN poll the same month found that 45 percent of Americans were somewhat or very worried that they or someone in their family would become a victim of terrorism.

But despite the fears about terrorism, the public’s concern about digital privacy is nearly universal. A Pew Research poll in 2014 found more than 90 percent of those surveyed felt that consumers had lost control over how their personal information was collected and used by companies.

The Apple case already seems to have garnered more public attention than the Snowden revelations about “metadata collection” and programs with code names like Prism and XKeyscore. The comedian John Oliver once mocked average Americans for failing to know whether Mr. Snowden was the WikiLeaks guy or the former N.S.A. contractor (he was the latter).

Now, people are beginning to understand that their smartphones are just the beginning. Smart televisions, Google cars, Nest thermostats and web-enabled Barbie dolls are next. The resolution of the legal fight between Apple and the government may help decide whether the information in those devices is really private, or whether the F.B.I. and the N.S.A. are entering a golden age of surveillance in which they have far more data available than they could have imagined 20 years ago.

“It’s an in-your-face proposition for lots more Americans than the Snowden revelation was,” said Lee Rainie, director of Internet, science and technology research at Pew Research Center.

Cindy Cohn, executive director of the Electronic Frontier Foundation, said: “Everyone gets at a really visceral level that you have a lot of really personal stuff on this device and if it gets stolen it’s really bad. They know that the same forces that work at trying to get access to sensitive stuff in the cloud are also at work attacking the phones.”

For the F.B.I. and local law enforcement agencies, the fight has become a high-stakes struggle to prevent what James B. Comey, the bureau’s director, calls “warrant-free zones” where criminals can hide evidence out of reach of the authorities.

Officials had hoped the Apple case involving a terrorist’s iPhone would rally the public behind what they see as the need to have some access to information on smartphones. But many in the administration have begun to suspect that the F.B.I. and the Justice Department may have made a major strategic error by pushing the case into the public consciousness.

Many senior officials say an open conflict between Silicon Valley and Washington is exactly what they have been trying to avoid, especially when the Pentagon and intelligence agencies are trying to woo technology companies to come back into the government’s fold, and join the fight against the Islamic State. But it appears it is too late to confine the discussion to the back rooms in Washington or Silicon Valley.

The fact that Apple is a major consumer company “takes the debate out of a very narrow environment — the universe of technologists and policy wonks — into the realm of consumers where barriers like the specific language of Washington or the technology industry begins to fall away,” said Malkia Cyril, the executive director of the Center for Media Justice, a grass-roots activist network.

That organization and other activist groups like Black Lives Matter have seized on the issue as important for their members. In February the civil liberties group Fight for the Future organized a day of protest against the government order, resulting in rallies in cities nationwide.

“When we heard the news and made a call for nationwide rallies, one happened in San Francisco that same day,” said Tiffiniy Cheng, co-founder of Fight for the Future. “Things like that almost never happen.”

Ms. Cyril says the public angst about the iPhone case feels more urgent than did the discussion about government surveillance three years ago.

“This is one of those moments that defines what’s next,” she said. “Will technology companies protect the privacy of their users or will they do work for the U.S. government? You can’t do both.”

Michael D. Shear and David E. Sanger reported from Washington, and Katie Benner from San Francisco.
http://www.nytimes.com/2016/03/14/te...hits-home.html





No One, Tim Cook Declares, ‘Should have a Key that Turns a Billion Locks.’

Malarie Gokey

As Apple continues to wrestle with the FBI over iPhone encryption, its soft-spoken CEO has stepped up from gadget pitchman to privacy activist. In a wide-ranging interview published Thursday in Time magazine, Cook granted further insight into the company’s unyielding stance on encryption, and his own.

Far from backing off or waffling, Cook reserved his strongest statements yet for his interview, doubling down on his dedication to privacy and suggesting that the FBI’s request violates the U.S. Constitution and Americans’ civil liberties. We’ve collected the most powerful and revealing quotations from the transcript, and one thing’s for sure: The Feds have a formidable foe, and he’s not going anywhere.

Apple debated the issue internally

At the start of the interview, Cook explained how the FBI approached Apple about getting information from the San Bernardino shooter's iPhone 5C. After fielding and resisting some FBI requests for data, Apple was sued by the government, which asked Apple to create code to help it break into the shooter's iPhone, and potentially other phones from different cases as well. Apple refused and openly defied a court order to do so. Cook says that the decision wasn't made on a whim or overnight; it was something that he and other Apple leaders had discussed at length previously.

“When we saw the government going not only to this extent, but now going to a greater degree and asking us to develop … a new product that takes out all the security aspects, and makes it a simple process for them to guess many thousand different combinations of passwords or passcodes, we said you know, that case is now important,” Cook said. “Lots of people internally were involved. It wasn’t just me sitting in a room somewhere deciding that way, it was a labored decision. We thought about all the things you would think we would think about.”

The crux of Apple’s argument is that creating a backdoor for the FBI to access is not only dangerous, but a violation of civil liberties and American values. Cook referenced the Constitution and the Bill of Rights in his arguments.

“When I think of civil liberties I think of the founding principles of the country,” Cook began. “The freedoms that are in the First Amendment. But also the fundamental right to privacy.”

Cook argues that removing encryption will not make Americans more secure, nor will it protect their privacy: “I believe that if you took privacy and you said, I’m willing to give up all of my privacy to be secure. So you weighted it as a zero. My own view is that encryption is a much better, much better world. … And to me it is so clear that even if you discount the importance of privacy, that encryption is the way to go.”

You can’t have privacy without security

Another key point Cook raised during the interview was that this case isn't about having privacy or security — it's about having both.

“I know everybody wants to paint it as privacy versus security, as if you can give up one and get more of the other,” Cook said. “I think it’s very simplistic and incorrect. I don’t see it that way at all.”

Cook believes that when you view it as an either-or issue, you’re saying that we have to trade privacy for security. And if we do that, then you may have stopped one bad guy, but “you’ve exposed 99 percent of good people.”

“I think it’s privacy and security or privacy and safety versus security,” he added. “It’s not that people’s wellbeing, their physical wellbeing is not a part of privacy. It is. It very much is.”

The dangers of backdoors

Cook reiterated his fears about creating a backdoor for the government, painting a frightening scenario in which an anti-encryption law is passed.

“Let’s say you just pulled encryption. Let’s ban it. Let’s you and I ban it tomorrow. And so we sit in Congress and we say, thou shalt not have encryption. What happens then?” he asked. “Well, I would argue that the bad guys will use encryption from non-American companies, because they’re pretty smart.”

“So are we really safer then? I would say no. I would say we’re less safe, because now we’ve opened up all of the infrastructure for people to go wacko at.”

Same goes for putting in a backdoor to get at your messages or digital communication, Cook claims. In the case of messaging, Cook says Apple doesn’t want to read or store your messages. He explained that Apple’s job is just to send the message — not to read it or store it somewhere so it can be read later.

“I’m the FedEx guy. I’m taking your package and I’m delivering it,” Cook said. “My job isn’t to open it up, make a copy of it, put it over in my cabinet in case somebody later wants to come say, ‘I’d like to see your messages.’ That’s not a role that I play … No one should have a key that turns a billion locks. It shouldn’t exist.”

More tech equals more data and more danger

Cook called the technology era “the golden age of surveillance,” because “there is more information about all of us, so much more than 10 years ago, or five years ago. It’s everywhere.”

He takes issue with the argument that Apple’s push for end-to-end encryption is as good as “going dark.”

“No one’s going dark,” he said. “I mean really, it’s fair to say that if you send me a message and it’s encrypted, it’s fair to say they can’t get that without going to you or to me, unless one of us has it in our cloud at this point. That’s fair to say. But we shouldn’t all be fixated just on what’s not available. We should take a step back and look at the total that’s available.”

“Because there’s a mountain of information about us. I mean there’s so much. Anyway, I’m not an intelligence person. But I just look at it and it’s a mountain of data.”

America will do the right thing

At the end of the day, Cook believes that America will do the right thing and stand up for the right to privacy and encryption.

“I think there’s too much evidence to suggest that [backdoors are] bad for national security,” he said. “It means we’re really throwing out founding principles on the side of the road. So I think there’s so many things to suggest that they wouldn’t do that. That’s not something I lose sleep over. I’m very optimistic, I have got to be, that in a debate, a public debate, all of these things will rise up and you’ll see sanity take over.”

This case has the potential to set a dangerous precedent, Cook has said, but it also has the potential to set America on the right side of history in the encryption debate.

“We’re going to fight the good fight not only for our customers but for the country,” Cook said. “We’re in this bizarre position where we’re defending the civil liberties of the country against the government. Who would have ever thought this would happen? I never expected to be in this position. The government should always be the one defending civil liberties. And there’s a role reversal here. I mean I still feel like I’m in another world a bit, that I’m in this bad dream in some wise.”

To read the full transcript, head over to Time.
http://www.digitaltrends.com/mobile/...fbi-vs-apple/?





White House Begins To Realize It May Have Made A Huge Mistake In Going After Apple Over iPhone Encryption
Mike Masnick

One of the key lines that various supporters of backdooring encryption have repeated in the last year is that they "just want to have a discussion" about the proper way to... put backdoors into encryption. Over and over again you had the likes of James Comey insisting that he wasn't demanding backdoors, but really just wanted a "national conversation" on the issue (despite the fact that we had just such a conversation in the 90s and concluded: backdoors bad, let's move on):

“My goal today isn’t to tell people what to do. My goal is to urge our fellow citizens to participate in a conversation as a country about where we are, and where we want to be, with respect to the authority of law enforcement.”

And, yet, now we're having that conversation. Very loudly. And while the conversation really has been going on for almost two years, in the last month it moved from a conversation among tech geeks and policy wonks into the mainstream, thanks to the DOJ's decision to force Apple to write some code that would undermine security features on the work iPhone of Syed Farook, one of the San Bernardino attackers. According to some reports, the DOJ and FBI purposely chose this case in the belief that it was a perfect "test" case for its side: one that appeared to involve "domestic terrorists" who murdered 14 people. There were reports claiming that Apple was fine fighting this case under seal, but that the DOJ purposely chose to make this request public.

However, now that this has resulted in just such a "national conversation" on the issue, the DOJ, FBI and others in the White House are suddenly realizing that perhaps the public isn't quite as with them as they had hoped. There are now reports that some in the White House regret the decision to move forward.

According to the NY Times:
Officials had hoped the Apple case involving a terrorist’s iPhone would rally the public behind what they see as the need to have some access to information on smartphones. But many in the administration have begun to suspect that the F.B.I. and the Justice Department may have made a major strategic error by pushing the case into the public consciousness.

Many senior officials say an open conflict between Silicon Valley and Washington is exactly what they have been trying to avoid, especially when the Pentagon and intelligence agencies are trying to woo technology companies to come back into the government’s fold, and join the fight against the Islamic State. But it appears it is too late to confine the discussion to the back rooms in Washington or Silicon Valley.


While the various public polling on the issue has led to very mixed results, it's pretty clear that the public did not universally swing to the government's position on this. In fact, it appears that the more accurately the situation is described to the public, the more likely they are to side with Apple over the FBI. Given that, John Oliver's recent video on the subject certainly isn't good news for the DOJ.

Either way, the DOJ and FBI insisted they wanted a conversation on this, and now they're getting it. Perhaps they should have been more careful what they wished for.
https://www.techdirt.com/articles/20...cryption.shtml





‘Chilling Effect’ of Mass Surveillance Is Silencing Dissent Online, Study Says
Nafeez Ahmed

Thanks largely to whistleblower Edward Snowden’s revelations in 2013, most Americans now realize that the intelligence community monitors and archives all sorts of online behaviors of both foreign nationals and US citizens.

But did you know that the very fact that you know this could have subliminally stopped you from speaking out online on issues you care about?

Now research suggests that widespread awareness of such mass surveillance could undermine democracy by making citizens fearful of voicing dissenting opinions in public.

A paper published last week in Journalism and Mass Communication Quarterly, the flagship peer-reviewed journal of the Association for Education in Journalism and Mass Communication (AEJMC), found that "the government’s online surveillance programs may threaten the disclosure of minority views and contribute to the reinforcement of majority opinion.”

The NSA’s “ability to surreptitiously monitor the online activities of US citizens may make online opinion climates especially chilly” and “can contribute to the silencing of minority views that provide the bedrock of democratic discourse," the researcher found.

The paper is based on responses to an online questionnaire from a random sample of 255 people, selected to mimic basic demographic distributions across the US population.

Participants were asked to answer questions relating to media use, political attitudes, and personality traits. Different subsets of the sample were exposed to different messaging on US government surveillance to test their responses to the same fictional Facebook post about the US decision to continue airstrikes against the Islamic State of Iraq and Syria (ISIS).

They were then asked about their willingness to express their opinions about this publicly—including how they would respond on Facebook to the post; how strongly they personally supported or opposed continued airstrikes; their perceptions of the views of other Americans; and whether they supported or opposed online surveillance.

The study used a regression model—a statistical method to estimate the relationships between different variables—to test how well a person’s decisions to express their opinion could be predicted based on the nature of their opinion, their perceptions of prevailing viewpoints, and their attitude to surveillance.

This sort of model doesn't produce simple percentages, but provides a statistical basis to explain variances in the factors being tested. In this case, the study found that "35% of the variance in an individual's willingness to self-censor" could be explained by their perceptions of whether surveillance is justified.
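
For readers unfamiliar with the method, here is a minimal sketch of an ordinary least squares regression and its R-squared, the "share of variance explained" figure. The variable names, coefficients and data below are invented for illustration and are not the study's actual model or results:

```python
# Toy illustration of an OLS regression and its R-squared.
# Data and variable names are invented; this is not the study's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 255  # the study's sample size

# Hypothetical predictors (arbitrary scales):
perceived_majority_support = rng.normal(size=n)  # "most people agree with me"
surveillance_justified = rng.normal(size=n)      # "government surveillance is necessary"

# Hypothetical outcome: willingness to voice an opinion online.
willingness = (0.5 * perceived_majority_support
               - 0.6 * surveillance_justified
               + rng.normal(size=n))

X = sm.add_constant(np.column_stack([perceived_majority_support,
                                     surveillance_justified]))
model = sm.OLS(willingness, X).fit()

# R-squared is the proportion of variance in the outcome accounted for by the
# predictors -- the kind of figure the paper reports as "35%".
print(round(model.rsquared, 2))
```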

For the majority of respondents, the study concluded, being aware of government surveillance “significantly reduced the likelihood of speaking out in hostile opinion climates.”

Although more nuanced than a blanket silencing, the study still concluded that “knowing one’s online activities are subject to government interception and believing these surveillance practices are necessary for national security play important roles in influencing conformist behavior.”

Perhaps unsurprisingly, the most significant conformist effect was from people who supported surveillance. They turned out to be more likely to conceal other dissenting opinions, which they felt strayed from the majority view.

When such individuals “perceive they are being monitored, they readily conform their behavior—expressing opinions when they are in the majority, and suppressing them when they’re not,” the paper concluded. These findings suggest that a person’s “fear of isolation from authority or government” adds new “chilling effects” to public discourse.

“What this research shows is that in the presence of surveillance, our country's most vulnerable voices are unwilling to express their beliefs online,” said Elizabeth Stoycheff, associate professor of journalism and new media at the Department of Communication, Wayne State University, and lead author of the paper. “This finding is problematic because it may enable a domineering, majority opinion to take control of online deliberative spaces, thus negating deliberation.”

But, she added, the increasing complexity of surveillance, and its use in tandem with private industry, means that more research is essential to understand how surveillance is altering the way people interact online, with content, and with one another.

The study happens to confirm recent comments by Snowden himself last Saturday, during a live video address to a gathering of whistleblowers, journalists and technologists in Berlin.

“It’s the minorities who are most at risk” from the impact of mass surveillance, Snowden said. “Without privacy there is only society, only the collective, which makes them all be and think alike. You can’t have anything yourself, you can’t have your own opinions, unless you have a space that belongs only to you.”
https://motherboard.vice.com/read/ch...ine-study-says





US Government Pushed Tech Firms to Hand Over Source Code

Obtaining a company's source code makes it radically easier to find security flaws and vulnerabilities for surveillance and intelligence-gathering operations.
Zack Whittaker

The US government has made numerous attempts to obtain source code from tech companies in an effort to find security flaws that could be used for surveillance or investigations.

The government has demanded source code in civil cases filed under seal but also by seeking clandestine rulings authorized under the secretive Foreign Intelligence Surveillance Act (FISA), a person with direct knowledge of these demands told ZDNet. We're not naming the person as they relayed information that is likely classified.

With these hearings held in secret and away from the public gaze, the person said that the tech companies hit by these demands are losing "most of the time."

When asked, a spokesperson for the Justice Dept. acknowledged that the department has demanded source code and private encryption keys before. In a recent filing against Apple, the government cited a 2013 case where it won a court order demanding that Lavabit, an encrypted email provider said to have been used by whistleblower Edward Snowden, must turn over its source code and private keys. The Justice Dept. used that same filing to imply it would, in a similar effort, demand Apple's source code and private keys in its ongoing case in an effort to compel the company's help by unlocking an iPhone used by the San Bernardino shooter.

Asked whether the Justice Dept. would demand source code in the future, the spokesperson declined to comment.

It's not uncommon for tech companies to refer to their source code as the "crown jewel" of their business. The highly sensitive code can reveal future products and services. Source code can also be used to find security vulnerabilities and weaknesses that government agencies could use to conduct surveillance or collect evidence as part of ongoing investigations.

Given to a rival or an unauthorized source, the damage can be incalculable.

We contacted more than a dozen tech companies in the Fortune 500. Unsurprisingly, none would say on the record if they had ever received such a request or demand from the government.

Cisco said in an emailed statement: "We have not and we will not hand over source code to any customers, especially governments."

IBM referred to a 2014 statement saying that the company does not provide "software source code or encryption keys to the NSA or any other government agency for the purpose of accessing client data." A spokesperson confirmed that the statement is still valid, but did not comment further on whether source code had been handed over to a government agency for any other reason.

Microsoft, Juniper Networks, and Seagate declined to comment.

Dell and EMC did not comment at the time of publication. Lenovo, Micron, Oracle, Texas Instruments, and Western Digital did not respond to requests for comment. (If this changes, we will provide updates.)

Apple's software chief Craig Federighi said in a sworn court declaration this week alongside the company's latest bid to dismiss the government's claims in the San Bernardino case that Apple has never revealed its source code to any government.

"Apple has also not provided any government with its proprietary iOS source code," wrote Federighi.

"While governmental agencies in various countries, including the United States, perform regulatory reviews of new iPhone releases, all that Apple provides in those circumstances is an unmodified iPhone device," he said.

The declaration was in part to allay fears (and the US government's claims) that Apple had modified iPhone software as part of agreeing to China's security checks, which include turning over source code to its inspectors.

But even senior tech executives may not know if their source code or proprietary technology had been turned over to the government, particularly if the order came from the Foreign Intelligence Surveillance Court (FISC).

The secretive Washington DC-based court, created in 1979 to oversee the government's surveillance warrants, has authorized more than 99 percent of all surveillance requests. The court has broad-sweeping powers to force companies to turn over customer data via clandestine surveillance programs and authorize US intelligence agencies to record an entire foreign country's phone calls, as well as conduct tailored hacking operations on high-value targets.

FISA orders are generally served to a company's general counsel, or a "custodian of records" within the legal department. (Smaller companies that can't afford their own legal departments often outsource their compliance to third-party companies.) These orders are understood to be typically for records or customer data.

These orders are so highly classified that simply acknowledging an order's existence is illegal; even a company's chief executive or members of the board may not be told. Only those who are necessary to execute the order would know, and they would be subject to the same secrecy provisions.

Given that Federighi heads Apple's software engineering division, it would be almost impossible to keep from him the existence of a FISA order demanding the company's source code.

It would not be the first time that the US government has reportedly used proprietary code and technology from American companies to further its surveillance efforts.

Top secret NSA documents leaked by whistleblower Edward Snowden, reported in the German magazine Der Spiegel in late 2013, suggested that some hardware and software makers were compelled to hand over source code to assist in government surveillance.

The NSA's catalog of implants and software backdoors suggest that some companies, including Dell, Huawei, and Juniper -- which was publicly linked to an "unauthorized" backdoor -- had their servers and firewall products targeted and attacked through various exploits. Other exploits were able to infiltrate firmware of hard drives manufactured by Western Digital, Seagate, Maxtor, and Samsung.

Last year, antivirus maker and security firm Kaspersky found evidence that the NSA had obtained source code from a number of prominent hard drive makers -- a claim the NSA denied -- to quietly install software used to eavesdrop on the majority of the world's computers.

"There is zero chance that someone could rewrite the [hard drive] operating system using public information," said one of the researchers.
http://www.zdnet.com/article/us-gove...r-source-code/





Apple Encryption Engineers, if Ordered to Unlock iPhone, Might Resist
John Markoff, Katie Benner and Brian X. Chen

If the F.B.I. wins its court fight to force Apple’s help in unlocking an iPhone, the agency may run into yet another roadblock: Apple’s engineers.

Apple employees are already discussing what they will do if ordered to help law enforcement authorities. Some say they may balk at the work, while others may even quit their high-paying jobs rather than undermine the security of the software they have already created, according to more than a half-dozen current and former Apple employees.

Among those interviewed were Apple engineers who are involved in the development of mobile products and security, as well as former security engineers and executives.

The potential resistance adds a wrinkle to a very public fight between Apple, the world’s most valuable company, and the authorities over access to an iPhone used by one of the attackers in the December mass killing in San Bernardino, Calif.

It also speaks directly to arguments Apple has made in legal documents that the government’s demand curbs free speech by asking the company to order people to do things that they consider offensive.

“Such conscription is fundamentally offensive to Apple’s core principles and would pose a severe threat to the autonomy of Apple and its engineers,” Apple’s lawyers wrote in the company’s final brief to the Federal District Court for the Central District of California.

The employees’ concerns also provide insight into a company culture that despite the trappings of Silicon Valley wealth still views the world through the decades-old, anti-establishment prism of its co-founders Steven P. Jobs and Steve Wozniak.

“It’s an independent culture and a rebellious one,” said Jean-Louis Gassée, a venture capitalist who was once an engineering manager at Apple. “If the government tries to compel testimony or action from these engineers, good luck with that.”

Timothy D. Cook, Apple’s chief executive, last month telegraphed what his employees might do in an email to customers: “The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe,” Mr. Cook wrote.

Apple declined to comment.

The fear of losing a paycheck may not have much of an impact on security engineers whose skills are in high demand. Indeed, hiring them could be a badge of honor among other tech companies that share Apple’s skepticism of the government’s intentions.

“If someone attempts to force them to work on something that’s outside their personal values, they can expect to find a position that’s a better fit somewhere else,” said Window Snyder, the chief security officer at the start-up Fastly and a former senior product manager in Apple’s security and privacy division.

Apple said in court filings last month that it would take from six to 10 engineers up to a month to meet the government’s demands. However, because Apple is so compartmentalized, the challenge of building what the company described as “GovtOS” would be substantially complicated if key employees refused to do the work.

Inside Apple, there is little collaboration among teams — for example, hardware engineers usually work in different offices from software engineers.

But when the company comes closer to releasing a product, key members from different teams come together to apply finishing touches like bug fixes, security audits and polishing the way the software looks and behaves.

A similar process would have to be created to produce the iPhone software for the Federal Bureau of Investigation. A handful of software engineers with technical expertise in writing highly secure software — the same people who have designed Apple’s security system over the last decade — would need to be among the employees the company described in its filing.

That team does not exist, and Apple is unlikely to make any moves toward creating it until the company exhausts its legal options. But Apple employees say they already have a good idea who those employees would be.

They include an engineer who developed software for the iPhone, iPad and Apple TV. That engineer previously worked at an aerospace company. Another is a senior quality-assurance engineer who is described as an expert “bug catcher” with experience testing Apple products all the way back to the iPod. A third likely employee specializes in security architecture for the operating systems powering the iPhone, Mac and Apple TV.

“In the hierarchy of civil disobedience, a computer scientist asked to place users at risk has the strongest claim that professional obligations prevent compliance,” said Marc Rotenberg, executive director of the Electronic Privacy Information Center. “This is like asking a doctor to administer a lethal drug.”

There are ways an employee could resist other than quitting, such as work absences. And it is a theoretical discussion. It could be a long time before employees confront such choices as the case moves through the legal system.

The security-minded corner of the technology industry is known to draw “healthfully paranoid” people who tend to be more doctrinaire about issues like encryption, said Arian Evans, vice president for product strategy at RiskIQ, an Internet security company. But that resolve can wither when money gets involved, he said.

An employee rebellion could throw the F.B.I.'s legal fight with Apple into uncharted territory.

“If — and this is a big if — every engineer at Apple who could write the code quit and, also a big if, Apple could demonstrate that this happened to the court’s satisfaction, then Apple could not comply and would not have to,” said Joseph DeMarco, a former federal prosecutor. “It would be like asking my lawn guy to write the code.”

Mr. DeMarco, who filed a friend of the court brief on behalf of law enforcement groups that supported the Justice Department, also noted that if the engineers refused to write the code, rather than outright quit, "then I think that the court would be much more likely to find Apple in contempt."

Riana Pfefferkorn, a cryptography fellow at the Stanford Center for Internet and Society, said that rather than a contempt finding, Apple could incur daily penalties if a judge thought it was delaying compliance.

The government has cracked down on tech companies in the past. A judge imposed a $10,000-a-day penalty on the email service Lavabit when it did not give its digital encryption keys to investigators pursuing information on Edward J. Snowden, the former intelligence contractor who leaked documents about government surveillance.

The small company’s response could be indicative of how individual Apple employees reacted to a court order. When Lavabit was held in contempt, its owner shut down the company rather than comply.
http://www.nytimes.com/2016/03/18/te...ht-resist.html





AceDeceiver: First iOS Trojan Exploiting Apple DRM Design Flaws to Infect Any iOS Device
Claud Xiao

We’ve discovered a new family of iOS malware that successfully infected non-jailbroken devices we’ve named “AceDeceiver”.

What makes AceDeceiver different from previous iOS malware is that instead of abusing enterprise certificates, as some iOS malware has over the past two years, AceDeceiver manages to install itself without any enterprise certificate at all. It does so by exploiting design flaws in Apple's DRM mechanism, and even though Apple has removed AceDeceiver from the App Store, it may still spread thanks to a novel attack vector.

AceDeceiver is the first iOS malware we’ve seen that abuses certain design flaws in Apple’s DRM protection mechanism — namely FairPlay — to install malicious apps on iOS devices regardless of whether they are jailbroken. This technique is called “FairPlay Man-In-The-Middle (MITM)” and has been used since 2013 to spread pirated iOS apps, but this is the first time we’ve seen it used to spread malware. (The FairPlay MITM attack technique was also presented at the USENIX Security Symposium in 2014; however, attacks using this technique are still occurring successfully.)

Apple allows users to purchase and download iOS apps from its App Store through the iTunes client running on their computers. They can then use the computer to install the apps onto their iOS devices. iOS devices will request an authorization code for each app installed to prove the app was actually purchased. In the FairPlay MITM attack, attackers purchase an app from the App Store, then intercept and save the authorization code. They then develop PC software that simulates iTunes client behavior and tricks iOS devices into believing the app was purchased by the victim. As a result, the user can install apps they never actually paid for, and the creator of the software can install potentially malicious apps without the user's knowledge.
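
To make the replay idea concrete, here is a minimal, purely illustrative mock in Python. This is not Apple's actual FairPlay protocol, message format, or key handling; every name below is invented. It shows only the design weakness the attack exploits: an installation authorization that is checked for authenticity but not bound to the purchasing account or the target device can be captured once and replayed to install the app on arbitrary devices.

import hmac, hashlib

STORE_KEY = b"store-signing-key"   # stand-in for the vendor's signing secret (hypothetical)

def issue_authorization(app_id):
    # Flawed design, for illustration only: the token covers just the app,
    # not the purchasing account or the target device.
    return hmac.new(STORE_KEY, app_id.encode(), hashlib.sha256).digest()

def device_accepts(app_id, token):
    # The device verifies that the token is genuine, but it cannot tell
    # which account or device the token was originally issued for.
    expected = hmac.new(STORE_KEY, app_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

# The attacker buys the app once and captures the authorization in transit.
captured = issue_authorization("wallpaper-app")

# A rogue "helper" client later replays the captured token to other devices.
for device in ("victim-phone-1", "victim-phone-2"):
    if device_accepts("wallpaper-app", captured):
        print(device + ": app installed via replayed authorization")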

Three different iOS apps in the AceDeceiver family were uploaded to the official App Store between July 2015 and February 2016, and all of them claimed to be wallpaper apps. These apps successfully bypassed Apple's code review at least seven times (including the first time each was uploaded and then four rounds of code updates, each of which requires an additional review by Apple), using a method similar to that used by ZergHelper, in which the app tailors its behavior based on the physical geographic region in which it's being executed. In this case, AceDeceiver only displays malicious behaviors when a user is located in China, but that would be easy for the attacker to change at any time. Apple removed these three apps from the App Store after we reported them in late February 2016. However, the attack is still viable because the FairPlay MITM attack only requires these apps to have been available in the App Store once. As long as an attacker can get a copy of the authorization from Apple, the attack doesn't require current App Store availability to spread those apps.

To carry out the attack, the author created a Windows client called "爱思助手 (Aisi Helper)" to perform the FairPlay MITM attack. Aisi Helper purports to be software that provides services for iOS devices such as system re-installation, jailbreaking, system backup, device management and system cleaning. But what it's also doing is surreptitiously installing the malicious apps on any iOS device that is connected to the PC on which Aisi Helper is installed. (Of note, only the most recent app is installed on the iOS device(s) at the time of infection, not all three at the same time.) These malicious iOS apps provide a connection to a third-party app store, controlled by the author, from which users can download iOS apps or games. The apps encourage users to input their Apple IDs and passwords for more features, and if provided, these credentials are uploaded to AceDeceiver's C2 server after being encrypted. We also identified some earlier versions of AceDeceiver that had enterprise certificates dated March 2015.

As of this writing, it looks as though AceDeceiver only affects users in mainland China. The bigger issue, however, is that AceDeceiver is evidence of another relatively easy way for malware to infect non-jailbroken iOS devices. As a result, it’s likely we’ll see this start to affect more regions around the world, whether by these attackers or others who copy the attack technique. In addition, the new attack technique is more dangerous than previous ones for the following reasons:

1. It doesn’t require an enterprise certificate, hence this kind of malware is not under MDM solutions’ control, and its execution doesn’t need user’s confirmation of trusting anymore.
2. It hasn’t been patched and even when it is, it’s likely the attack would still work on older versions of iOS systems.
3. Even though these apps have been removed from the App Store, that doesn’t affect the attack. Attackers do not need the malicious apps to be always available in App Store for them to spread – they only require the apps ever available in App Store once, and require the user to install the client to his or her PC. However, ZergHelper and AceDeceiver have shown how easy it can be to bypass Apple’s code review process and get malicious apps into the App Store.
4. The attack doesn’t require victims to manually install the malicious apps; instead, it does that for them. That’s why they can be only available in a few regions’ App Store without affecting the success of the attack. This also makes them much harder to be discovered by Apple or by security firms researching iOS vulnerabilities.
5. While the attack requires a user’s PC to be infected by malware first, after that, the infection of iOS devices is completed in the background without the user’s awareness. The only indication is that the new malicious app does appear as an icon in the user’s home screen, so the user may notice a new app he or she won’t recall downloading.

Our analysis of AceDeceiver leads us to believe the FairPlay MITM attack will become another popular attack vector for non-jailbroken iOS devices – and thus a threat to Apple device users worldwide. Palo Alto Networks has released IPS signatures (38914, 38915) and has updated URL filtering and Threat Prevention to protect customers from the AceDeceiver Trojan as well as the FairPlay MITM attack technique.

In the rest of this blog, we take a detailed look at how AceDeceiver spreads, how it attacks, and how it is implemented. We will also cover how the FairPlay MITM attack works and discuss the security flaws in Apple's DRM technology.
http://researchcenter.paloaltonetwor...ny-ios-device/





Apple's Response To DOJ: Your Filing Is Full Of Blatantly Misleading Claims And Outright Falsehoods
Mike Masnick

As expected, Apple has now responded to the DOJ in the case about whether or not it can be forced to write code to break its own security features to help the FBI access the encrypted work iPhone of Syed Farook, one of the San Bernardino attackers. As we noted, the DOJ's filing was chock-full of blatantly misleading claims, and Apple was flabbergasted by just how ridiculous that filing was. That comes through in this response.

The government attempts to rewrite history by portraying the Act as an all-powerful magic wand rather than the limited procedural tool it is. As theorized by the government, the Act can authorize any and all relief except in two situations: (1) where Congress enacts a specific statute prohibiting the precise action (i.e., says a court may not “order a smartphone manufacturer to remove barriers to accessing stored data on a particular smartphone,” ... or (2) where the government seeks to “arbitrarily dragoon[]” or “forcibly deputize[]” “random citizens” off the street.... Thus, according to the government, short of kidnapping or breaking an express law, the courts can order private parties to do virtually anything the Justice Department and FBI can dream up. The Founders would be appalled.

The Founders would be appalled. That's quite a statement.

Apple also slams the DOJ for insisting that this really is all about the one iPhone and that the court should ignore the wider precedent, citing FBI Director James Comey's own statements:

It has become crystal clear that this case is not about a “modest” order and a “single iPhone,” Opp. 1, as the FBI Director himself admitted when testifying before Congress two weeks ago. Ex. EE at 35 [FBI Director James Comey, Encryption Hr’g] (“[T]he broader question we’re talking about here goes far beyond phones or far beyond any case. This collision between public safety and privacy—the courts cannot resolve that.”). Instead, this case hinges on a contentious policy issue about how society should weigh what law enforcement officials want against the widespread repercussions and serious risks their demands would create. “Democracies resolve such tensions through robust debate” among the people and their elected representatives, Dkt. 16-8 [Comey, Going Dark], not through an unprecedented All Writs Act proceeding.

Apple then, repeatedly, points out where the DOJ selectively quoted, misquoted or misleadingly quoted arguments in its favor. For example:

The government misquotes Bank of the United States v. Halstead,..., for the proposition that “‘[t]he operation of [the Act]’” should not be limited “‘to that which it would have had in the year 1789.’” ... (misquoting Halstead, 23 U.S. (10 Wheat.) at 62) (alterations are the government’s). But what the Court actually said was that the “operation of an execution”—the ancient common law writ of “venditioni exponas”—is not limited to that “which it would have had in the year 1789.” ... see also... (“That executions are among the writs hereby authorized to be issued, cannot admit of a doubt . . . .”). The narrow holding of Halstead was that the Act (and the Process Act of 1792) allowed courts “to alter the form of the process of execution.” ... (courts are not limited to the form of the writ of execution “in use in the Supreme Courts of the several States in the year 1789”). The limited “power given to the Courts over their process is no more than authorizing them to regulate and direct the conduct of the Marshal, in the execution of the process.”

The authority to alter the process by which courts issue traditional common law writs is not authority to invent entirely new writs with no common law analog. But that is precisely what the government is asking this Court to do: The Order requiring Apple to create software so that the FBI can hack into the iPhone has no common law analog.


The filing then goes step by step in pointing out how the government is wrong about almost everything. The DOJ, for example, kept insisting that CALEA doesn't apply at all to Apple, but Apple points out that the DOJ just seems to be totally misreading the law:

Contrary to the government’s assertion that its request merely “brush[es] up against similar issues” to CALEA..., CALEA, in fact, has three critical limitations—two of which the government ignores entirely—that preclude the relief the government seeks.... First, CALEA prohibits law enforcement agencies from requiring “electronic communication service” providers to adopt “any specific design of equipment, facilities, services, features, or system configurations . . . .” The term “electronic communication service” provider is broadly defined to encompass Apple. ... (“any service which provides to users thereof the ability to send or receive wire or electronic communications”). Apple is an “electronic communication services” provider for purposes of the very services at issue here because Apple’s software allows users to “send or receive . . . communications” between iPhones through features such as iMessage and Mail....

The government acknowledges that FaceTime and iMessage are electronic communication services, but asserts that this fact is irrelevant because “the Court’s order does not bear at all upon the operation of those programs.” ... Not so. The passcode Apple is being asked to circumvent is a feature of the same Apple iOS that runs FaceTime, iMessage, and Mail, because an integral part of providing those services is enabling the phone’s owner to password-protect the private information contained within those communications. More importantly, the very communications to which law enforcement seeks access are the iMessage communications stored on the phone.... And, only a few pages after asserting that “the Court’s order does not bear at all upon the operation of” FaceTime and iMessage for purposes of the CALEA analysis..., the government spends several pages seeking to justify the Court’s order based on those very same programs, arguing that they render Apple “intimately close” to the crime for purposes of the New York Telephone analysis.

Second, the government does not dispute, or even discuss, that CALEA excludes “information services” providers from the scope of its mandatory assistance provisions.... Apple is indisputably an information services provider given the features of iOS, including Facetime, iMessage, and Mail....

Finally, CALEA makes clear that even telecommunications carriers (a category of providers subject to more intrusive requirements under CALEA, but which Apple is not) cannot be required to “ensure the government’s ability” to decrypt or to create decryption programs the company does not already “possess.”... If companies subject to CALEA’s obligations cannot be required to bear this burden, Congress surely did not intend to allow parties specifically exempted by CALEA (such as Apple) to be subjected to it. The government fails to address this truism.


Next, Apple rebuts the DOJ saying that since CALEA doesn't address this specific situation, that means Congress is just leaving it up to the courts to use the All Writs Act. As Apple points out, in some cases, Congress not doing something doesn't mean it rejected certain positions, but in this case, the legislative history is quite clear that Congress did not intend for companies to be forced to help in this manner.

Here, Congress chose to require limited third-party assistance in certain statutes designed to aid law enforcement in gathering electronic evidence (although none as expansive as what the government seeks here), but it has declined to include similar provisions in other statutes, despite vigorous lobbying by law enforcement and notwithstanding its “prolonged and acute awareness of so important an issue” as the one presented here.... Accordingly, the lack of statutory authorization in CALEA or any of the complementary statutes in the “comprehensive federal scheme” of surveillance and telecommunications law speaks volumes.... To that end, Congress chose to “greatly narrow[]” the “scope of [CALEA],” which ran contrary to the FBI’s interests but was “important from a privacy standpoint.” ... Indeed, CALEA’s provisions were drafted to “limit[] the scope of [industry’s] assistance requirements in several important ways.”....

That the Executive Branch recently abandoned plans to seek legislation expanding CALEA’s reach... provides renewed confirmation that Congress has not acceded to the FBI’s wishes, and belies the government’s view that it has possessed such authority under the All Writs Act since 1789.


In fact, in a footnote, Apple goes even further in not just blasting the DOJ's suggestion that Congress didn't really consider a legislative proposal to update CALEA to suck in requirements for internet communications companies, but also highlighting the infamous quote from top intelligence community lawyer Robert Litt about how they'd just wait for the next terrorist attack and get the law passed in their favor at that point.

The government’s attempts to minimize CALEA II, saying its plans consisted of “mere[] vague discussions” that never developed into a formal legislative submission ..., but federal officials familiar with that failed lobbying effort confirmed that the FBI had in fact developed a “draft proposal” containing a web of detailed provisions, including specific fines and compliance timelines, and had floated that proposal with the White House..... As The Washington Post reported, advocates of the proposal within the government dropped the effort, because they determined they could not get what they wanted from Congress at that time: “Although ‘the legislative environment is very hostile today,’ the intelligence community’s top lawyer, Robert S. Litt, said to colleagues in an August [2015] e-mail, which was obtained by The Post, ‘it could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement.’ There is value, he said, in ‘keeping our options open for such a situation.’”

Next Apple goes through the arguments for saying that, even if the All Writs Act does apply, and even if the court accepts the DOJ's made up three factor test, Apple should still prevail. It notes, again, that it is "far removed" from the issue and reminds the court that the order sought here is very different from past cases where Apple has cooperated:

The government argues that “courts have already issued AWA orders” requiring manufacturers to “unlock” phones ... but those cases involved orders requiring “unlocking” assistance to provide access through existing means, not the extraordinary remedy sought here, i.e., an order that requires creating new software to undermine the phones’ (or in the Blake case, the iPad’s) security safeguards.

It also mocks the DOJ's weird argument that because Apple "licenses" rather than "sells" its software, Apple is more closely tied to the case:

The government discusses Apple’s software licensing and data policies at length, equating Apple to a feudal lord demanding fealty from its customers (“suzerainty”). ... But the government does not cite any authority, and none exists, suggesting that the design features and software that exist on every iPhone somehow link Apple to the subject phone and the crime. Likewise, the government has cited no case holding that a license to use a product constituted a sufficient connection under New York Telephone. Indeed, under the government’s theory, any ongoing postpurchase connection between a manufacturer or service provider and a consumer suffices to connect the two in perpetuity—even where, as here, the data on the iPhone is inaccessible to Apple.

From there, Apple dives in on the question of how much of a "burden" this would be. This is the issue that Judge Pym has indicated she's most interested in, and Apple goes deep here -- again and again focusing on how the DOJ was blatantly misleading in its motion:

Forcing Apple to create new software that degrades its security features is unprecedented and unlike any burden ever imposed under the All Writs Act. The government’s assertion that the phone companies in Mountain Bell and In re Application of the U.S. for an Order Authorizing the Installation of a Pen Register or Touch-Tone Decoder and a Terminating Trap ..., were conscripted to “write” code, akin to the request here... mischaracterizes the actual assistance required in those cases. The government seizes on the word “programmed” in those cases and superficially equates it to the process of creating new software..... But the “programming” in those cases—back in 1979 and 1980—consisted of a “technician” using a “teletypewriter” in Mountain Bell ..., and “t[ook] less than one minute” in Penn Bell... Indeed, in Mountain Bell, the government itself stated that the only burden imposed “was a large number of print-outs on the teletype machine”—not creating new code..... More importantly, the phone companies already had and themselves used the tracing capabilities the government wanted to access.... And although relying heavily on Mountain Bell, the government neglects to point out the court’s explicit warning that “[t]his holding is a narrow one, and our decision today should not be read to authorize the wholesale imposition upon private, third parties of duties pursuant to search warrants.” ...This case stands light years from Mountain Bell. The government seeks to commandeer Apple to design, create, test, and validate a new operating system that does not exist, and that Apple believes—with overwhelming support from the technology community and security experts—is too dangerous to create.

Seeking to belittle this widely accepted policy position, the government grossly mischaracterizes Apple’s objection to the requested Order as a concern that “compliance will tarnish its brand”..., a mischaracterization that both the FBI Director and the courts have flatly rejected. [See Comey] (“I don’t question [Apple’s] motive”);... (disagreeing “with the government’s contention that Apple’s objection [to being compelled to decrypt an iPhone] is not ‘conscientious’ but merely a matter of ‘its concern with public relations’”). As Apple explained in its Motion, Apple prioritizes the security and privacy of its users, and that priority is reflected in Apple’s increasingly secure operating systems, in which Apple has chosen not to create a back door.


Apple also calls out the DOJ's technical ignorance.

The government’s assertion that “there is no reason to think that the code Apple writes in compliance with the Order will ever leave Apple’s possession” ... simply shows the government misunderstands the technology and the nature of the cyber-threat landscape. As Apple engineer Erik Neuenschwander states:

I believe that Apple’s iOS platform is the most-attacked software platform in existence. Each time Apple closes one vulnerability, attackers work to find another. This is a constant and never-ending battle. Mr. Perino’s description of third-party efforts to circumvent Apple’s security demonstrates this point. And the protections that the government now asks Apple to compromise are the most security-critical software component of the iPhone—any vulnerability or back door, whether introduced intentionally or unintentionally, can represent a risk to all users of Apple devices simultaneously.

... The government is also mistaken in claiming that the crippled iOS it wants Apple to build can only be used on one iPhone:

Mr. Perino’s characterization of Apple’s process . . . is inaccurate. Apple does not create hundreds of millions of operating systems each tailored to an individual device. Each time Apple releases a new operating system, that operating system is the same for every device of a given model. The operating system then gets a personalized signature specific to each device. This personalization occurs as part of the installation process after the iOS is created.

Once GovtOS is created, personalizing it to a new device becomes a simple process. If Apple were forced to create GovtOS for installation on the device at issue in this case, it would likely take only minutes for Apple, or a malicious actor with sufficient access, to perform the necessary engineering work to install it on another device of the same model.

. . . [T]he initial creation of GovtOS itself creates serious ongoing burdens and risks. This includes the risk that if the ability to install GovtOS got into the wrong hands, it would open a significant new avenue of attack, undermining the security protections that Apple has spent years developing to protect its customers.


And, not surprisingly, Apple angrily attacks the DOJ's bogus, misleading use of Apple's transparency report statements about responding to lawful requests for government information in China, by pointing out how that's quite different from this situation:
Finally, the government attempts to disclaim the obvious international implications of its demand, asserting that any pressure to hand over the same software to foreign agents “flows from [Apple’s] decision to do business in foreign countries . . . .”. Contrary to the government’s misleading statistics ..., which had to do with lawful process and did not compel the creation of software that undermines the security of its users, Apple has never built a back door of any kind into iOS, or otherwise made data stored on the iPhone or in iCloud more technically accessible to any country’s government.... The government is wrong in asserting that Apple made “special accommodations” for China, as Apple uses the same security protocols everywhere in the world and follows the same standards for responding to law enforcement requests.

Apple also points out that the FBI appears to be contradicting itself as well:

Moreover, while they now argue that the FBI’s changing of the iCloud passcode—which ended any hope of backing up the phone’s data and accessing it via iCloud—“was the reasoned decision of experienced FBI agents”, the FBI Director himself admitted to Congress under oath that the decision was a “mistake”.... The Justice Department’s shifting, contradictory positions on this issue—first blaming the passcode change on the County, then admitting that the FBI told the County to change the passcode after the County objected to being blamed for doing so, and now trying to justify the decision in the face of Director Comey’s admission that it was a mistake—discredits any notion that the government properly exhausted all viable investigative alternatives before seeking this extraordinary order from this Court.

On the Constitutional questions, again Apple points out that the DOJ doesn't appear to understand what it's talking about:

The government begins its First Amendment analysis by suggesting that “[t]here is reason to doubt that functional programming is even entitled to traditional speech protections” ... , evincing its confusion over the technology it demands Apple create. Even assuming there is such a thing as purely functional code, creating the type of software demanded here, an operating system that has never existed before, would necessarily involve precisely the kind of expression of ideas and concepts protected by the First Amendment. Because writing code requires a choice of (1) language, (2) audience, and (3) syntax and vocabulary, as well as the creation of (4) data structures, (5) algorithms to manipulate and transform data, (6) detailed textual descriptions explaining what code is doing, and (7) methods of communicating information to the user, “[t]here are a number of ways to write code to accomplish a given task.”... As such, code falls squarely within the First Amendment’s protection, as even the cases cited by the government acknowledge...

Later, addressing the DOJ's claim that since Apple can write such code however it wants it is not compelled speech, Apple points out that this argument demonstrates the exact opposite:

The government attempts to evade this unavoidable conclusion by insisting that, “[t]o the extent [that] Apple’s software includes expressive elements . . . the Order permits Apple to express whatever it wants, so long as the software functions” by allowing it to hack into iPhones.... This serves only to illuminate the broader speech implications of the government’s request. The code that the government is asking the Court to force Apple to write contains an extra layer of expression unique to this case. When Apple designed iOS 8, it consciously took a position on an issue of public importance.... The government disagrees with Apple’s position and asks this Court to compel Apple to write new code that reflects its own viewpoint—a viewpoint that is deeply offensive to Apple.

The filing is basically Apple, over and over again, saying, "uh, what the DOJ said was wrong, clueless, technically ignorant, or purposely misleading." Hell, they even attack the DOJ's claim that the All Writs Act was used back in 1807 to force Aaron Burr's secretary to decrypt one of Burr's cipher-protected letters. Apple points out that the DOJ is lying.

The government contends that Chief Justice Marshall once ordered a third party to “provide decryption services” to the government.... He did nothing of the sort, and the All Writs Act was not even at issue in Burr. In that case, Aaron Burr’s secretary declined to state whether he “understood” the contents of a certain letter written in cipher, on the ground that he might incriminate himself.... The Court held that the clerk’s answer as to whether he understood the cipher could not incriminate him, and the Court thus held that “the witness may answer the question now propounded”—i.e., whether he understood the letter.... The Court did not require the clerk to decipher the letter.

If anything, to be honest, I'm surprised that Apple didn't go even harder on the DOJ for misrepresenting things. Either way, Apple is pretty clearly highlighting just how desperate the DOJ seems in this case.
https://www.techdirt.com/articles/20...lsehoods.shtml





Facebook, Google and WhatsApp Plan to Increase Encryption of User Data

Spurred on by Apple’s battles against the FBI, some of tech’s biggest names are to expand encryption of user data in their services, the Guardian can reveal
Danny Yadron

Silicon Valley’s leading companies – including Facebook, Google and Snapchat – are working on their own increased privacy technology as Apple fights the US government over encryption, the Guardian has learned.

The projects could antagonize authorities just as much as Apple's more secure iPhones, which are currently at the center of the San Bernardino shooting investigation. They also indicate the industry may be willing to back up its public support for Apple with concrete action.

Within weeks, Facebook’s messaging service WhatsApp plans to expand its secure messaging service so that voice calls are also encrypted, in addition to its existing privacy features. The service has some one billion monthly users. Facebook is also considering beefing up security of its own Messenger tool.

Snapchat, the popular ephemeral messaging service, is also working on a secure messaging system and Google is exploring extra uses for the technology behind a long-in-the-works encrypted email project.

Engineers at major technology firms, including Twitter, have explored encrypted messaging products before only to see them never be released because the products can be hard to use – or the companies prioritized more consumer-friendly projects. But they now hope the increased emphasis on encryption means that technology executives view strong privacy tools as a business advantage – not just a marketing pitch.

These new projects began before Apple entered a court battle with the Department of Justice over whether it should help authorities hack into a suspected terrorist’s iPhone. Apple is due to appear in a federal court in California later this month to fight the order.

Polling has shown public opinion is divided over the case. And any new encryption efforts by tech firms put them on a collision course with Washington. Two US senators, the Democrat Dianne Feinstein of California and the Republican Richard Burr of North Carolina, say they have written draft legislation that would create penalties for companies that aren't able to provide readable user data to authorities. Barack Obama has also made it clear he thinks some technology companies are going too far. "If government can't get in, then everyone's walking around with a Swiss bank account in their pocket, right?" he said on 11 March at the SXSW technology conference in Austin, Texas.


WhatsApp has been rolling out strong encryption to portions of its users since 2014, making it increasingly difficult for authorities to tap the service’s messages. The issue is personal for founder Jan Koum, who was born in Soviet-era Ukraine. When Apple CEO Tim Cook announced in February that his company would fight the government in court, Koum posted on his Facebook account: “Our freedom and our liberty are at stake.”

His efforts to go further still are striking as the app is in open confrontation with governments. Brazil authorities arrested a Facebook executive on 1 March after WhatsApp told investigators it lacked the technical ability to provide the messages of drug traffickers. Facebook called the arrest “extreme and disproportionate”.

WhatsApp already offers Android and iPhone users encrypted messaging. In the coming weeks, it plans to offer users encrypted voice calls and encrypted group messages, two people familiar with the matter said. That would make WhatsApp, which is free to download, very difficult for authorities to tap.

Unlike many encrypted messaging apps, WhatsApp hasn’t pushed the security functions of the service as a selling point to users. Koum, its founder, has said users should be able to expect that security is a given, not a bonus feature.

It’s unclear if that will change. In the coming weeks, WhatsApp plans to make a formal announcement about its expanded encryption offerings, sources said.

The efforts come at a crossroads for Silicon Valley. Google, Facebook, Snapchat, Amazon, Microsoft and Twitter have all signed on to legal briefs supporting Apple in its court case. At the same time, some of the companies have shown an increased willingness to help the government in its efforts to fight the spread of Islamic extremist propaganda online – often using their services.

Facebook’s chief operating officer, Sheryl Sandberg, has talked publicly about how tech companies can help the west combat Isis online and Eric Schmidt, executive chairman of Google’s parent company, Alphabet, recently joined a Defense Department advisory group on how tech can aid in future battles.

Those matters may seem separate, but US national security officials view the increasing availability of encryption technology as a major aid to Islamic State’s online recruitment efforts. At some point, tech firms may have to choose whether they care more about being seen as helping the west to fight terrorism or standing as privacy advocates.

Some technology executives think one middle path would be to encourage the use of encryption for the content of messages while maintaining the ability to hand over metadata, which reveals who is speaking to whom, how often and when. That is why the specifics of the new products will be key to determining both their security and Washington’s reaction to them.

The Guardian couldn’t immediately determine the specific details of Snapchat’s and Facebook’s projects. All the companies declined to comment.

In 2014, Google announced a project called End to End, which would make it easier to send encrypted emails in such a way that only the sender and recipient could decode them. The project, once a collaboration with Yahoo, has been slow-going.

That appears to have changed in recent months, though, sources familiar with the project said, and other Google employees have shown renewed interest in the idea. At a February internal town hall at Google, one engineer stood up and asked the vice-president of security and privacy engineering, Gerhard Eschelbeck, why Google wasn't doing more to support encrypted communications, according to two people familiar with the exchange.

Eschelbeck countered that the company was increasingly putting effort behind such projects. Some Google employees are discussing whether the technology behind End to End can be applied to other products, though no final determinations have been made.

“This has been an ongoing effort for a long time at Google,” one person briefed on the project said. One of the challenges for the search giant is that there are some types of data for which it remains challenging to offer end-to-end security, both for usability and business model reasons.

Google sells targeted ads by scanning users’ email, a process that gets tricky if the contents remain encrypted. Many consumers also use Gmail accounts, which include large amounts of free storage, as a sort of online file system, sometimes dating back more than a decade.

“There are lots of difficulties at Google that aren’t same at Apple,” the person briefed on the project said. “The business models are just different.”

In the meantime, WhatsApp’s encryption is based on code developed by a well-known privacy evangelist, Moxie Marlinspike, whose secure messaging app Signal is used by security hawks. One advantage of Marlinspike’s encryption tools is that they have been tested repeatedly by outside security experts.

Apple, the company behind the two-year debate over encryption, is also taking steps to beef up privacy. The company has been in discussions with outside security experts about ways to make it technically harder still for investigators to force the company to hand over data from customers’ iPhones, according to sources. The New York Times earlier reported on those conversations.

Last month, Frederic Jacobs, an accomplished cryptographer and one of the coders behind Signal, announced he had accepted a job at Apple. It’s a summer internship with the security team for the iPhone’s core software.
http://www.theguardian.com/technolog...tion-fbi-apple





What ISPs Can See

Clarifying the technical landscape of the broadband privacy debate
Harlan Yu, David Robinson, Aaron Rieke

In 2015, the Federal Communications Commission (FCC) reclassified broadband Internet service providers (ISPs) as common carriers under Title II of the Communications Act.1 This shift triggered a statutory mandate for the FCC to protect the privacy of broadband Internet subscribers’ information.2 The FCC is now considering how to craft new rules to clarify the privacy obligations of broadband providers.3

Last week, the Institute for Information Security & Privacy at Georgia Tech released a working paper whose senior author is Professor Peter Swire, entitled “Online Privacy and ISPs.”4 The paper describes itself as a “factual and descriptive foundation” for the FCC as the Commission considers how to approach broadband privacy.5 The paper suggests that certain technical factors limit ISPs’ visibility into their subscribers’ online activities. It also highlights the data collection practices of other (non-ISP) players in the Internet ecosystem.6

We believe that the Swire paper, although technically accurate in most of its particulars, could leave readers with some mistaken impressions about what broadband ISPs can see. We offer this report as a complement to the Swire paper, and an alternative, technically expert assessment of the present and potential future monitoring capabilities available to ISPs.

We observe that:

1. Truly pervasive encryption on the Internet is still a long way off. The fraction of total Internet traffic that’s encrypted is a poor proxy for the privacy interests of a typical user. Many sites still don’t encrypt: for example, in each of three key categories that we examined (health, news, and shopping), more than 85% of the top 50 sites still fail to encrypt browsing by default. This long tail of unencrypted web traffic allows ISPs to see when their users research medical conditions, seek advice about debt, or shop for any of a wide gamut of consumer products.

2. Even with HTTPS, ISPs can still see the domains that their subscribers visit. This type of metadata can be very revealing, especially over time. And ISPs are already known to look at this data — for example, some ISPs analyze DNS query information for justified network management purposes, including identifying which of their users are accessing domain names indicative of malware infection.

3. Encrypted Internet traffic itself can be surprisingly revealing. In recent years, computer science researchers have demonstrated that network operators can learn a surprising amount about the contents of encrypted traffic without breaking or weakening encryption. By examining the features of network traffic — like the size, timing and destination of the encrypted packets — it is possible to uniquely identify certain web page visits or otherwise obtain information about what the traffic contains.

4. VPNs are poorly adopted, and can provide incomplete protection. VPNs have been commercially available for years, but they are used sparsely in the United States, for a range of reasons we describe below.

We agree that public policy needs to be built on an accurate technical foundation, and we believe that thoughtful policies, especially those related to Internet technologies, should be reasonably robust to foreseeable technical developments.

We intend for this report to assist policymakers, advocates, and the general public as they consider the technical capabilities of broadband ISPs, and the broader technical context within which this policy debate is happening. This paper does not, however, take a position on any question of public policy.

Four Key Technical Clarifications

1. Truly pervasive encryption on the Internet is still a long way off.

Today, a significant portion of Internet activity remains unencrypted. When a web site uses the unencrypted Hypertext Transfer Protocol (HTTP), an ISP can see the full Uniform Resource Locator (URL) and the content for any web page requested by the user. Although many popular, high-traffic web sites have adopted encryption by default,7 a “long tail” of web sites have not.
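
As a concrete illustration (our own sketch, not drawn from the report), the following Python snippet makes an ordinary HTTP request over a raw socket; the hostname and path are hypothetical. Everything in the request string, and the entire response, crosses the network in plaintext, where an on-path observer such as an ISP can read it.

import socket

HOST = "example.com"                        # hypothetical unencrypted site
PATH = "/health/pancreatic-cancer"          # illustrative path only

request = (
    "GET " + PATH + " HTTP/1.1\r\n"
    "Host: " + HOST + "\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))   # sent in the clear: method, full path, host
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk                   # received in the clear: headers and page content

print(response[:200].decode("latin-1"))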

The fraction of total traffic that is encrypted on the Internet is a poor guide to the privacy interests of a typical user. The Swire paper argues that “the norm has become that deep links and content are encrypted on the Internet,” basing its claim on the true observation that “an estimated 70 percent of traffic will be encrypted by the end of 2016.”8 However, this number includes traffic from sites like Netflix, which itself accounts for about 35% of all downstream Internet traffic in North America.9

Sensitivity doesn’t depend on volume. For instance, watching the full Ultra HD stream of The Amazing Spider-Man could generate more than 40GB of traffic, while retrieving the WebMD page for “pancreatic cancer” generates less than 2MB. The page is 20,000 times less data by volume, but likely far more sensitive than the movie. (WebMD has yet to offer users the option of secure HTTPS connections, much less to make that option the sole or default choice.)

We conducted a brief survey of the 50 most popular web sites in each of three categories — health, news and shopping — as ranked by Alexa.10

The Long Tail of Unencrypted Web Traffic
Alexa Top 50 Sites, by Category

Health: 86% of sites do not encrypt browsing by default
Example URLs for unencrypted web sites:
http://webmd.com/hiv-aids/guide/…
http://mayoclinic.org/…cancer…
http://medicinenet.org/…eczema…
http://health.com/sexual-health
http://who.int/…childhood-hearing-loss…

News: 90% of sites do not encrypt browsing by default
Example URLs for unencrypted web sites:
http://nytimes.com/…tax-tips…
http://huffingtonpost.com/divorce
http://video.foxnews.com/…sex-after-50…
http://time.com/…gay-rights…
http://bankrate.com/debt-management…

Shopping: 86% of sites do not encrypt browsing by default
Example URLs for unencrypted web sites:
http://ikea.com/…bathroom…
http://target.com/…study-bible…
http://macys.com/…maternity-clothes…
http://bedbathandbeyond.com/…acne-wash…
http://toysrus.com/…Toddler-Toys…

We found that the vast majority of these web sites — more than 85% of sites in each of the three areas — still do not fully support encrypted browsing by default.11 These sites included references on a full range of medical conditions, advice about debt management, and product listings for hundreds of millions of consumer products. For these unencrypted pages, ISPs can see both the full web site URLs and the specific content on each web page. Many sites are small in data volume, but high in privacy sensitivity. They can paint a revealing picture of the user’s online and offline life, even within a short period of time.
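
A rough sketch of how such a check might be performed (our illustration, not the authors' actual methodology): request each site's plain-HTTP homepage and see whether the server redirects the visitor to an https:// address by default. The sites listed are illustrative examples from the table above.

import urllib.request

SITES = ["webmd.com", "nytimes.com", "ikea.com"]   # illustrative examples from the table above

for site in SITES:
    try:
        with urllib.request.urlopen("http://" + site + "/", timeout=10) as resp:
            final_url = resp.geturl()              # URL after any redirects
        if final_url.startswith("https://"):
            print(site + ": redirects to HTTPS by default")
        else:
            print(site + ": still serves plain HTTP by default")
    except Exception as exc:                       # DNS failures, timeouts, and so on
        print(site + ": check failed (" + str(exc) + ")")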

Sites struggle to adopt encryption. From the perspective of one of these unencrypted web sites, it can be very challenging to migrate to HTTPS, especially when the site relies on a wide range of third-party partners for services including advertising, analytics, tracking, or embedded videos. In order for a site to migrate to HTTPS without triggering warnings in its users’ browsers, each one of the third-party partners that site uses on its pages must support HTTPS.12

Getting third-party partners to support HTTPS is a serious hurdle, even for sites that want to make the switch.13 For example, in a 2015 survey of 2,156 online advertising services, more than 85% did not support HTTPS.14 Moreover, as of early 2015, only 38% of the 123 services in the Digital Advertising Alliance's own database supported HTTPS.15 An accompanying figure depicts the top 100 news sites and their third-party partners; each unit of red or burgundy indicates a partner that does not support HTTPS. In order for any one of these news sites to provide its content to users securely (without creating warning or error messages), the publisher must either wait for all of its red and burgundy partners to turn green, or else abandon those partners on any secure parts of its site. The online advertising industry is working to improve its security posture,16 but clearly there remains a long road ahead.

Internet of Things devices often transmit data without encryption. It’s not only web sites that fail to encrypt traffic transmitted over broadband connections. Many Internet of Things (IoT) devices, such as smart thermostats, home voice integration systems, and other appliances, fail to encrypt at least some of the traffic that they send and receive.17 For example, researchers at the Center for Information Technology Policy at Princeton recently found a range of popular devices — from the Nest thermostat to the Ubi voice system, to the PixStar photo frame — transmitting unencrypted data across the network.18 “Investigating the traffic to and from these devices turned out to be much easier than expected,” observed Professor Nick Feamster.19

As more users adopt mobile devices, they communicate with a greater number of ISPs. Use of mobile devices is growing rapidly as a portion of users’ overall Internet activity. The Swire paper observes that today’s ISPs face a more “fractured world” in which they have a “less comprehensive view of a user’s Internet activity.”20 It is true that today, many consumers’ personal Internet activities are spread out over several connections: a home provider, a workplace provider, and a mobile provider. However, a user often has repeated, ongoing, long-term interactions with both her mobile and her wireline provider. Over time, each ISP can see a substantial amount of that user’s Internet traffic. There’s plenty of activity to go around: The amount of time U.S. consumers spend on connected devices has increased every year since 2008.21

2. Even with HTTPS, ISPs can still see the domains that their subscribers visit.

The increased use of encryption on the Web is a substantial privacy improvement for users. When a web site does use HTTPS, an ISP cannot see URLs and content in unencrypted form. However, ISPs can still almost always see the domain names that their subscribers visit.

DNS queries are almost never encrypted.22 ISPs can see the visited domains for each subscriber by monitoring requests to the Domain Name System (DNS). DNS is a public directory that translates a domain name (like bankofamerica.com) into a corresponding IP address (like 171.161.148.150). Before the user visits bankofamerica.com for the first time, the user's computer must first learn the site's IP address, so the computer automatically sends a background DNS query about bankofamerica.com.

Even if connections to bankofamerica.com are encrypted, DNS queries about bankofamerica.com are not. In fact, DNS queries are almost never encrypted. ISPs could simply monitor what queries their users are making over the network.

Collection and use of DNS queries by ISPs is practical, is cost effective, and happens today on ISP networks. Because the user’s computer is assigned by default to use the ISP’s DNS server, the ISP is generally capable of retaining and analyzing records of the queries, which the users themselves send to the ISP in the normal course of their browsing. The Swire paper asserts that it “appears to be impractical and cost-prohibitive” to collect and use DNS queries, but cites no technical or other authority for that assessment.23 Our technical experience indicates that logging is both feasible and relatively cheap to do: Modern networking equipment can easily log these requests for later analysis. Moreover, even if the user’s computer is specially configured to use an external DNS server (not operated by the user’s ISP), the DNS queries must still reach that external server unencrypted, and those queries must still travel over the ISP’s network, creating the opportunity to inspect them.
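
As a simple illustration of how cheap such logging can be, the sketch below (ours, not any ISP's actual tooling) uses the third-party scapy library and would need packet-capture privileges on a suitable network vantage point; it prints the name in every plaintext DNS query it sees.

from datetime import datetime
from scapy.all import sniff, DNS, DNSQR, IP

def log_dns_query(pkt):
    # Standard queries (qr == 0) carry the domain name the user is looking up.
    if pkt.haslayer(IP) and pkt.haslayer(DNSQR) and pkt[DNS].qr == 0:
        name = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        print("[" + datetime.now().strftime("%Y/%m/%d %H:%M:%S") + "] " +
              pkt[IP].src + " -> " + name)

# Watch UDP port 53 traffic on the local interface and log each query name.
sniff(filter="udp port 53", prn=log_dns_query, store=False)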

In fact, ISPs already do monitor user DNS queries for valid network management purposes, including to detect potential infections of malicious software on user devices.24 Certain domain names are used solely by malicious software tools, and real user traffic can be analyzed to identify and block such domains.25 Moreover, when an individual user visits a compromised domain, this is a strong sign that one or more of that user’s devices is infected, and commercially available tools allow ISPs to notify the user about the potential infections.26 According to literature from a network equipment vendor, Comcast currently deploys this security-focused, per-subscriber DNS monitoring functionality on its network.27
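
A simplified sketch of that kind of detection (ours, not Comcast's or any vendor's actual system): compare each subscriber's logged query names against a feed of domains known to be used only by malware, and flag subscribers whose devices appear infected. The domains and subscriber identifiers are hypothetical.

# Hypothetical threat feed; the (subscriber, domain) pairs could come from a
# query logger like the sketch above.
MALWARE_DOMAINS = {"evil-c2.example", "botnet-update.example"}

query_log = [
    ("subscriber-17", "news-site.example"),
    ("subscriber-17", "evil-c2.example"),
    ("subscriber-42", "shopping-site.example"),
]

flagged = {sub for sub, domain in query_log if domain in MALWARE_DOMAINS}
for subscriber in sorted(flagged):
    print(subscriber + ": queried a known malware domain; notify the user of a possible infection")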

Researchers in 2011 also found that several small ISPs were already leveraging their role as DNS providers to not only monitor, but actively interfere with, DNS resolution for their users.28 To be clear, we are not aware of any evidence that large ISPs have yet begun to use DNS queries in privacy-invasive ways, much less to interfere with subscribers’ queries along the lines detected in 2011. We observe here only that it is technologically feasible today for ISPs both to monitor and to interfere with DNS queries.

Although network security is not substantially impacted by a modest to moderate amount of VPN usage, there are meaningful engineering downsides to a future in which most or all DNS queries are cryptographically concealed from the end user's ISP. (Such a future could, for example, make it more difficult for ISPs to provide early detection of, and swift response to, some kinds of malware attacks.) At the same time, as long as the user's DNS queries are visible to the ISP for network management purposes, the ISP will also have a technologically feasible option to analyze those queries in ways that would compromise user privacy.

Even a short series of visited domains from one subscriber can be sensitive. A pivotal moment in a user’s life, for example, could generate the following log at the user’s ISP (assuming the user hasn’t invested in special privacy tools):

[2015/03/09 18:34:44] abortionfacts.com
[2015/03/09 18:35:23] plannedparenthood.org
[2015/03/09 18:42:29] dcabortionfund.org
[2015/03/09 19:02:12] maps.google.com

Over a longer period of time, metadata can paint a revealing picture about a subscriber’s habits and interests. As other policy discussions have made clear in recent years, metadata is very revealing over time.29 For example, in the context of telephony metadata, the President’s Review Group on Intelligence and Communications Technologies found that “the record of every telephone call an individual makes or receives over the course of several years can reveal an enormous amount about that individual’s private life.”30 The Group went on to note that “[i]n a world of ever more complex technology, it is increasingly unclear whether the distinction between ‘meta-data’ and other information carries much weight.”31

This reasoning applies with equal strength to domain names, which we believe are likely to be even more revealing than telephone records. Such a list of domains could also indicate the presence of various “smart” devices in the subscriber’s home, based on the known domains that these devices automatically connect to.32
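By way of illustration only, the inference from domain names to household devices can be as simple as a table lookup. The domain names and device types below are invented; the point is how little analysis is needed once an ISP has the list of queried domains.

# Hypothetical mapping from domains that "smart" devices are known to contact
# automatically to the kind of device that contacts them (entries invented).
DEVICE_DOMAINS = {
    "thermostat-telemetry.example": "smart thermostat",
    "tv-firmware.example": "smart television",
    "babycam-cloud.example": "networked baby monitor",
}

def infer_devices(observed_domains):
    """Guess which devices are present in a home based on domains seen in its DNS queries."""
    return {DEVICE_DOMAINS[d] for d in observed_domains if d in DEVICE_DOMAINS}

print(infer_devices({"tv-firmware.example", "news.example.com"}))
# -> {'smart television'}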

3. Encrypted Internet traffic itself can be surprisingly revealing.

Encryption stops ISPs from simply reading content and URL information directly off the wire. However, it is important to understand that encryption still leaves open a wide variety of other, less direct methods for ISPs to monitor their users if they choose to.

A growing body of computer science research demonstrates that a network operator can learn a surprising amount about the contents of encrypted traffic without breaking or weakening encryption. By examining the features of the traffic — like the size, timing and destination of the encrypted packets — it is possible to uniquely identify certain web page visits or otherwise reveal information about what those packets likely contain. In the technical literature, inferences reached in this way are called “side channel” information.

Some of these methods are already in use in the field today: in countries that censor the Internet, government authorities are able to identify and disrupt targeted data access based on its secondary traits even when access is encrypted. Concerningly, such nations often rely on Western technology vendors, whose advanced products allow censors increasingly to analyze and act on traffic at “line speed” (that is, in real time, as the data passes through a network).33

The side channel methods that we describe below are likely not used (or at least not widely used) by ISPs today. But as encryption spreads, these techniques might become much more compelling. Policymakers should have a clear understanding of what’s possible for ISPs to learn, both now and in the future.

Identifying specific sites and pages. Web site fingerprinting is a well-known technique that allows an ISP to potentially identify the specific encrypted web page that a user is visiting.34 This technique leverages the fact that different web sites have different features: they send differing amounts of content, and they load different third-party resources, from different locations, in different orders. By examining these features, it’s often possible to uniquely identify the specific web page that the user is accessing, despite the strong encryption that protects the traffic in transit.

Researchers have published numerous studies on the topic of web site fingerprinting. In one early study using a relatively basic technique, researchers found that approximately 60% of the web pages they studied were uniquely identifiable based on such unconcealed features.35 Later studies have introduced more advanced techniques, as well as possible countermeasures. But even with various defenses in place, researchers were still able to distinguish precisely which out of a hundred different sites a user was visiting, more than 50% of the time.36
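To give a flavor of how web site fingerprinting works, the following deliberately simplified Python sketch reduces each page load to a sequence of encrypted packet sizes and matches an unknown trace against the closest known fingerprint. The traces are invented, and real attacks in the literature use richer features (timing, direction, bursts) and proper statistical classifiers; this is only meant to show the shape of the technique.

# Pre-recorded "fingerprints": packet-size sequences observed when loading known pages.
FINGERPRINTS = {
    "news-site-front-page": [1500, 1500, 620, 1500, 280, 990],
    "medical-condition-page": [1500, 430, 1500, 1500, 1500, 75],
}

def distance(a, b):
    """Sum of absolute size differences, padding the shorter trace with zeros."""
    length = max(len(a), len(b))
    a = a + [0] * (length - len(a))
    b = b + [0] * (length - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(trace):
    """Return the known page whose fingerprint is closest to the observed trace."""
    return min(FINGERPRINTS, key=lambda page: distance(trace, FINGERPRINTS[page]))

print(classify([1500, 1490, 600, 1500, 300, 1000]))  # -> 'news-site-front-page'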

This body of research illustrates that decrypting a communication isn’t necessarily the only way to “see” it. The Swire paper asserts that “[w]ith encrypted content, ISPs cannot see detailed URLs and content even if they try.”37 To be fully accurate, however, that claim requires qualification: ISPs generally cannot decrypt detailed URLs and content. But, this class of research demonstrates that with some amount of effort, it would indeed be feasible for ISPs to learn detailed URLs (and through those URLs, in some instances, the actual content of web pages) in a range of real-world situations.

Deriving search queries. Popular search engines — like Google, Yahoo and Bing — provide a user-friendly feature called auto-suggest: after the user enters each character, the search engine suggests a list of popular search queries that match the current prefix, in an attempt to guess what the user is searching for. By analyzing the distinctive size of these encrypted suggestion lists that are transmitted after each key press, researchers were able to deduce the individual characters that the user typed into the search box, which together reveal the user’s entire search query.38
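A toy version of that idea, with invented numbers: if an observer has pre-measured the encrypted auto-suggest response size produced by each candidate keystroke, the observed sizes alone map back to the characters typed. Real responses vary with the whole prefix, so a practical attack builds a much larger table; this one-step model is only meant to illustrate the principle.

# Hypothetical, pre-measured encrypted response sizes (in bytes) for each candidate character.
RESPONSE_SIZE_BY_CHAR = {
    "a": 812, "b": 790, "c": 845, "d": 801, "e": 777,
}

def recover_keystrokes(observed_sizes):
    """Map each observed encrypted response size back to the character that produced it."""
    size_to_char = {size: char for char, size in RESPONSE_SIZE_BY_CHAR.items()}
    return "".join(size_to_char.get(size, "?") for size in observed_sizes)

print(recover_keystrokes([790, 777, 845]))  # -> 'bec'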

Inferring other “hidden” content. Researchers have applied similar methods to infer the medical condition of users of a personal health web site, and the annual family income and investment choices of users of a leading financial web site — even though both of those sites are only reachable via encrypted, HTTPS connections.39 (Again, the researchers obtained these results without decrypting the encrypted traffic.) Other researchers of side-channel methods have been able to reconstruct portions of encrypted VoIP conversations,40 and user actions from within encrypted Android apps.41

Such examples have led researchers to conclude that side-channel information leaks on the web are “a realistic and serious threat to user privacy.”42 These types of leaks are often difficult or expensive to prevent. There has been significant computer science research into practical defenses to defeat these side-channel methods. But as one group of researchers concluded, “in the context of website identification, it is unlikely that bandwidth-efficient, general-purpose [traffic analysis] countermeasures can ever provide the type of security targeted in prior work.”43

These methods are in the lab today — not yet in the field, as far as we know. But the path from computer science research to widespread deployment of a new technology can be short.

4. VPNs are poorly adopted, and can provide incomplete protection.

One way that subscribers can protect their Internet traffic in transit is to use a virtual private network (VPN). VPNs are often found in business settings, enabling employees who are away from the office to connect securely over the Internet to their company’s internal network (often with setup help from the employer’s IT department). When using a VPN, the user’s computer establishes an encrypted tunnel to the VPN server (say, the one operated by the employee’s company) and then, depending on the VPN configuration, sends some or all of the user’s Internet traffic through the encrypted tunnel.

The Swire paper presents VPNs (and other encrypted proxy services) as an up-and-coming source of protection for subscribers.44 However, there are reasons to question whether VPNs will in fact have a significant impact on personal Internet use in the United States.

U.S. subscribers rarely make personal use of VPNs. VPNs have been commercially available for years, but they are used sparsely in the United States. According to a 2014 survey cited by the Swire paper, only 16% of North American users have used a VPN (or a proxy service) to connect to the Internet.45 This figure describes the percent of users who have ever used a VPN or a proxy before — not those who use such services on a consistent or daily basis, which is what protection from persistent ISP monitoring would actually require. Moreover, many of the 16% of users who have used a VPN are likely business users, rather than personal users looking to protect their privacy. It is fair to conclude that only a very small number of U.S. users actually use a VPN or proxy service on a consistent basis for personal privacy purposes.

Moreover, several adoption hurdles are likely to deter unsophisticated users. Reliable VPNs can be costly, requiring an additional paid monthly subscription on top of the user’s Internet service. They also slow down the user’s Internet speeds, since they route traffic through an intermediate server. (There are free VPN services available, but subscribers generally get what they pay for.46)

The rate of VPN use in the U.S. is among the lowest in the world.47 VPN use is much more pronounced in countries like Indonesia, Thailand and China, where Internet users turn to VPNs as a way to circumvent online censorship and gain access to restricted content.48

VPNs are not a privacy silver bullet. The use of VPNs and encrypted proxies merely shifts user trust from one intermediary (the ISP) to another (the VPN or proxy operator). In order to more thoroughly protect their traffic from their ISP, a subscriber must entrust that same traffic to another network operator.

Furthermore, VPNs may not protect users as well as the Swire paper might lead readers to believe. The paper states that “Where VPNs are in place, the ISPs are blocked from seeing . . . the domain name the user visits.”49 But this is not always true: whether ISPs can see the domain names that users visit depends entirely on the user’s VPN configuration, and it would be quite difficult for non-experts to tell whether their configuration is properly tunneling their DNS queries, let alone to know that this is a question that needs to be asked. Such DNS leaks are particularly common for Windows users.50
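As a rough illustration of the kind of check a cautious user (or a well-built VPN client) could perform, the following Python sketch reads the resolvers a Linux machine is configured to use and warns when any fall outside an assumed VPN address range. The 10.8.0.0/24 range and the reliance on /etc/resolv.conf are assumptions for illustration; the check covers configuration only and cannot prove that queries actually travel through the tunnel.

import ipaddress

VPN_RANGE = ipaddress.ip_network("10.8.0.0/24")  # hypothetical range; use your VPN's documented subnet

def leaked_resolvers(resolv_conf="/etc/resolv.conf"):
    """Return configured nameservers that are not inside the VPN's address range."""
    leaked = []
    with open(resolv_conf) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                try:
                    addr = ipaddress.ip_address(parts[1])
                except ValueError:
                    continue
                if addr not in VPN_RANGE:
                    leaked.append(str(addr))
    return leaked

if __name__ == "__main__":
    bad = leaked_resolvers()
    if bad:
        print("Possible DNS leak via:", ", ".join(bad))
    else:
        print("All configured resolvers are inside the VPN range")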

Conclusion

Today, ISPs can see a significant amount of their subscribers’ Internet activity, and have the ability to infer substantial amounts of sensitive information from it. This is especially true when that traffic is unencrypted. However, even when Internet traffic is encrypted using HTTPS, ISPs generally retain visibility into their subscribers’ DNS queries. Detailed analysis of DNS query information on a per-subscriber basis is not only technically feasible and cost-effective, but actually takes place in the field today. Moreover, ISPs and the vendors that serve them have clear opportunities to develop methods of inferring important information even from encrypted data flows. VPNs are one tool that subscribers can use to protect their online activities, but VPNs are poorly adopted, can be difficult to use, and often provide incomplete protections.

We hope that this report will contribute to a more complete understanding of the technical capabilities of broadband ISPs, and the broader technical context within which the broadband privacy debate is happening.
https://www.teamupturn.com/reports/2...t-isps-can-see





Microsoft Cloud in Germany
Ralf Wi

Now it’s public: starting in 2016, Microsoft will offer its cloud services Microsoft Azure, Office 365 and Dynamics CRM Online from within German datacenters, in addition to the more than 100 worldwide datacenters. That alone wouldn’t be really surprising or innovative, but the unique thing about this is that the keys (physical and logical) that control access to customer data in this cloud are held by a German company: Deutsche Telekom’s subsidiary T-Systems, which will act as a Data Trustee. So Microsoft will have no access to customer data without approval and supervision by the Data Trustee. How does this work? Well, let’s have a closer look…

All access rights are handled by a role-based access control model, better known as RBAC. Those roles are based on functions (Reader, Owner etc.) and/or on realms (servers, mailboxes, resource groups etc.). Let’s say you have defined a resource group in Azure and filled it with two VMs, some storage, a network and an external IP; you can then assign a user the administrator role for that particular resource group. These rights will only affect the resources inside the group, not your whole subscription or other resource groups, servers or even mailboxes.
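To make the scoping idea concrete, here is a small Python sketch of how such scoped role assignments behave: a user granted a role on one resource group gets no rights on anything outside it. This is a toy model, not Microsoft’s implementation or the Azure API, and all names below are invented.

# Toy model of scoped role assignments: (user, role, scope).
ASSIGNMENTS = [
    ("anna@contoso.de", "Owner", "resource-group/web-frontend"),
    ("max@contoso.de", "Reader", "resource-group/web-frontend"),
]

def is_allowed(user, action, resource):
    """Readers may read, Owners may read and write, and only within their assigned scope."""
    for who, role, scope in ASSIGNMENTS:
        if who == user and resource.startswith(scope + "/"):
            if action == "read" or (action == "write" and role == "Owner"):
                return True
    return False

print(is_allowed("anna@contoso.de", "write", "resource-group/web-frontend/vm1"))  # True
print(is_allowed("anna@contoso.de", "write", "resource-group/billing/invoices"))  # False: outside her scope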

In this new model, Microsoft has no rights at all to access customer data. Only for a specific purpose, such as a support call from a customer, will the Data Trustee grant temporary access to the Microsoft engineer, and only for the specified area. After that time window (using a technology similar to what you might know as JIT, or just-in-time access), all access is revoked automatically. So to repeat: access is granted to the Microsoft engineer only by the Data Trustee. Microsoft has no way to grant that access to itself. And of course this process is logged to an area that Microsoft cannot access either. In addition, the Data Trustee escorts the session and watches the engineer at work.
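The following Python snippet is a rough, hypothetical sketch of that pattern of time-limited, scoped and logged access; it says nothing about how Microsoft and T-Systems actually implement it, and every name in it is invented.

import time

class TemporaryGrant:
    """A grant that is valid only within one scope and only until its window expires."""
    def __init__(self, engineer, scope, minutes, audit_log):
        self.engineer = engineer
        self.scope = scope
        self.expires_at = time.time() + minutes * 60
        self.audit_log = audit_log
        audit_log.append(("granted", engineer, scope, self.expires_at))

    def allows(self, resource):
        ok = time.time() < self.expires_at and resource.startswith(self.scope)
        self.audit_log.append(("checked", self.engineer, resource, ok))
        return ok

log = []  # in the real design, the grantee cannot alter this log
grant = TemporaryGrant("support-engineer", "customer-42/mailboxes", minutes=60, audit_log=log)
print(grant.allows("customer-42/mailboxes/inbox"))  # True while the window is open
print(grant.allows("customer-17/mailboxes/inbox"))  # False: outside the approved area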

The same RBAC model is also in place for physical access to the datacenters. The Data Trustee has to approve the visit and will escort Microsoft or any of its subcontractors at all times during the visit.

For all those cases where Microsoft could come into contact with customer data, it needs a reason related to the operation of the services (an incident, a support case etc.), a well-defined area of access, and a well-defined time period; only then will the trustee grant access.

So to wrap it up in two simple questions:

• Does Microsoft have access to customer data? Yes, but only with a valid reason like incident, support call etc., only to specified areas and only for a limited timeframe. And it’s the German Data Trustee that makes the decision.
• Can Microsoft access customer data without approval by the German Data Trustee or the customer? No!

So much for that. Now to the more technical questions you might have, for example: Where is customer data stored? Well, this is a simple question, and the answer is: only in the German datacenters. Data exchange between those two datacenters (or better, let’s start talking of regions instead of datacenters: Germany Central and Germany Northeast) is handled over a dedicated network line leased from a German provider, just to make sure that no data is accidentally routed outside of Germany. There is no additional replication or backup to other datacenters (ah, sorry, I mean regions); even the Azure Active Directory (AAD) is only replicated between those two German regions. Only a small index table is replicated through all regions, to make sure that the German regions are not a standalone solution but still part of the global Microsoft Azure cloud platform. This index table lets Azure find the region your subscription lives in (based on the domain part of your login) and redirect your browser etc. to the corresponding datacenter. It contains no user data, no passwords, not even hashes (or hashes of hashes). For example: a login with “max@contoso.de” finds “contoso.de” in the index, maps it to the region “Germany Central”, and redirects the browser (even before the user enters the password) to a German datacenter in that region. Only there can the user data inside AAD be accessed, and of course this portal is already inside the German cloud and therefore under the custody of the German Data Trustee. By the way, this scenario makes it clear why a domain like contoso.de can currently only exist inside the Microsoft Cloud in Germany or outside of it, but not in both at the same time.
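As a purely illustrative sketch (with invented domains, regions and portal addresses), that lookup step might look something like this in Python; only the domain-to-region mapping is shared globally, and everything else stays in the region.

# Globally replicated index: domain part of the login -> region (entries invented).
REGION_INDEX = {
    "contoso.de": "Germany Central",
    "fabrikam.com": "West US",
}

# Hypothetical sign-in portals, one per region.
REGION_PORTALS = {
    "Germany Central": "https://portal.germany-central.example",
    "West US": "https://portal.west-us.example",
}

def portal_for(login):
    """Pick the sign-in portal from the domain part of the login, before any password is sent."""
    domain = login.split("@", 1)[1].lower()
    return REGION_PORTALS[REGION_INDEX[domain]]

print(portal_for("max@contoso.de"))  # -> the (hypothetical) Germany Central portal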

Certificates? Well, that would be a way to grant access, but we thought about that too. To explain: all communication inside the Azure cloud infrastructure is encrypted with SSL/TLS based on certificates. So who can prevent Microsoft from simply creating a certificate and accessing data? Well, here we go: for all SSL certificates issued in the Microsoft Cloud in Germany, the role of the Certification Authority (CA) was handed over to an external CA. That means: whenever Microsoft requests a new SSL certificate, let’s say for a new service, the external CA has to approve it.
Sounds good? Right. Sounds really good, and since I’ve been part of the team that has been building this solution for the last half year, I can tell you that it does not only sound good; it is good.

I’m sure this won’t be the last blog article to the new Microsoft Cloud in Germany, so stay tuned!
http://blogs.technet.com/b/ralfwi/ar...n-germany.aspx





The FBI has a New Plan to Spy on High School Kids Across the Country

New surveillance program urges teachers to report students critical of government policies and "Western corruption"
Sarah Lazare


Under new guidelines, the FBI is instructing high schools across the country to report students who criticize government policies and “western corruption” as potential future terrorists, warning that “anarchist extremists” are in the same category as ISIS and that young people who are poor, immigrants or who travel to “suspicious” countries are more likely to commit horrific violence.

Based on the widely unpopular British “anti-terror” mass surveillance program, the FBI’s “Preventing Violent Extremism in Schools” guidelines, released in January, are almost certainly designed to single out and target Muslim-American communities. However, in its caution to avoid the appearance of discrimination, the agency identifies risk factors that are so broad and vague that virtually any young person could be deemed dangerous and worthy of surveillance, especially if she is socio-economically marginalized or politically outspoken.

This overwhelming threat is then used to justify a massive surveillance apparatus, wherein educators and pupils function as extensions of the FBI by watching and informing on each other.

The FBI’s justification for such surveillance is based on McCarthy-era theories of radicalization, in which authorities monitor thoughts and behaviors that they claim lead to acts of violent subversion, even if the people being watched have not committed any wrongdoing. This model has been widely discredited as a violence prevention method, including by the U.S. government, but it is now being imported to schools nationwide as official federal policy.

Schools as hotbeds of extremism

The new guidelines depict high schools as hotbeds of extremism, where dangers lurk in every corner. “High school students are ideal targets for recruitment by violent extremists seeking support for their radical ideologies, foreign fighter networks, or conducting acts of violence within our borders,” the document warns, claiming that youth “possess inherent risk factors.” In light of this alleged threat, the FBI instructs teachers to “incorporate a two-hour block of violent extremism awareness training” into the core curriculum for all youth in grades 9 through 12.

According to the FBI’s educational materials for teenagers, circulated as a visual aide to their new guidelines, the following offenses constitute signs that “could mean that someone plans to commit violence” and therefore should be reported: “Talking about traveling to places that sound suspicious”; “Using code words or unusual language”; “Using several different cell phones and private messaging apps”; and “Studying or taking pictures of potential targets (like a government building).”

Under the category of domestic terrorists, the educational materials warn of the threat posed by “anarchist extremists.” The FBI states, “Anarchist extremists believe that society should have no government, laws, or police, and they are loosely organized, with no central leadership… Violent anarchist extremists usually target symbols of capitalism they believe to be the cause of all problems in society—such as large corporations, government organizations, and police agencies.”

Similarly, “Animal Rights Extremists and Environmental Extremists” are placed alongside “white supremacy extremists”, ISIS and Al Qaeda as terrorists out to recruit high school students. The materials also instruct students to watch out for extremist propaganda messages that communicate criticisms of “corrupt western nations” and express “government mistrust.”

If you “see suspicious behavior that might lead to violent extremism,” the resource states, consider reporting it to “someone you trust,” including local law enforcement officials like police officers and FBI agents.

This terrorist threat does not stay within the geographic bounds of high schools, but extends to the Internet, which the FBI guidelines describe as a “playground” for extremism. The agency warns that online gaming “is sometimes used to communicate, train, or plan terrorist activities.” Encryption, ominously referred to as “going dark,” is often used to facilitate “extremism discussions,” the agency states. In reality, encryption is a commonly used form of protection against government spying and identity theft and is often employed to safeguard financial transactions.

Young Muslims are the real targets

At the surface level, the FBI’s new guidelines do not appear to single out Muslim students. The document and supplementary educational materials warn of a broad array of threats, including anti-abortion and white supremacist extremists. The Jewish Defense League is listed alongside Hizbollah and Al Qaeda as an imminent danger to young people in the United States.

But a closer read reveals that the FBI consistently invokes an Islamic threat without naming it. Cultural and religious differences, as well as criticisms of western imperialism, are repeatedly mentioned as risk factors for future extremism. “Some immigrant families may not be sufficiently present in a youth’s life due to work constraints to foster critical thinking,” the guidelines state.

“The document aims to encourage schools to monitor their students more carefully for signs of radicalization but its definition of radicalization is vague,” said Arun Kundnani, author of The Muslims are Coming! Islamophobia, extremism, and the domestic War on Terror and an adjunct professor at New York University. “Drawing on the junk science of radicalization models, the document dangerously blurs the distinction between legitimate ideological expression and violent criminal actions.”

“In practice, schools seeking to implement this document will end up monitoring Muslim students disproportionately,” Kundnani told AlterNet. “Muslims who access religious or political material will be seen as suspicious, even though there is no reason to think such material indicates a likelihood of terrorism.”

The Obama administration’s Countering Violent Extremism (CVE) program is heavily influenced by its British counterpart, which exclusively focuses on spying on Muslim communities and has been deeply controversial from the onset.

Launched in the wake of the 2005 London bombings, the British “Preventing Violent Extremism” (Prevent) program monitors and surveils Muslim communities and people, including mosque-goers and members of community organizations who have committed no wrongdoing. The initiative has been broadly criticized as oppressive and stigmatizing of British Muslims, including by a committee of British lawmakers in 2010.

Yet Prevent has expanded since implementation, and as of summer 2015, British public schools are now mandated to report students for supposed early warning signs of extremism. According to the advocacy organization CAGE, this program has led to the wide-scale interrogation of children without parental consent. Just last month, a Luton high school student was questioned by police for wearing a “Free Palestine” badge.

The first public iteration of the U.S. counterpart to this program emerged five years ago to “address ideologically inspired violence in the Homeland,” uniting a broad array of government agencies, including the FBI and Department of Homeland Security. In 2015, Attorney General Eric Holder announced a CVE summit at the White House and rolled out three “pilot programs” in Boston, Minneapolis and Los Angeles. According to the Council on American-Islamic Relations, these initiatives solely target Muslims in each of these cities.

Muslim communities and human rights campaigners have raised profound concerns about civil rights violations. “Past injustices have taught us to be wary when the government redefines its moral and legal authority in response to overbroad national security concerns,” reads a statement from nearly 50 Muslim organizations in the Minneapolis area. “It is our recommendation that the government stop investing in programs that will only stigmatize, divide and marginalize our communities further.”

But instead, the government is expanding CVE programming into high schools across the country.

Using discredited science to identify danger everywhere

“The whole concept of CVE is based on the conveyor belt theory – the idea that ‘extreme ideas’ lead to violence,” Michael German, a fellow with the Brennan Center for Justice’s Liberty and National Security Program, told AlterNet. “These programs fall back on the older ‘stages of radicalization’ models, where the identified indicators are the expression of political grievances and religious practices.”

The lineage of this model can be traced to the first red scare in America, as well as J. Edgar Hoover’s crackdown on civil rights and anti-war activists. In the post-9/11 era, the conveyor-belt theory has led to the mass surveillance of Muslim communities by law enforcement outfits ranging from the FBI to the New York Police Department.

U.S. government agencies continue to embrace this model despite the fact that it has been thoroughly debunked by years of scholarly research, Britain’s MI5 spy agency and an academic study directly supported by the Department of Homeland Security.

Even the FBI’s new guidelines claim that the agency “does not advocate the application of any psychological or demographic ‘profiles’ or check lists of indicators to identify students on a pathway to radicalization.”

Yet in the same breath, the FBI freely lists “concerning behaviors” that indicate an individual is “progressing on a trajectory to radicalization and/or future violent action in furtherance of an extremist cause.” In other words, the FBI is using new terminology to call for students to be profiled as potential future terrorists.

As Hugh Handeyside, staff attorney for the ACLU’s national security project, told Alternet, “Broadening the definition of violent extremism to include a range of belief-driven violence underscores that the FBI is diving head-first into community spying. Framing this conduct as ‘concerning behavior’ doesn’t conceal the fact that the FBI is policing students’ thoughts and trying to predict the future based on those thoughts.”

If the FBI’s criteria are to be believed, children who exhibit “development delay or disorders, resulting from low quality supportive environments” are at greater risk. So too are the “disenfranchised – student feeling lost, lonely, hopeless, or abandoned.” The FBI calls for greater scrutiny of students with mental health disorders and identifies neighborhoods, families, and socio-economic status as factors to watch out for.

There are already reasons to be concerned about who will be most vulnerable under this mass surveillance plan. In what is popularly known as the “school-to-prison pipeline”, students of color and young people with disabilities are already disproportionately suspended, expelled, arrested and funneled into juvenile prisons for alleged behavioral infractions at school.

The FBI’s instructions to surveil and report young people not for wrongs they have committed, but for violence they supposedly might enact in the future, are likely to promote an intensification of this draconian practice. Using a program initiated to spy on Muslim-American communities, the government is calling for sanctuaries of learning to be transformed into panopticons, in which students and educators are the informers and all young people are suspect.
https://www.salon.com/2016/03/06/the...untry_partner/





Major Sites Including New York Times and BBC Hit by 'Ransomware' Malvertising

Adverts hijacked by malicious campaign that demands payment in bitcoin to unlock user computers
Alex Hern

A number of major news websites have seen adverts hijacked by a malicious campaign that attempts to install “ransomware” on users’ computers, according to a warning from security researchers at Malwarebytes.

The attack, which was targeted at US users, hit websites including the New York Times, the BBC, AOL and the NFL over the weekend. Combined, the targeted sites have traffic in the billions of visitors.

The malware was delivered through multiple ad networks, and used a number of vulnerabilities, including a recently-patched flaw in Microsoft’s former Flash competitor Silverlight, which was discontinued in 2013.

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

Such software, known as ransomware, is fast becoming the most popular kind of malware for criminals to install on compromised computers, beating out lesser threats such as adware or trojans. Earlier this month, the first Mac OS X ransomware appeared, as part of an infected installation of BitTorrent client Transmission.

While “drive-by” installations tend to only demand one or two bitcoins as a ransom, worth a few hundred pounds, more targeted ransomware attacks have demanded much more in payment. An LA hospital was revealed to have paid $17,000 (£12,000) in ransom to an attacker in February.

The vector of attack, through compromised ad networks, will also serve to inflame the debate around adblockers. The browser plugins have been attacked as a “modern-day protection racket” and criticised for harming the business model of free online publications, but users counter that they protect their devices from attacks of this sort, as well as making the web surfing experience faster, more pleasant, and less draining on mobile devices’ batteries.
http://www.theguardian.com/technolog...e-malvertising





Michelle Fields, Ben Shapiro Resign from Breitbart
Hadas Gold

Breitbart reporter Michelle Fields and editor-at-large Ben Shapiro have both resigned from the company, the two announced on Sunday evening. And more are expected in the coming days.

"Nobody wants to stand with [Breitbart Chairman Stephen] Bannon," said one source at the company on Sunday evening. "Besides the senior management and his loyal reporters that provide pro-Trump stenography."

In her statement, Fields cited the company's lack of support over the past week after she became the center of a media and political firestorm following an incident on Tuesday when Donald Trump campaign manager Corey Lewandowski forcibly grabbed Fields' arm to move her away from the candidate as she tried to ask a question. Fields, who has shown pictures of her bruises from the incident, has filed a police report in Jupiter, Florida where the altercation took place.

"Today I informed the management at Breitbart News of my immediate resignation. I do not believe Breitbart News has adequately stood by me during the events of the past week and because of that I believe it is now best for us to part ways," Fields said.

Fields told POLITICO that she has still not been contacted by Bannon since the incident on Tuesday.

BuzzFeed News first reported on the resignations on Sunday night.

Though Breitbart, known for its pro-Trump slant, released some statements saying it stood behind its reporter and asking Lewandowski to apologize, the site also published an article which initially suggested, based on incomplete video of the interaction, that the entire incident was a case of mistaken identity. But after further video surfaced showing that Lewandowski reached for Fields, the site updated the story. Soon after, the site's spokesman Kurt Bardella dropped Breitbart as a client of his communications company, saying he could no longer 100 percent stand behind the website.

Shapiro, who has adamantly defended Fields on television and online, said the website has acted in a "disgusting" manner and has betrayed its founder, Andrew Breitbart's legacy.

"Andrew built his life and his career on one mission: fight the bullies. But Andrew’s life mission has been betrayed. Indeed, Breitbart News, under the chairmanship of Steve Bannon, has put a stake through the heart of Andrew’s legacy," Shapiro said. "In my opinion, Steve Bannon is a bully, and has sold out Andrew’s mission in order to back another bully, Donald Trump; he has shaped the company into Trump’s personal Pravda, to the extent that he abandoned and undercut his own reporter, Breitbart News’ Michelle Fields, in order to protect Trump’s bully campaign manager, Corey Lewandowski, who allegedly assaulted Michelle."

"Both Lewandowski and Trump maligned Michelle in the most repulsive fashion. Meanwhile, Breitbart News not only stood by and did nothing outside of tepidly asking for an apology, they then attempted to abandon Michelle by silencing staff from tweeting or talking about the issue. Finally, in the ultimate indignity, they undermined Michelle completely by running a poorly-evidenced conspiracy theory as their lead story in which Michelle and [Washington Post reporter Ben] Terris had somehow misidentified Lewandowski," Shapiro said.

Unhappiness at Breitbart over the site's overtly pro-Trump stance had been simmering for months. Those divisions erupted this past week, as some Breitbart staff were upset at what they saw as a lack of a full-throated defense of their own reporter. Staff were told not to post in support of Fields on social media after the incident, and two sources with direct knowledge of the calls told POLITICO Bannon made disparaging remarks about Fields in conference calls following the incident. (Breitbart denied Bannon made those remarks).

Other staff at Breitbart have been circulating resumes, though Breitbart's strict contracts may hamper some from immediately leaving, sources at the company said. But the internal strife has even popped up in the site's own articles. One article, about Trump's claim that the man who rushed the stage at a rally in Ohio was connected to ISIS, notes that "Breitbart News had quoted Trump in three different stories without contradicting the claim, but investigation reveals a number of disparities that obviously call that claim into question."

"This truly breaks my heart," Shapiro said at the end of his statement. "But, as I am fond of saying, facts don’t care about your feelings, and the facts are undeniable: Breitbart News has become precisely the reverse of what Andrew would have wanted. Steve Bannon and those who follow his lead should be ashamed of themselves."

Breitbart management did not immediately respond to requests for comment on Sunday evening.
http://www.politico.com/blogs/on-med...eitbart-220709





POLITICO Reporter Denied Access to Trump Event
Hadas Gold

POLITICO reporter Ben Schreckinger was denied entry to Donald Trump's press conference on Tuesday night, despite having previously been granted credentials by the campaign.

The move followed a threat last week from Trump officials to exclude POLITICO reporters from campaign events.

On Tuesday morning, Schreckinger, who has covered the campaign regularly for more than six months, received an email granting him credentials for Trump's speech and press conference at his Mar-a-Lago club in Florida that evening. But less than 10 minutes later, another email arrived saying those same credentials were denied. Upon arriving at Trump's private club, he was denied entry and escorted off of the property.

Schreckinger, whose latest story on Trump's campaign was a report on concerns about campaign manager Corey Lewandowski's temperament and behavior, never received an explanation as to why his credentials had been denied. Neither Lewandowski nor Trump spokeswoman Hope Hicks responded to requests for comment.

"I’m saddened by the personal nature of the Trump campaign’s attack on an excellent reporter, Ben Schreckinger. The campaign provided no explanation for barring our reporter from Donald Trump’s speech tonight. If this is the response to honest, fair, and sometimes critical reporting – like today’s piece on Campaign Manager Corey Lewandowski — it certainly will not intimidate POLITICO as we cover the campaign in the days ahead," POLITICO editor Susan Glasser said in a statement.

POLITICO is far from alone among media organizations being denied entry to Trump events. The Des Moines Register, Univision, Fusion, The Huffington Post, National Review, Mother Jones and BuzzFeed have all been denied credentials to Trump's events, often after publishing critical stories about the campaign. In January, New York Times reporter Trip Gabriel was ejected from an event in Iowa after writing about Trump's weak ground game in the state, which he eventually lost to Ted Cruz.
http://www.politico.com/blogs/on-med...d-trump-220836





CBS Plans to Sell Radio Station Group
Cynthia Littleton

CBS plans to sell or spin off its radio assets in the coming year, acknowledging that the business has become slow-growth and a drain on resources that can be better directed to content production and digital endeavors.

CBS chairman-CEO Leslie Moonves confirmed the plan Tuesday during CBS’ Investor Day presentation in New York.

Moonves said CBS would explore a variety of alternatives for the group, including a sale, swap or spinoff, just as CBS spun off its outdoor advertising group in 2014. The goal, he said, is to “unlock value for our shareholders,” but he vowed to the crowd of Wall Streeters that “we will be prudent and judicious as we go, as we are in all such endeavors.”

The Eye owns 117 stations in 26 markets, including clusters in such top markets as New York, Los Angeles, Chicago, Philadelphia, Boston, San Francisco and Washington, D.C.

In recent months Moonves has hinted that CBS was leaning toward selling some or all of its stations. CBS took a $484 million write-down on the value of its radio station group in the fourth quarter, citing “the sustained decline in industry projections for the radio advertising marketplace since 2014.”

Cumulus Media, the nation’s second-largest radio station owner behind IHeartMedia, has been mentioned as a potential buyer of the CBS group.

CBS Corp.’s roots in radio stretch back to the mid-1920s, when William S. Paley began his quest to turn a ragtag group of stations into the most powerful network in the country. In 1996 CBS’ holdings expanded dramatically with the acquisition of Mel Karmazin’s Infinity Broadcasting, which brought 44 major-market stations into the fold.

In recent years CBS has been paring down its station holdings in smaller markets and those markets where it does not own any TV stations.
https://variety.com/2016/tv/news/cbs...rs-1201730033/





A Push for Less Expensive Hearing Aids
Paula Span

I know my television is too loud.

I’m asking people to repeat themselves more often.

I’m the restaurant patron asking the manager to please turn down the music so I can hear my friends across the table.

Almost two-thirds of Americans over age 70 have meaningful hearing loss, experts say, and I probably will be among them. I should do something about it.

One reason I haven’t is the average price for hearing aids: roughly $2,500, often more — and most of us need two. That helps explain why only 20 percent of those with hearing loss use hearing aids.

Medicare declines to cover a number of products and services that older beneficiaries need. Dental care ranks high on my personal list of exclusions that make the least sense, but the fact that the 1965 Medicare law specifically prohibits the national insurance program from paying for hearing aids is also a strong contender.

So it’s heartening to notice some recent developments that might lead to more rational policies and more affordable and accessible devices.

■ An October report by the President’s Council of Advisors on Science and Technology recommended federal actions to “simultaneously decrease the cost of hearing aids, spur technology innovation and increase consumer choice options.”

The council suggested, for example, that the Food and Drug Administration permit a “basic” hearing aid, for mild to moderate age-related hearing loss, to be sold over the counter — something every state now prohibits.

The report also urged the Federal Trade Commission, whose rulings enabled consumers to comparison shop for eyeglasses and contact lenses, to treat hearing devices more like visual ones. “It should be like a prescription for eyeglasses,” said Dr. Christine Cassel, co-chairwoman of the council’s hearing technologies working group.

The hearing aid itself represents only about a third of what audiologists charge. (Medicare does cover testing with a physician’s referral.)

After an audiologist or physician provides an audiogram assessing your hearing, Dr. Cassel said, you should be able “to take it with you and shop around for the best prices” on devices.

■ In June, the Institute of Medicine will issue a report on hearing health that tackles key questions like federal regulation, insurance and price. A number of major players — among them the Centers for Disease Control and Prevention, the National Institute on Aging and the Pentagon — have sponsored the yearlong effort.

■ The F.D.A., acting on recommendations by the president’s council, will host a public workshop next month to consider whether its hearing aid regulations “may hinder innovation, reduce competition, and lead to increased cost and reduced use.”

The agency has also reopened public comments on proposed regulation of so-called personal sound amplification products and their marketing.

Reports, comments, workshops — we can be forgiven for rolling our eyes and wondering if anything useful will emerge. Still, these actions represent a greater national focus on hearing loss and rehab than we have seen in decades.

What’s driving this interest, apart from the demographic bulge that means the hearing-impaired population is about to get much larger, is a wave of new research.

Congress barred Medicare coverage of hearing aids 50 years ago because “people thought hearing loss was just a normal part of aging,” said Dr. Cassel, one of the authors of a recent JAMA editorial on hearing health policies. “They didn’t see it as a disability or a medical problem.”

But we’re learning that, however normal, hearing loss can have significant consequences.

Older adults with poor hearing report a greater number of falls than those with normal hearing, a Finnish study found. American researchers have demonstrated a similar association in those aged 40 to 69.

Older adults with hearing loss are also more apt to report periods of poor physical and mental health, and to be hospitalized.

Perhaps most disturbing, studies also show a relationship between hearing loss — mild, moderate or severe — and accelerated rates of cognitive decline. Older people with hearing loss also are more likely than those with normal hearing to develop dementia.

How can aging ears affect so many other aspects of our health? Dr. Frank Lin, an otolaryngologist and epidemiologist at Johns Hopkins University who has led many of these research efforts, points to several possible causes. With diminished hearing, “your brain is constantly having to work harder to process garbled sounds” — a concept called cognitive load — and may have less capacity for other mental tasks.

Alternatively, hearing loss may lead to changes in brain structure. In one of Dr. Lin’s studies, M.R.I.s showed greater brain atrophy among those with poor hearing.


A struggle to hear can also lead to isolation, and “we’ve known for years that social connectedness is important for cognitive health,” Dr. Lin added.

Technology put the Internet in our pockets, but hasn’t done much to affordably improve our hearing.

“In every other aspect of our lives, advances in electronic technology have made things cost much, much less,” Dr. Cassel said. “That hasn’t happened with hearing aids.”

Almost annually, she noted, “some congressperson gets energized about this and tries to pass legislation” to remove Medicare’s hearing aid restriction. Last year, it was Representative Debbie Dingell, Democrat of Michigan, with six Democratic co-sponsors. The bill stalled in committee, but “we are not giving up the fight,” Representative Dingell said in an email.

Driving down the cost of the devices could make them more widely available in several ways.

“If it’s $200 instead of $2,000, more people could pay out of pocket,” Dr. Cassel said. “And that also means Medicare might cover it” — unlikely at current prices.

It’s clearly possible to provide good devices for far less than we now pay. The Department of Veterans Affairs, which negotiates with manufacturers for lower prices, provided comprehensive hearing care to more than 900,000 veterans in 2014 and dispensed almost 800,000 hearing aids without copays. The average cost per device: $400.

Price isn’t the only obstacle to wider use. In European countries where insurance does cover hearing aids, they’re still underused. Clearly, our discomfort with age-related disability plays a role.

So do the shortcomings of hearing aids. Though they’re improving, “no technology will ever correct hearing loss like glasses correct vision,” Dr. Lin said.

As hearing declines with age, the cochlea, the part of the inner ear that receives and transmits sound, sustains irreversible damage.

Still, the way we acquire hearing aids, or don’t, has costs beyond the obvious. Daunted by the multiple visits, the adjustments and especially the expense, people often delay for years while their mild or moderate hearing loss worsens.

Over that time, “you’ve lost some of the neural pathways from the ear to the brain,” Dr. Lin said. “With longstanding hearing loss, rehabilitation is much harder. The earlier you address it, the easier it is and the more successful you can be.”
http://www.nytimes.com/2016/03/15/he...ring-aids.html





Kim Kardashian, Her Selfie and What It Means for Young Fans
Katie Rogers

It would seem that for someone like Kim Kardashian West — queen of selfies, breaker of Internets, mother of two — sharing a nearly nude selfie with millions of followers on social media is a pretty ho-hum weekday activity.

But her latest racy photo, published last week on Twitter with a mundane caption (“When you’re like I have nothing to wear LOL”), quickly drew a mix of young, powerful celebrities into a debate over whether sharing such an image is a symbol of sexual empowerment, or an example of a powerful woman selling herself short.

In a climate where the harassment and bullying of young women online has reached alarming levels, experts say the encouragement of freedom of healthy sexual expression among women is increasingly necessary. But, they say, so is discussion about what, exactly, constitutes empowerment, and why.

Though Ms. Kardashian West could share a revealing nude photo and be praised (and even paid) for her efforts, her legions of young fans might be exploited, harassed or shamed if they try the same thing. Nor do they have valuable brands to burnish. Even Ms. Kardashian West is not totally protected: In addition to receiving thousands of messages that call her “queen” and “mom,” she faces a regular onslaught of abusive messages.

“When Kim does things like this, she’s ‘working,’ ” Pamela Rutledge, director of the Media Psychology Research Center, said, adding, “Young people need to understand that her celebrity comes from being out of the ordinary, but that is only sustainable by continuing to push the envelope.”

Ms. Kardashian West’s mastery of social media marketing was underscored by the debut of her Snapchat account — a new platform and a new revenue source — even as she dressed down critics like Bette Midler.

She drew support from other celebrity women who saw the criticism as an affront to self expression. “Anybody who tries to say how a woman chooses to display their own body is wrong, is severely misinformed and misguided,” the actress Abigail Breslin, 19, wrote on Twitter.

Another 19-year-old actress, Chloë Grace Moretz, was met with accusations of body shaming when she expressed a differing opinion: “I truly hope you realize how important setting goals are for young women, teaching them we have so much more to offer than just our bodies,” Ms. Moretz said in a message to Ms. Kardashian West.

Ms. Kardashian West, who is 35, replied: “Let’s all welcome @ChloeGMoretz to twitter, since no one knows who she is.” She also referred to a Nylon magazine cover in which Ms. Moretz appeared partially nude.

As the famous women debated in public, the narrative began to shift from “why did she” to “why can’t she”? Ms. Kardashian West can, and she has, and she’ll probably do it again, all the way to the bank: She interrupted the debate to say her video game made $80 million.

Though she might flaunt her wealth and power on social media, the underlying issue, she wants the world to know, is that she is tired of being shamed for a 2003 sex tape that made her famous.

“I will not live my life dictated by the issues you have with my sexuality. You be you and let me be me,” she wrote in an essay on her personal app ($2.99 a month.)

Ms. Kardashian West has learned to harness past harassment into a moneymaking machine, and therefore control her image. She is both sexually empowered and, as she would say, liberated. Young women without the same access to power echo that behavior when their photos fall into the wrong hands, according to the author Nancy Jo Sales, who wrote “American Girls: Social Media and the Secret Lives of Teenagers.”

“It’s this kind of weird kind of feminism by defense,” Ms. Sales said, adding that many girls react by saying, “‘Well go ahead, I’ve already been exploited.’”

Part of a chapter in Ms. Sales’s book, for which she interviewed over 200 young women, ages 13 to 19, is devoted to how these girls feel about Ms. Kardashian West, who is an idol for many. (“You’ve inspired me to be hot and famous,” one of them says to her at a book signing.)

Ms. Kardashian West’s fan base (41 million on Twitter alone) is devoted, and mobilized to support her in the face of criticism. While thousands were amused by Ms. Kardashian West’s exchange with the 19-year-old Ms. Moretz, Ms. Sales said that this interaction highlighted a frequent problem girls face: If they question the sexual status quo, they are often told their opinions are invalid.

“There are a lot of ways you can respond to a teenage girl,” Ms. Sales said, “other than tell her that she simply doesn’t matter.”

The pressure on girls to “send nudes” — a request that can be sent in a Valentine’s Day-style candy heart on Ms. Kardashian West’s custom mobile keyboard app (also $2.99) — is so constant, Ms. Sales said, that sometimes the pressure to send a photo can turn into outright threats.

“For Kim it translates into money,” Ms. Sales said. “That’s not the case when you’re a middle schooler growing up in some little town somewhere where everybody knows you and can see that picture.”
http://www.nytimes.com/2016/03/15/st...lfie-fans.html





Teenage Drivers? Be Very Afraid
Bruce Feiler

Spend enough time having parenting conversations, as I’ve done personally and professionally for the last dozen years, and certain patterns emerge. In nine out of 10 cases, if you’re talking about highly motivated parents, the message to Mom and Dad is: back off, chill out, park the helicopter.

Whether you want your children to be independent, resilient, creative; whether you’re talking to teachers, psychologists, grandparents; whether you’re discussing homework, food, sports; the recommendation, time and again, is relax.

Recently, I stumbled onto a topic in which the advice was the exact opposite.

Among the people who know what they are talking about, the unanimous message to parents is: You’re not worried nearly enough. Get much more involved. Your child’s life may be in danger.

What’s the topic? Teenage driving.

“If you’re going to have an early, untimely death,” said Nichole Morris, a principal researcher at the HumanFIRST Laboratory at the University of Minnesota, “the most dangerous two years of your life are between 16 and 17, and the reason for that is driving.”

Among this age group, death in motor vehicle accidents outstrips suicide, cancer and other types of accidents, Dr. Morris said. “Cars have gotten safer, roads have gotten safer, but teen drivers have not,” she said.

In 2013, just under a million teenage drivers were involved in police-reported crashes, according to AAA. These accidents resulted in 373,645 injuries and 2,927 deaths, AAA said. An average of six teenagers a day die from motor vehicle injuries, according to the Centers for Disease Control and Prevention.

Charlie Klauer, a research scientist at the Virginia Tech Transportation Institute, said her research suggested the numbers were even higher because many teenage accidents go unreported. “We believe one in four teens is going to be in a crash in their first six months of driving,” Dr. Klauer said.

How to address this problem is not as simple as it seems, especially as technology has taken over teenagers’ lives.

One father I know bought his son a manual-transmission car because it required him to use two hands, to eliminate the option of using a cellphone. I recently overheard a conversation between my sister and her 16-year-old son in which she reminded him not to text while driving, and he replied, “But I’m using Google Maps, and the text pops up automatically on the screen.”

So what’s a parent to do, especially one who knows teenagers are always one step ahead of any rules they try to impose?

FRIENDS DON’T LET FRIENDS DRIVE WITH THEM When I asked Dr. Morris what parents should be most worried about, she answered definitively, “Other passengers.” Adding one nonfamily passenger to a teenager’s car increases the rate of crashes by 44 percent, she said. That risk doubles with a second passenger and quadruples with three or more.

Most states have what are called “graduated driver’s licenses,” meaning some combination of learner’s permit, followed by a six-month or so intermediate phase, followed by a full permit. Restricting the number of passengers who are not family members is among the most common regulations in the early phases, but Dr. Morris said most parents disregard the rule once that time expires.

That’s a huge mistake, she said. “Even if your state drops the non-familiar-passenger restriction after six months, parents should make it their own rule,” Dr. Morris said.

Distraction is highest when boys ride with other boys, she said, whereas boys actually drive safer when girls are in the car. Altogether, passengers are a greater threat than cellphones, she believes. “Your cellphone isn’t encouraging your teen to go 80 in a 50, or 100 in a 70,” she said.

TURN OFF NOTIFICATIONS Phones are still a huge problem, though.

Dr. Klauer has done three studies in which she placed video cameras in cars and monitored drivers for a year. Even when teenagers know they’re being monitored, they still use their telephones for texting, talking or checking Facebook at least once every trip, including trips of only a few blocks.

“Teens’ prevalence for engaging their devices is higher than other age groups,” she said, “and their risk for being involved in a crash when they do is higher.”

Even if the phone is tucked away in a pocket or backpack, enticing beeps or ringtones make it hard to resist. Dr. Klauer recommends blocking all notifications before even getting in the car. “You’re more likely to do it if you’re sitting calmly at home,” she said. “In the moment, it’s really hard not to look at the screen.”

THE TWO-SECOND RULE If your child insists on using the phone for navigation or listening to music, the research suggests there’s only one safe place for it to be: in a dock, at eye level, on the dashboard. The worst places? The cup holder, the driver’s lap, the passenger’s seat.

“The real enemy is taking your eyes off the forward roadway,” Dr. Klauer said. “Anything more than two seconds is extremely dangerous. The longer you look away, the worse it gets.”

Though she’s skeptical young drivers actually need navigation for most trips, Dr. Klauer said audible turn-by-turn directions are preferable to paper maps, because there’s less rustling in your lap. Similarly, streaming music has advantages over flipping radio channels, as long as the driver is not selecting each individual song.

EVERY TIME IS A DANGEROUS TIME Just because technology has introduced threats doesn’t mean the old threats like drinking or driving at night have gone away. In 2013, almost a third of teenage drivers killed in crashes had been drinking, the Transportation Department found. Also, safety experts say, driving late at night is much more dangerous than during the day.

Jennifer Ryan, the director of state relations at AAA, told me the organization recommends that teenagers not be allowed to drive between 9 p.m. and 5 a.m. for the first six months of having their license. “We encourage parents to go beyond that if they don’t feel their teen is ready,” she said.

To help navigate these issues, AAA has a sample contract parents and young drivers can sign, with consequences agreed in advance.

BELLS AND WHISTLES ARE A PARENT’S BEST FRIEND Over all, teenage driving deaths have been declining in recent years, though specialists attribute the decline to improved safety features on roads, such as more impact-resistant median barriers, and to smarter technology in cars, including automated brakes, airbags, forward collision warning systems and lane departure warning systems.

Dr. Morris encouraged parents to adopt as many of the safety features as possible. “I did not grow up in a wealthy family,” she said. “I drove a $3,000 car when I was in high school. But if the idea is that these bells and whistles aren’t necessary for teens, I would argue against that. I know it’s expensive for parents, but any advanced safety feature is well worth the money and peace of mind.”

BE A BACK-SEAT PARENT The most surprising thing I learned is how passionately researchers believe that parents are not doing nearly enough to supervise their children. “Our studies show that the more the parent is involved when a teen is learning, the lower their chances are for a crash,” Dr. Morris said. “That means asking questions, supervising them, giving them opportunities on different types of roads under different conditions.” The mistake parents often make, she said, is thinking, “Finally I don’t have to car-pool you everywhere!”

Dr. Klauer said that in her studies she would send video snippets to parents when their children violated the law. When parents looked at the clips and discussed them with their teenagers, driving improved. The only problem: Half the parents never even looked at the warnings. “I know you trust your child,” Dr. Klauer said. “But if you’re not paying attention, chances are they’re not driving as safely as you think they are.”

The bottom line: Teenage driving may be that rare outlier when it comes to parenting. As soon as you give your children the keys to the car, it may be time to pull the helicopter out of the hangar for a spell and follow them down the road.
http://www.nytimes.com/2016/03/20/fa...g-parents.html





Young People Would Rather Have An Internet Connection Than Daylight
Lucy Sherriff

The average young person in Britain thinks having access to the internet is more important than daylight, according to a new poll.

British youths aged between 18 and 25 were asked to identify five things which they felt were important to maintain their quality of life.

Freedom of speech topped the list, picked by 81% of the 2,465 surveyed. Nearly seven in 10 (69%) chose an internet connection, followed by 64% saying daylight and 57% choosing hot water.

@sherrifflucy Fair enough, really. My flat gets basically no daylight, but I’d rather have that than have to work in cafes all the time.
— Tom Slominski (@tomslominski) March 17, 2016

Only 37% said a welfare system - including the NHS - was important, with a measly 11% choosing a good night's sleep.

The respondents who identified an internet connection as one of the most important aspects were asked how many times they used the internet every day. The average answer was 78 times.

The youths were also asked to identify what they would most like to change in order to improve their quality of life. The most popular answer (34%) was holidays, followed by more sleep (28%) and "having a bigger following on social media" (14%).

The study was, rather fittingly, conducted by blinds company Hillarys.

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

March 12th, March 5th, February 27th, February 20th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black