P2P-Zone  


Peer to Peer The 3rd millennium technology!

Old 20-06-18, 06:22 AM   #1
JackSpratts
 
JackSpratts's Avatar
 
Join Date: May 2001
Location: New England
Posts: 10,016
Default Peer-To-Peer News - The Week In Review - June 23rd, ’18

Since 2002



"We decline to grant the state unrestricted access to a wireless carrier’s database of physical location information." – John G. Roberts Jr., Chief Justice, United States Supreme Court



June 23rd, 2018




‘Incredibles 2’ Sells a Record-Setting $180 Million in Tickets
Brooks Barnes

Wither Pixar? Not on Elastigirl’s watch.

“Incredibles 2” arrived to a jaw-dropping $180 million in ticket sales at North American theaters over the weekend — roughly 30 percent more than box-office analysts had predicted early last week — giving Pixar a confidence boost following the forced departure of its creative chief, John Lasseter, earlier this month. “Incredibles 2” received an A-plus grade from ticket buyers in CinemaScore exit polls.

The opening total set a box-office record for an animated release. The touting of sales records by movie studios is usually meaningless spin; they don’t take inflation into account. But not in this case: Even after accounting for higher ticket prices, “Incredibles 2” beat Hollywood’s previous record-holder, “Shrek the Third” (DreamWorks Animation), which collected an adjusted $151 million in 2007, according to comScore data.

The thundering turnout for “Incredibles 2” reflected pent-up demand. The film returns the superheroic Mr. Incredible and his quick-thinking wife, Elastigirl, to big screens after a 14-year hiatus — this time with her in the forefront. Animated movies have also been in short supply, in part because of an ongoing retrenchment at DreamWorks, which was sold to NBCUniversal in 2016. The last animated blockbuster was Pixar’s “Coco,” which arrived in November and took in $807 million worldwide.

“Incredibles 2,” which cost Pixar’s corporate parent, the Walt Disney Company, at least $300 million to make and market worldwide, played more like a broad action film than a PG-rated cartoon. About 25 percent of the audience was over the age of 35, according to Disney, which is planning more “Incredibles” installments as part of an ongoing franchise.

“The ‘Avengers’ crowd went to see this movie — it wasn’t just 7-year-old kids,” Greg Foster, the filmed entertainment chief of Imax, the large-format theater chain, said by telephone on Sunday morning. Mr. Foster said that “Incredibles 2” sold about $14.1 million in tickets at Imax theaters in the United States and Canada over the weekend, setting an all-time Imax animation record.

Mr. Foster credited the movie’s writer-director, Brad Bird, for delivering a sequel that received a rapturous response from critics and positive word of mouth on social media. Mr. Foster also noted that the “Incredibles” characters are now favorites for multiple generations: People who saw the original film as children are now parents.

The original “Incredibles,” also directed and written by Mr. Bird, arrived in 2004 to about $96 million in today’s dollars and generated $860 million total. It won the 2005 Oscar for best animated feature.

Disney needed “Incredibles 2” to succeed. Although it has dominated the box office in recent years, Disney suffered a major setback last month, when its expensive “Solo: A Star Wars Story” crashed and burned. After four weeks of release, “Solo” has taken in about $193 million — not chump change, but the equivalent of a bomb by “Star Wars” standards.

Then Disney announced on June 8 that Mr. Lasseter would not return from a “sabbatical” that started in October, when he stepped down citing unspecified “missteps” that made some staffers feel “disrespected or uncomfortable.” Mr. Lasseter co-founded Pixar and has been the creative force behind the billion-dollar “Toy Story,” “Cars” and “Frozen” franchises.

“Incredibles 2” had little competition at the domestic box office over the weekend. (It collected a promising $51.5 million in limited release overseas, according to comScore.) Second place went to “Ocean’s 8” (Warner), which collected about $20 million, for a two-week total in North America of $79.2 million. Warner also had the third-place film, “Tag,” which arrived to $14.6 million in estimated ticket sales.

An R-rated comedy with an ensemble cast, “Tag” cost about $28 million to make and at least $30 million to market. Warner hopes the film will perform like “Game Night,” which arrived to a muted $17 million in February but quietly generated nearly $70 million over its run. Like that movie, “Tag” received a B-plus CinemaScore.

Also of box-office note: Another superhero, at least to octogenarians and the political left, has breathed life into the documentary marketplace: “RBG,” a Participant Media and Magnolia Pictures film about Ruth Bader Ginsburg, crossed the $10 million mark, one of the best results for a film of its kind in years. Participant supported “RBG” with an aggressive effort to tie the film about the Supreme Court justice to social issues including gender parity, ultimately engaging more than 400 organizations.

“RBG” is the first of two films about Justice Ginsburg that Participant has planned for this year. The second, a biographical drama called “On the Basis of Sex,” will arrive via Focus Features in November. It stars Felicity Jones.
https://www.nytimes.com/2018/06/17/m...animation.html





YouTube is Working to Restore Accidentally Blocked Videos from MIT and Others
Manish Singh

Can’t find your favorite lecture on YouTube? You’re not alone. Numerous videos have disappeared from YouTube over the last few days, puzzling creators and viewers alike.

A handful of popular channels, including those of MIT OpenCourseWare (which has north of 1.5 million subscribers), Press Bureau of India, Jamendo Music, and Blender Foundation (with over 190,000 subscribers), are affected.

In a series of tweets posted over the last three days, the MIT OpenCourseWare team acknowledged that videos had disappeared from its YouTube channel. “Please stand by. The elves are working around the clock to fix the issue,” it said, directing impatient students to head over to its official website or Internet Archive in the meantime.

Ton Roosendaal, the chairman of Blender Foundation, tweeted, “Everything in the Foundation channel is entirely blocked, worldwide.” Blender Foundation, a nonprofit organization, offers free and open source toolsets for creating interactive 3D applications and films.

A YouTube spokesperson told VentureBeat, “Videos on a limited number of sites have been blocked as we updated our partner agreements. We are working with MIT OpenCourseWare and Blender Foundation to get their videos back online.”

YouTube, which as of last month had 1.8 billion monthly logged-in viewers across its website and seven apps, is understandably receiving flak from some users over the incident.
https://venturebeat.com/2018/06/18/y...it-and-others/





AT&T, Comcast Try to Weaken California Net Neutrality Law
Karl Bode

AT&T, Comcast, and Verizon lobbyists are working in concert to try to weaken California's tough new net neutrality law before it can pass through the state legislature. California State Senator Scott Wiener's SB 822 was recently approved by the California Senate Energy Committee and now moves on to the Judiciary Committee. The proposal has been called the "gold standard" by groups like the EFF, and goes notably further than even the modest FCC rules did by addressing things like "zero rating" (using usage caps anti-competitively).

Federal FCC net neutrality rules expired as of June 11 after a historically unpopular lobbying power play by major ISPs, and many states are now rushing to try to protect consumers.

Needless to say, AT&T, Verizon and Comcast lobbyists and executives don't much like that, and according to a new report by the EFF are engaged in a last-minute bid to weaken the rules before the bill can pass.

"California’s legislature has so far opted to ban discriminatory uses of zero rating and prevent the major wireless players from picking winners and losers online," notes the EFF. "But new and increased resistance by the ISP lobby (led by AT&T and their representative organization CALinnovates) unfortunately has legislators contemplating whether discriminatory zero rating practices should remain lawful despite their harms for low-income Internet users."

"In fact, AT&T and their representatives are even going so far as to argue that their discriminatory self-dealing practices that violate net neutrality are actually good for low income Internet users," says the EFF.

The idea that usage caps, overage fees and zero rating help poor people is something AT&T lobbyists (and their friends like FCC boss Ajit Pai) have been insisting on for years. Of course, AT&T's goal has always been anti-competitive: it exempts its own content from usage caps while penalizing competitors like Netflix, in the process driving up costs for users who veer too far away from AT&T's own content or services.

In fact, the last FCC clearly stated that AT&T's use of zero rating was anti-competitive. That realization, however, came too late, and once Trump was elected the new FCC declared such arbitrary limits and gamesmanship perfectly acceptable. Whether California lawmakers can be cajoled into buying into AT&T's logic remains to be seen.

"Upholding S.B. 822 means upholding a free, open Internet for all Californians," the EFF notes. "Without it, ISPs may have free rein to create two Internets that will be premised on how much income you have to the benefit of their own services and partners. With AT&T's recent victory in the courts over the Department of Justice and the expiration of federal net neutrality rules, S.B. 822's net neutrality protections have become more important than ever."
http://www.dslreports.com/shownews/A...ity-Law-142017





Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So
Steve Lohr

For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing massive amounts of data. Thanks to deep learning, computers can easily identify faces and recognize spoken words, making other forms of humanlike intelligence suddenly seem within reach.

Companies like Google, Facebook and Microsoft have poured money into deep learning. Start-ups pursuing everything from cancer cures to back-office automation trumpet their deep learning expertise. And the technology’s perception and pattern-matching abilities are being applied to improve progress in fields such as drug discovery and self-driving cars.

But now some scientists are asking whether deep learning is really so deep after all.

In recent conversations, online comments and a few lengthy essays, a growing number of A.I. experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now — and disillusionment later.

“There is no real intelligence there,” said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding A.I. “And I think that trusting these brute force algorithms too much is a faith misplaced.”

The danger, some experts warn, is that A.I. will run into a technical wall and eventually face a popular backlash — a familiar pattern in artificial intelligence since that term was coined in the 1950s. With deep learning in particular, researchers said, the concerns are being fueled by the technology’s limits.

Deep learning algorithms train on a batch of related data — like pictures of human faces — and are then fed more and more data, which steadily improve the software’s pattern-matching accuracy. And while the technique has spawned successes, the results are largely confined to fields where those huge data sets are available and the tasks are well defined, like labeling images or translating speech to text.
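The data appetite described above can be illustrated with a deliberately tiny stand-in: a one-layer logistic classifier (not a deep network) trained on synthetic two-class data, where held-out accuracy climbs as the training set grows. Everything here, the data, the model and the numbers, is invented for illustration and is not from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_blobs(n):
    # n examples per class of 5-dimensional synthetic data.
    X = np.vstack([rng.normal(-1, 1, (n, 5)), rng.normal(1, 1, (n, 5))])
    y = np.array([0] * n + [1] * n)
    return X, y

def train(X, y, steps=1000, lr=0.1):
    # Plain full-batch gradient descent on the logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

X_test, y_test = make_blobs(2000)      # held-out evaluation set
accs = {}
for n in (10, 100, 1000):
    X, y = make_blobs(n)
    w, b = train(X, y)
    accs[n] = np.mean(((X_test @ w + b) > 0) == y_test)
    print(f"{2 * n:5d} training examples -> test accuracy {accs[n]:.3f}")
```

The same scaling behavior, only sharper, is what drives deep networks: accuracy keeps improving with data, but only on tasks where such labeled data exists in bulk.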

The technology struggles in the more open terrains of intelligence — that is, meaning, reasoning and common-sense knowledge. While deep learning software can instantly identify millions of words, it has no understanding of a concept like “justice,” “democracy” or “meddling.”

Researchers have shown that deep learning can be easily fooled. Scramble a relative handful of pixels, and the technology can mistake a turtle for a rifle or a parking sign for a refrigerator.
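The pixel-scrambling attacks mentioned here exploit the fact that a learned decision boundary can be crossed by many tiny, coordinated input changes. A minimal sketch of the idea, using a plain logistic classifier rather than a deep network; the data and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian blobs in 20 dimensions stand in for "image classes".
X0 = rng.normal(-1.0, 1.0, size=(200, 20))
X1 = rng.normal(+1.0, 1.0, size=(200, 20))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Train a logistic classifier by gradient descent.
w, b = np.zeros(20), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

# Pick the most confidently classified class-1 example.
x = X1[np.argmax(X1 @ w + b)]
score = x @ w + b                      # positive: classified "1"

# Nudge every feature by the same eps in the adversarial direction
# (the sign of its weight), just enough to cross the boundary.
eps = 1.1 * score / np.abs(w).sum()
x_adv = x - eps * np.sign(w)
adv_score = x_adv @ w + b              # negative: misclassified
```

No single feature changes much, yet the sum of the coordinated nudges flips the prediction; in high-dimensional image models the per-pixel change can be imperceptibly small.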

In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”

If the reach of deep learning is limited, too much money and too many fine minds may now be devoted to it, said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence. “We run the risk of missing other important concepts and paths to advancing A.I.,” he said.

Amid the debate, some research groups, start-ups and computer scientists are showing more interest in approaches to artificial intelligence that address some of deep learning’s weaknesses. For one, the Allen Institute, a nonprofit lab in Seattle, announced in February that it would invest $125 million over the next three years largely in research to teach machines to generate common-sense knowledge — an initiative called Project Alexandria.

While that program and other efforts vary, their common goal is a broader and more flexible intelligence than deep learning. And they are typically far less data hungry. They often use deep learning as one ingredient among others in their recipe.

“We’re not anti-deep learning,” said Yejin Choi, a researcher at the Allen Institute and a computer scientist at the University of Washington. “We’re trying to raise the sights of A.I., not criticize tools.”

Those other, non-deep learning tools are often old techniques employed in new ways. At Kyndi, a Silicon Valley start-up, computer scientists are writing code in Prolog, a programming language that dates to the 1970s. It was designed for the reasoning and knowledge representation side of A.I., which processes facts and concepts, and tries to complete tasks that are not always well defined. Deep learning comes from the statistical side of A.I. known as machine learning.

Benjamin Grosof, an A.I. researcher for three decades, joined Kyndi in May as its chief scientist. Mr. Grosof said he was impressed by Kyndi’s work on “new ways of bringing together the two branches of A.I.”

Kyndi has been able to use very little training data to automate the generation of facts, concepts and inferences, said Ryan Welsh, the start-up’s chief executive.

The Kyndi system, he said, can train on 10 to 30 scientific documents of 10 to 50 pages each. Once trained, Kyndi’s software can identify concepts and not just words.

In work for three large government agencies that it declined to disclose, Kyndi has been asking its system to answer this typical question: Has a technology been “demonstrated in a laboratory setting”? The Kyndi program, Mr. Welsh said, can accurately infer the answer, even when that phrase does not appear in a document.

And Kyndi’s reading and scoring software is fast. A human analyst, Mr. Welsh said, might take two hours on average to read a lengthy scientific document, and perhaps read 1,000 in a year. Kyndi’s technology can read those 1,000 documents in seven hours, he said.

Kyndi serves as a tireless digital assistant, identifying the documents and passages that require human judgment. “The goal is increasing the productivity of the human analysts,” Mr. Welsh said.

Kyndi and others are betting that the time is finally right to take on some of the more daunting challenges in A.I. That echoes the trajectory of deep learning, which made little progress for decades before the recent explosion of digital data and ever-faster computers fueled leaps in performance of its so-called neural networks. Those networks are digital layers loosely analogous to biological neurons. The “deep” refers to many layers.

There are other hopeful signs in the beyond-deep-learning camp. Vicarious, a start-up developing robots that can quickly switch from task to task like humans, published promising research in the journal Science last fall. Its A.I. technology learned from relatively few examples to mimic human visual intelligence, using data 300 times more efficiently than deep learning models. The system also broke through the defenses of captchas, the squiggly letter identification tests on websites meant to foil software intruders.

Vicarious, whose investors include Elon Musk, Jeff Bezos and Mark Zuckerberg, is a prominent example of the entrepreneurial pursuit of new paths in A.I.

“Deep learning has given us a glimpse of the promised land, but we need to invest in other approaches,” said Dileep George, an A.I. expert and co-founder of Vicarious, which is based in Union City, Calif.

The Pentagon’s research arm, the Defense Advanced Research Projects Agency, has proposed a program to seed university research and provide a noncommercial network for sharing ideas on technology to emulate human common-sense reasoning, where deep learning falls short. If approved, the program, Machine Common Sense, would start this fall and most likely run for five years, with total funding of about $60 million.

“This is a high-risk project, and the problem is bigger than any one company or research group,” said David Gunning, who managed Darpa’s personal assistant program, which ended a decade ago and produced the technology that became Apple’s Siri.
https://www.nytimes.com/2018/06/20/t...elligence.html





Everything Big Data Claims to Know About You Could be Wrong
Yasmin Anwar

When it comes to understanding what makes people tick — and get sick — medical science has long assumed that the bigger the sample of human subjects, the better. But new research led by UC Berkeley suggests this big-data approach may be wildly off the mark.

That’s largely because emotions, behavior and physiology vary markedly from one person to the next and one moment to the next. So averaging out data collected from a large group of human subjects at a given instant offers only a snapshot, and a fuzzy one at that, researchers said.

The findings, published this week in the Proceedings of the National Academy of Sciences journal, have implications for everything from mining social media data to customizing health therapies, and could change the way researchers and clinicians analyze, diagnose and treat mental and physical disorders.

“If you want to know what individuals feel or how they become sick, you have to conduct research on individuals, not on groups,” said study lead author Aaron Fisher, an assistant professor of psychology at UC Berkeley. “Diseases, mental disorders, emotions, and behaviors are expressed within individual people, over time. A snapshot of many people at one moment in time can’t capture these phenomena.”

Moreover, the consequences of continuing to rely on group data in the medical, social and behavioral sciences include misdiagnoses, prescribing the wrong treatments and generally perpetuating scientific theory and experimentation that is not properly calibrated to the differences between individuals, Fisher said.

That said, a fix is within reach: “People shouldn’t necessarily lose faith in medical or social science,” he said. “Instead, they should see the potential to conduct scientific studies as a part of routine care. This is how we can truly personalize medicine.”

Plus, he noted, “modern technologies allow us to collect many observations per person relatively easily, and modern computing makes the analysis of these data possible in ways that were not possible in the past.”

HOW THEY CONDUCTED THE RESEARCH

Fisher and fellow researchers at Drexel University in Philadelphia and the University of Groningen in the Netherlands used statistical models to compare data collected on hundreds of people, including healthy individuals and those with disorders ranging from depression and anxiety to post-traumatic stress disorder and panic disorder.

In six separate studies they analyzed data via online and smartphone self-report surveys, as well as electrocardiogram tests to measure heart rates. The results consistently showed that what’s true for the group is not necessarily true for the individual.

For example, a group analysis of people with depression found that they worry a great deal. But when the same analysis was applied to each individual in that group, researchers discovered wide variations that ranged from zero worrying to agonizing well above the group average.

Moreover, in looking at the correlation between fear and avoidance – a common association in group research – they found that for many individuals, fear did not cause them to avoid certain activities, or vice versa.
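The group-versus-individual gap the study describes is easy to reproduce with simulated data: give each simulated person a stable baseline that links fear and avoidance between people, but let the two fluctuate independently within each person over time. The pooled snapshot then shows a strong correlation that no individual actually exhibits. A hypothetical sketch, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_days = 50, 100

# Between-person baselines: people who are generally more fearful
# also generally avoid more, so a cross-sectional snapshot links the two.
base = rng.normal(0, 2, size=n_people)
fear = base[:, None] + rng.normal(0, 1, size=(n_people, n_days))
avoid = base[:, None] + rng.normal(0, 1, size=(n_people, n_days))
# Within each person, day-to-day fear and avoidance are independent.

pooled_r = np.corrcoef(fear.ravel(), avoid.ravel())[0, 1]
within_r = np.mean([np.corrcoef(fear[i], avoid[i])[0, 1]
                    for i in range(n_people)])

print(f"pooled correlation: {pooled_r:.2f}")   # strong
print(f"mean within-person correlation: {within_r:.2f}")  # near zero
```

The pooled correlation here is driven entirely by differences between people, so inferring anything about an individual's day-to-day dynamics from it would be exactly the mistake the researchers warn against.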

“Fisher’s findings clearly imply that capturing a person’s own processes as they fluctuate over time may get us far closer to individualized treatment,” said UC Berkeley psychologist Stephen Hinshaw, an expert in psychopathology and faculty member of the department’s clinical science program.

In addition to Fisher, co-authors of the study are John Medaglia at Drexel University and Bertus Jeronimus at the University of Groningen.
http://news.berkeley.edu/2018/06/18/big-data-flaws/





ACLU Wants to Keep Your Phone Safe from Sneaky Government Malware

Not everyone can stand up to demands like Apple does. The ACLU wants to change that.
Alfred Ng

The balance between security and law enforcement is often an issue for tech companies. The American Civil Liberties Union wants to tip the scales in security's favor.

On Thursday, the ACLU released its guide to developers on how to respond to government demands when the requests require companies to compromise their own security. It happens a lot more often than you probably think.

Two years ago, Apple famously fought off FBI demands to unlock an iPhone belonging to one of the San Bernardino terrorists, which would have required that the company create backdoor access, essentially installing a vulnerability that could extend across the iPhone line.

Officials in the US, Australia and the UK have also called for tech companies to build "responsible encryption," which security experts argue would create more openings for hackers to penetrate systems.

The ACLU anticipates a new threat from government requests: potentially forcing developers to install software updates with hidden surveillance tools, whether for tracking a phone's location or bypassing encryption and passcodes.

"As the engineering becomes better, and as the encryption becomes stronger, there's still always going to be this one channel into the device, which is the software update channel," said Brett Max Kaufman, an ACLU attorney. "In some sense, that's the hole that can never be closed."
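The reason the update channel is such a potent target is that clients trust anything the vendor's key vouches for. A hypothetical minimal sketch of that trust check, using a shared-secret HMAC purely for illustration (real update systems use public-key signatures such as Ed25519, and every name and key here is invented):

```python
import hmac
import hashlib

VENDOR_KEY = b"example-shared-secret"   # illustrative stand-in only

def sign(update: bytes) -> bytes:
    # The vendor tags each release with its key.
    return hmac.new(VENDOR_KEY, update, hashlib.sha256).digest()

def verify_and_install(update: bytes, tag: bytes) -> bool:
    # The client refuses anything the key did not vouch for.
    if not hmac.compare_digest(sign(update), tag):
        return False                    # tampered or unsigned: reject
    # ... apply the update ...
    return True

good = b"patch v2"
print(verify_and_install(good, sign(good)))                    # accepted
print(verify_and_install(b"patch v2 + spyware", sign(good)))   # rejected
```

The sketch also shows why the ACLU's concern is structural: whoever holds the signing key, including a vendor compelled by court order, can produce a "valid" update, so the cryptography alone cannot distinguish a security patch from compelled surveillance software.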

As digital evidence becomes more important in investigations, governments are ramping up requests to tech companies, asking tech giants like Apple and Google to provide data that police wouldn't be able to get otherwise.

In 2017, both Apple and Google reported their highest number of government data requests ever, with Apple receiving 8,929 demands, while Google received 32,877 orders for information. Those numbers don't include government requests to weaken security, but the ACLU worries they could in the future.

A major consequence of tainted security updates, ACLU technologist Daniel Kahn Gillmor said, would be that you'd lose trust in necessary patches.

"People will likely stop wanting to run the automatic updates because they'll feel like they're under threats," Gillmor said. "We see this as a public safety issue."

The organization said the scenario was the digital equivalent of the CIA's fake vaccination drive in Pakistan, which led to public distrust of health workers and an increase in cases of polio.

If people don't trust security updates, it could lead to vulnerabilities allowing widespread malware, like the WannaCry ransomware attack that ensnared thousands of computers in hospitals, universities and financial institutions.

The ACLU's guide breaks down what developers should do across four sections, but here's the short version: understand the issue; implement privacy-minded policies; plan responses to government orders ahead of time; and lawyer up.

The US government can request companies weaken their own security through court orders demanding technical assistance. Apple's battle with the FBI in 2016 kicked off with a court order, for example. Some court orders can even have secrecy clauses, forcing companies to keep quiet about the unsecure updates, the ACLU said.

"Some of this could be happening under seal, or via informal agreements with software suppliers," Gillmor said.

The organization said developers have a right to challenge these orders in court, and that preparation will improve their chances of winning the arguments.

The guide includes policy, legal and technical advice on how companies should deal with government orders on security. The ACLU said it would be interested in helping any companies struggling to fight off these requests.
https://www.cnet.com/news/aclu-wants...l-to-security/





In Ruling on Cellphone Location Data, Supreme Court Makes Statement on Digital Privacy
Adam Liptak

In a major statement on privacy in the digital age, the Supreme Court ruled on Friday that the government generally needs a warrant to collect troves of location data about the customers of cellphone companies.

“We decline to grant the state unrestricted access to a wireless carrier’s database of physical location information,” Chief Justice John G. Roberts Jr. wrote for the majority.

The 5-to-4 ruling will protect “deeply revealing” records associated with 400 million devices, the chief justice wrote. It did not matter, he wrote, that the records were in the hands of a third party. That aspect of the ruling was a significant break from earlier decisions.

The Constitution must take account of vast technological changes, Chief Justice Roberts wrote, noting that digital data can provide a comprehensive, detailed — and intrusive — overview of private affairs that would have been impossible to imagine not long ago.

The decision made exceptions for emergencies like bomb threats and child abductions. “Such exigencies,” he wrote, “include the need to pursue a fleeing suspect, protect individuals who are threatened with imminent harm or prevent the imminent destruction of evidence.”

In general, though, the authorities must now seek a warrant for cell tower location information and, the logic of the decision suggests, other kinds of digital data that provide a detailed look at a person’s private life.

The decision thus has implications for all kinds of personal information held by third parties, including email and text messages, internet searches, and bank and credit card records. But Chief Justice Roberts said the ruling had limits.

“We hold only that a warrant is required in the rare case where the suspect has a legitimate privacy interest in records held by a third party,” the chief justice wrote. The court’s four more liberal members — Justices Ruth Bader Ginsburg, Stephen G. Breyer, Sonia Sotomayor and Elena Kagan — joined his opinion.

Each of the four other justices wrote a dissent, with the five opinions running to more than 110 pages. In one dissent, Justice Anthony M. Kennedy said the distinctions drawn by the majority were illogical and “will frustrate principled application of the Fourth Amendment in many routine yet vital law enforcement operations.”

“Cell-site records,” he wrote, “are uniquely suited to help the government develop probable cause to apprehend some of the nation’s most dangerous criminals: serial killers, rapists, arsonists, robbers and so forth.”

In a second dissent, Justice Samuel A. Alito Jr. wrote that the decision “guarantees a blizzard of litigation while threatening many legitimate and valuable investigative practices upon which law enforcement has rightfully come to rely.”

The case, Carpenter v. United States, No. 16-402, arose from armed robberies of Radio Shacks and other stores in the Detroit area starting in 2010.

Witnesses said that Timothy Ivory Carpenter had planned the robberies, supplied guns and served as lookout, typically waiting in a stolen car across the street.

“At his signal, the robbers entered the store, brandished their guns, herded customers and employees to the back, and ordered the employees to fill the robbers’ bags with new smartphones,” a court decision said, summarizing the evidence against him.

Prosecutors also relied on months of records obtained from cellphone companies to prove their case. The records showed that Mr. Carpenter’s phone had been nearby when several of the robberies happened. He was convicted and sentenced to 116 years in prison.

Mr. Carpenter’s lawyers said cellphone companies had turned over 127 days of records that placed his phone at 12,898 locations, based on information from cellphone towers. The records disclosed whether he had slept at home on given nights and whether he attended his usual church on Sunday mornings.

Chief Justice Roberts wrote that the information was entitled to privacy protection.

“Mapping a cellphone’s location over the course of 127 days provides an all-encompassing record of the holder’s whereabouts,” he wrote, going on to quote from an earlier opinion. “As with GPS information, the time-stamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his ‘familial, political, professional, religious and sexual associations.’”

In dissent, Justice Kennedy wrote that GPS devices provide much more precise location information than do cell towers. Chief Justice Roberts responded that cell tower technology is developing quickly.

“As the number of cell sites has proliferated,” he wrote, “the geographic area covered by each cell sector has shrunk, particularly in urban areas. In addition, with new technology measuring the time and angle of signals hitting their towers, wireless carriers already have the capability to pinpoint a phone’s location within 50 meters.”

Chief Justice Roberts left open the question of whether limited government requests for location data required a warrant. But he said that access to seven days of data is enough to raise Fourth Amendment concerns.

The legal question for the justices was whether prosecutors violated the Fourth Amendment, which bars unreasonable searches, by collecting without warrant vast amounts of data from cellphone companies that showed Mr. Carpenter’s movements.

In a pair of recent decisions, the Supreme Court expressed discomfort with allowing unlimited government access to digital data. In United States v. Jones, it limited the ability of the police to use GPS devices to track suspects’ movements. And in Riley v. California, it required a warrant to search cellphones.

Chief Justice Roberts wrote that both decisions supported the result in the new case.

As his opinion in Riley pointed out, he wrote, “cellphones and the services they provide are ‘such a pervasive and insistent part of daily life’ that carrying one is indispensable to participation in modern society.”

And the Jones decision, he wrote, addressed digital privacy in the context of location information.

“The question we confront today,” he wrote, “is how to apply the Fourth Amendment to a new phenomenon: the ability to chronicle a person’s past movements through the record of his cellphone signals. Such tracking partakes of many of the qualities of the GPS monitoring we considered in Jones. Much like GPS tracking of a vehicle, cellphone location information is detailed, encyclopedic and effortlessly compiled.”

Technology companies including Apple, Facebook and Google filed a brief urging the Supreme Court to continue to bring Fourth Amendment law into the modern era. “No constitutional doctrine should presume,” the brief said, “that consumers assume the risk of warrantless government surveillance simply by using technologies that are beneficial and increasingly integrated into modern life.”

Older Supreme Court decisions offered little protection for information about businesses’ customers. In 1979, for instance, in Smith v. Maryland, the Supreme Court ruled that a robbery suspect had no reasonable expectation that his right to privacy extended to the numbers dialed from his landline phone. The court reasoned that the suspect had voluntarily turned over that information to a third party: the phone company.

Relying on the Smith decision’s “third-party doctrine,” federal appeals courts have said that government investigators seeking data from cellphone companies showing users’ movements do not require a warrant.

But Chief Justice Roberts wrote that the doctrine is of limited use in the digital age.

“While the third-party doctrine applies to telephone numbers and bank records, it is not clear whether its logic extends to the qualitatively different category of cell-site records,” he wrote. “After all, when Smith was decided in 1979, few could have imagined a society in which a phone goes wherever its owner goes, conveying to the wireless carrier not just dialed digits, but a detailed and comprehensive record of the person’s movements.”

“When the government tracks the location of a cellphone,” the chief justice wrote, “it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”

A federal law, the Stored Communications Act, does require prosecutors to go to court to obtain tracking data, but the showing they must make under the law is not probable cause, the standard for a warrant. Instead, they must demonstrate only that there were “specific and articulable facts showing that there are reasonable grounds to believe” that the records sought “are relevant and material to an ongoing criminal investigation.”

That was insufficient, the court ruled. But Chief Justice Roberts emphasized the limits of the decision. It did not address real-time cell tower data, he wrote, “or call into question conventional surveillance techniques and tools, such as security cameras.”

“Nor do we address other business records that might incidentally reveal location information,” the chief justice wrote. “Further, our opinion does not consider other collection techniques involving foreign affairs or national security.”
https://www.nytimes.com/2018/06/22/u...e-privacy.html





‘Gaming Disorder’ is Officially Recognized by the World Health Organization
Brian Heater

Honestly, “gaming disorder” sounds like a phrase tossed around by irritated parents and significant others. After much back and forth, however, the term was just granted validity, as the World Health Organization opted to include it in the latest edition of its International Classification of Diseases.

The volume, out this week, diagnoses the newly minted disorder with three key telltale signs:

1. Impaired control over gaming (e.g. onset, frequency, intensity, duration, termination, context)
2. Increasing priority given to gaming to the extent that gaming takes precedence over other life interests and daily activities
3. Continuation or escalation of gaming despite the occurrence of negative consequences

I can hear the collective sound of many of my friends gulping at the sound of eerily familiar symptoms. Of course, the disorder has been criticized from a number of corners, including by health professionals who have written it off as overly broad and subjective. And, of course, the potential impact differs greatly from person to person and game to game.

The effects as specified above share common ground with other similar addictive activities defined by the WHO, including gambling disorder:

“Disorders due to addictive behaviours are recognizable and clinically significant syndromes associated with distress or interference with personal functions that develop as a result of repetitive rewarding behaviours other than the use of dependence-producing substances,” writes the WHO. “Disorders due to addictive behaviors include gambling disorder and gaming disorder, which may involve both online and offline behaviour.”

In spite of what may appear to be universal symptoms, however, the organization is quick to note that the prevalence of gaming disorder, as defined by the WHO, is actually “very low.” WHO member Dr. Vladimir Poznyak tells CNN, “Millions of gamers around the world, even when it comes to the intense gaming, would never qualify as people suffering from gaming disorder.”
https://techcrunch.com/2018/06/18/ga...-organization/





An Ode to Late Nights On LimeWire

For those who grew up in the 2000s, the file-sharing software offered a way to forge a taste in music and understand the internet.
Daisy Jones

For a certain generation, there are two versions of “online.” There’s the “online” of today, in which we exist online: our phones tracking us in our pockets, watching box-sets over Netflix in the evening, WhatsApping our friends in between bites of dinner. And then there’s the “online” that was then, before the mid-2000s, which sat alongside an “offline.” That version involved turning on the computer and hearing it click and whirr into action. It involved waiting for your mum to get off the house phone so you could access the dial-up connection. It involved plugging into a secret space that existed inside this big, grey, heated object and tap, tap, tapping on a keyboard. But, crucially, it was limited to software like MSN Messenger, Internet Explorer, AOL, and, for some of us, LimeWire.

LimeWire was a free peer-to-peer file-sharing network that existed from 2000 until ten years later when it was shut down by a federal court following a four-year legal battle with the US music industry. By then, it had already disappeared into obscurity alongside Napster, which came and went before it. Streaming apps were becoming ubiquitous and even the idea of “illegally downloading MP3s” felt like a faded way of doing things. Speak to anyone under the age of 21 about LimeWire, and they’ll probably think you’re chatting about a vape flavor or subgenre. But for me and other people now in their mid to late-20s, LimeWire represented a treasure chest. It sounds absurd now that YouTube and the streaming giants exist, but the fact you could type a song that you wanted into a box, and that song would then appear a second later, felt like a revelation.

When I was 13 or 14 there was this magic window of time after dinner and before bed in which I could escape to a brand new world that felt simultaneously anonymous and expansive. I would scroll through files upon files until the darkness behind my lids resembled The Matrix, discovering bands like My Bloody Valentine while looking for Sonic Youth (so many songs were labelled wrong) or diving into late-90s club film soundtracks (there was a lot from Human Traffic and Trainspotting on there for some reason) or unearthing deep cuts (shout out Bowie’s cover of “Sorrow”). Downloading music back then would involve wading through porn files with names like “6girlslesbianthreesomexxxgangbang.wav” and dead, virus-filled links that caused pop-ups to explode across the screen like bacteria. But once the tracks were yours they'd be burned onto a mix CD for your walkman later, each name scribbled on disc in sharpie. There was a ritualism to the whole thing that felt satisfying and novel, and which I have not been able to replicate—or even definitively pinpoint—since.

In a recent essay for Hazlitt, writer Helena Fitzgerald brilliantly describes the vibe and energy of these early internet excursions. “The whole internet had something sexual about it in its early days, and that was much of what got us on there,” she writes. “It was the place where we were allowed to talk about things we would never say out loud.” To me, the use of LimeWire wasn’t “sexual” in the way, say, early-2000s chat rooms might be for people, but there was a secrecy and intimacy to the process that was appealing. As Fitzgerald points out in her piece, your adolescence is a time when you’re figuring out your tastes, and desires, and who you want to be. On LimeWire, you could lose yourself for hours searching for the perfect thing to listen to, and because it felt like no one was watching, that could be anything. Before then, the only way to access music was physical, or through radio and music TV, but those platforms were often consumed publicly, so had the propensity to be performative. Suddenly, you didn’t have to pretend to like watching Kerrang! or Kiss or whatever was cool among your friends. You could download Britney albums until 2 AM, or get heavily into Norwegian metal. Whatever: LimeWire made your taste yours.

The world LimeWire introduced to music fans hasn’t exactly gone away—it’s just become so familiar that we barely notice it’s there, like breakfast cereal or the bus route to work. Spending hours down a late-night Internet hole—clicking through Soundcloud links, letting the YouTube algorithm pull you into a tunnel of 80s Japanese pop—has become the way things are, rather than a new mode of being. There’s a different energy to the whole process, too. Instead of being plugged into a machine after school in your bedroom and navigating through literal shit with the hyper-focus of a cyber-detective, you can just listen to music on your phone immediately, wherever. In that way, perhaps it feels less private, less like climbing alone into the darkness for the very first time to find yourself.

All of that said, I don’t miss LimeWire. It’s obviously way better that young people today can listen to their favorite track on Spotify or whatever rather than having to sift through viruses in the hope they can download a nu metal mega-mix that doesn’t actually exist, or else accidentally stumble across a video of a real-life human decapitation while searching for t.A.T.u. But I do credit the software for opening my world up. If it wasn’t for LimeWire, I probably wouldn’t have all the Placebo B-sides etched into my brain. I would never have the Virgin Suicides soundtrack burnt to disc. And I might not have ventured out of the confines of what I was “supposed” to like—at least not so soon. LimeWire made me a music nerd and an Internet nerd all at once, and I’d hazard a guess it did the same for a lot of others.
https://noisey.vice.com/en_us/articl...ding-mp3s-2018





Piracy Didn’t Fade, it Just Got Cleverer
Adrian Pennington

Galvanised into action, the media industry can claim some success in reducing incidents of illegal streaming. But the threat remains high as pirates turn to more sophisticated methods of attack.

This time last year the industry was in a spin. In close succession, hackers had breached Netflix, Disney and HBO, threatening to release script details or entire shows to the web unless ransoms were paid. Even then, Game of Thrones season seven was pirated more than a billion times, according to one estimate.

Euphemistically known as content redistribution, piracy was rife in sports broadcasting too.

The industry’s worst fears were confirmed shortly before IBC when ‘The Money Fight’ between boxers Floyd Mayweather and Conor McGregor haemorrhaged cash for operator Showtime as three million people watched illegally.

In recent months, though, no such high-profile incident has occurred – or at least been made public. The industry would appear to have stemmed the tide.

Massive investment pays dividends

This is at least in part due to the firepower being thrown at the problem.

Ovum estimates that spending on TV and video anti-piracy services will reach US$1bn worldwide by the end of the year - a rise of 75% on 2017. Increasing adoption of these anti-piracy services, bundled with premium content protection technology stacks such as DRM, fingerprinting, watermarking, paywalls and tokenised authentication, will see losses fall from 16% of revenue in 2017 to 13% in 2018, the analyst predicts.
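Of the technologies in that stack, tokenised authentication is the easiest to illustrate. The sketch below is a generic pattern rather than any vendor's product, and every name, path and secret in it is hypothetical: the origin server signs a stream URL with an HMAC and an expiry timestamp, so a CDN edge can reject expired or tampered links without any database lookup.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key shared by origin and edge

def sign_url(path, ttl=300, now=None):
    """Append an expiry timestamp and an HMAC over path+expiry, so the
    edge can verify the link statelessly."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = f"{path}:{expires}".encode()
    token = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&token={token}"

def verify_url(path, expires, token, now=None):
    """Recompute the HMAC and check that the link has not expired."""
    current = int(now if now is not None else time.time())
    msg = f"{path}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token) and current < int(expires)
```

A link shared by a pirate stops working once the TTL lapses, and editing the path or expiry invalidates the signature.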

Last June, Netflix, HBO, Disney, Amazon and Sky were among more than 30 studios and international broadcasters ganging together to form the anti-piracy Alliance for Creativity and Entertainment (ACE). It shut down Florida-based SET Broadcast pending a lawsuit alleging the streaming subscription service was pirating content. ACE has also initiated legal action against Kodi set top box makers in Australia, the UK and the US (including TickBox TV and Dragon Box) for providing illicit access to copyrighted content.

In the UK, the Digital Production Partnership (DPP) unveiled its Committed to Security Programme at IBC2017 to help companies self-assess against key industry security criteria. It has since awarded the appropriate ‘committed to security’ mark to two dozen companies including Arqiva, Base Media Cloud, Dropbox, Imagen, Piksel and Signiant.

“We have seen the impact of new countermeasures and legal actions implemented in several advanced markets over the past 18 months,” reports Simon Trudelle, Senior Director of Product Marketing at content security experts, Nagra.

“For instance, ISPs and cloud platform providers in Western Europe are now better informed and are more cooperative when notified of an official takedown notice.

Trudelle says that, as a result, a large chunk of pirate infrastructure has moved to jurisdictions outside of Western Europe, where intellectual property rights are more challenging to enforce. Because this pirate infrastructure is further away from major cloud and CDN hubs in Western Europe, it reduces the quality of the pirate services.

Also, the EU’s data privacy regulation, GDPR, has raised awareness in the fight against illicit streaming services.

“Broad communication on data and privacy issues help consumers realise that their illegal actions could be traced, or that their personal data, including ID and payment information, could be stolen and misused by organised crime,” says Trudelle.

Previously, content theft was a crime that largely went unenforced - authorities didn’t know what to do or how to stop it. Now, according to content security vendor Verimatrix’s CTO Petr Peterka, authorities are better equipped to understand what piracy looks like, how to find it and how to stop it - all of which makes it more difficult for pirates to hide or remain anonymous.

“The most effective approach to countering threats of piracy starts with education, then moves into rights expertise, with rights enforcement being the final step,” says Peterka.

Clear and present danger

But far from receding, the security threat remains as high as ever. Even at 13%, the revenue expected to be lost this year by global online TV and video services (excluding film entertainment) amounts to US$37.4bn.

A new major case of piracy has erupted during the FIFA World Cup, proving it’s still a major issue for the media industry. FIFA is taking action against Saudi TV channel BeoutQ for alleged illegal broadcasts of the opening games of the World Cup, infringing the exclusive regional rights to the competition held by Qatar’s beIN Media Group.

The most serious threat comes from the Asia-Pacific region, which will account for roughly 40% of all revenue leakage, according to Ovum.

“[The focus of] attacks have moved – slightly - from Tier-I premium content towards Tier-II and Tier-III formats (regional and local content),” says Ovum principal consultant for Media & Broadcast Technology, Kedar Mohite. “Attackers are specifically targeting local markets… focusing on Hollywood titles distributed through local touch points in Asia-Pacific.”

Furthermore, the fragmentation of access points to content from web, devices, platforms and workgroups (a pre-launch IP theft scenario) means premium content security has to continuously evolve.

“Cybercrime is now the main source of funding for organised criminal groups,” says Ovum Research Director Maxine Holt. “These groups are extremely well funded and therefore have the time and the inclination to launch extended attacks that can lie undetected for many, many months.”

Content protection agency MUSO charted over 300 billion visits to piracy websites across music, TV and film, publishing, and software in 2017, more than a third of which were to pirate sites hosting television content (106.9 billion). It records that the worst-offending nation is the U.S., where 27.9 billion visits were made to pirate sites in 2017 (followed by Russia with 20.6bn and India with 17bn).

“There is a belief that the rise in popularity of on-demand services – such as Netflix and Spotify – have solved piracy, but that theory simply doesn’t stack up. Our data suggests that piracy is more popular than ever,” says MUSO co-founder and CEO Andy Chatterley. “The data shows us that 53% of all piracy happens on unlicensed streaming platforms.”

More advanced content security measures may have made it more difficult to hack into the cryptographic components of the content protection system, with consequently fewer ‘traditional’ security breaches. However, even as protection mechanisms get more sophisticated, the number of vulnerabilities continues to increase.

Commercial piracy

“Content is available on many more networks, giving pirates more points of attack than just the smartcard,” says Peterka. “Pirates are now trying to go up stream all the way to content creation itself because pirating that content before it enters the conditional access/DRM domain gives them the biggest benefit. This is why content owners are now employing watermarking before it even hits movie theatres; piracy has to be addressed all the way up to the original source.”

“In some respects, piracy is actually getting worse,” Twentieth Century Fox’s SVP for Content Protection and Technology Ron Wheeler told the Pay-TV Innovation Forum. “Illicit streaming devices and associated services cost users real money and therefore target the same paying customers that legitimate broadcast and OTT services do.”

Nagra says such “commercial piracy” is a more sophisticated form that involves advanced streaming platforms, front-end marketing sites and payment servers that aim to compete with legitimate services.

“These offerings are particularly damaging in emerging markets, where consumers can hardly tell the difference between legitimate and pirate services,” says Trudelle.

No threat goes away - it morphs over time. Attackers are combining different forms of attack and even sharing codebases to circumvent the defences the cybersecurity industry puts in. At the same time, security experts have also ramped up their solutions to disrupt these threats.

Irdeto is using artificial intelligence to detect illegal streams through semantic analysis of social media advertisements or web page indexes, to identify broadcaster logos and even athletes via facial recognition. With the stream flagged as an illegal piece of content, a takedown notice is issued.

“Once pirates realise the detection techniques that are being employed they start adjusting their methods – blanking or switching out logos for example,” says Irdeto VP of Cybersecurity Services Mark Mulready. “The more mischievous ones are actually putting on other logos of other broadcasters.”

That’s where the next phase of the machine learning project comes in. “We’re trying to teach the system to recognise things like football strips so it can actually determine which game is on from seeing, for example, Barcelona’s colours.”

Nagra is introducing new watermarking solutions for OTT delivery apps at IBC2018. This will allow content and rights owners to trace leaks to their origins on a consumer streaming device, enabling operators to turn off a suspicious user and disrupt pirate services during live events. The company is also expanding its monitoring and takedown capabilities.
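Session-based forensic watermarking of this kind can be illustrated with a deliberately naive sketch (real systems embed the mark robustly across the compressed video, not in raw pixel bits, and this is not Nagra's method): each subscriber's stream carries a unique identifier, which the operator can read back out of a leaked copy to find the source.

```python
def embed_id(pixels, user_id, bits=32):
    """Toy forensic watermark: hide `user_id` in the least significant
    bit of the first `bits` values of a flat grayscale pixel buffer."""
    out = list(pixels)
    for i in range(bits):
        bit = (user_id >> i) & 1
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the id bit
    return out

def extract_id(pixels, bits=32):
    """Recover the identifier from a (lossless) leaked copy."""
    return sum((pixels[i] & 1) << i for i in range(bits))
```

Because only least-significant bits change, the marked frame is visually identical to the original; the trade-off is that this toy mark would not survive re-encoding, which is exactly why production watermarks are embedded differently.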

Verimatrix’s Peterka says: “We may never stop piracy but making it more difficult and less economical for pirates to steal can help slow it down. To stay on top of content protection, it is essential that service providers keep investing in security to discover and patch any vulnerabilities in a timely manner.”

Meanwhile, cryptocurrencies like bitcoin have made it easier for attackers to ‘cash out’ undetected, while the emergent Internet of Things will only magnify the threat.

“We are no longer dealing with a handful of companies with closed ecosystems solely responsible for securing data on the device,” warned McAfee CEO Christopher Young recently. The cybersecurity firm tracks 600,000 unique threats a day on 300 million devices and says cybercrime drains US$600 billion from businesses a year.

“With open systems the network also connects to hundreds of billions of devices. How will we secure this large-scale connected device ecosystem without stifling growth and innovation? We stand on a precipice today.”
https://www.ibc.org/content-manageme...r/2900.article





The EU's Bizarre War on Memes is Totally Unwinnable

A proposed new European copyright law could make memes illegal and threaten the future of the internet as we know it. Time to panic?
K.G Orphanides

On June 20, the European Parliament will set in motion a process that could force online platforms like Facebook, Reddit and even 4chan to censor their users' content before it ever gets online.

A proposed new European copyright law wants large websites to use "content recognition technologies" to scan for copyrighted videos, music, photos, text and code, in a move that could impact everyone from the open source software community to remixers, livestreamers and teenage meme creators.

In an open letter to the President of the European Parliament, some of the world's most prominent technologists warn that Article 13 of the proposed EU Copyright Directive "takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users."

The directive includes a great deal of useful legislation to update copyright law and better reflect modern technologies. But Article 13 is problematic.

Proposed EU Copyright Directive

Article 13.1
“Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall, in cooperation with rightholders, take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate. The service providers shall provide rightholders with adequate information on the functioning and the deployment of the measures, as well as, when relevant, adequate reporting on the recognition and use of the works and other subject-matter.”

It's a direct threat to the established legal notion that individual users, rather than platforms, are responsible for the content they put online.

"Article 13 effectively deputizes social media and other Internet companies as copyright police, forcing them to implement a highly invasive surveillance infrastructure across their entire service offerings," says cryptographer and security specialist Bruce Schneier, one of the letter's signatories. "Aside from the harm from the provisions of Article 13, this infrastructure can be easily repurposed by government and corporations – and further entrenches ubiquitous surveillance into the fabric of the Internet."

Schneier and his fellow technologists, including figures responsible for the internet as we know it, like Tim Berners-Lee and Vint Cerf, are campaigning alongside the Electronic Frontier Foundation, Wikimedia and the Libraries and Archives Copyright Alliance, among many others.

The first Legislative Committee vote on the final form of the proposal takes place on Wednesday, June 20. The version the committee votes through will be referred to the parliamentary plenary session, to be – almost certainly – voted into European law in the week of July 4 or, failing that, after the European Parliament returns from its summer recess in late September.

The Save Your Internet campaign is urging European internet users to contact their MEPs before the critical June 20 vote, and includes tools to facilitate communication with them via email, phone or social media.

Area of effect

Although it's primarily intended to prevent the online streaming of pirated music and video, the scope of Article 13 covers all and any copyrightable material, including images, audio, video, compiled software, code and the written word.

Internet memes – which most commonly take the form of viral images, endlessly copied, repeated and riffed on – could fall into a number of those categories, creating an improbable scenario in which one of the internet's most distinctive and commonplace forms of communication is banned.

The definitions used in Article 13 are broad by design, says writer and digital rights activist Cory Doctorow: "This system treats restrictions on free expression as the unfortunate but unavoidable collateral damage of protecting copyright. Automated systems just can't distinguish between commentary, criticism, and parody and mere copying, nor could the platforms employ a workforce big enough to adjudicate each case to see if a match to a copyrighted work falls within one of copyright's limitations and exceptions."

Meme makers don't have the kind of organised front that code-sharing platforms or the Wikimedia Foundation do, but there have been a few, albeit rather muted, efforts to raise a fuss among meme-making groups on Reddit, Facebook and 4chan, with leftist meme creators in particular expressing concerns that the new law "will result in blanket meme bans because they can't keep up with actually checking against parody laws".

A redditor from r/dankmemes has passionately proclaimed that "you can take our internet and our rights, but you can never take our memes." And it gets weirder the further right you go, as conspiracy theories proliferate. One denizen of 4chan's /pol/ went so far as to suggest that attempts to muster support against Article 13's content platform filtering are "a pro-Article 13 psyop meant to make the opposition look uncool", while other comment threads on 4chan and Breitbart focussed on the always fertile alt-right tactics of blaming the Jews, female MEPs, and hedge fund magnate George Soros.

The technology to filter out memes – or for that matter any copyrighted material – would require a significant investment of time and money to develop. This means that we could see the detection of copyrighted material outsourced to companies with the means to carry it out effectively – likely US internet giants such as Amazon and Google.

A filtering system would be very likely to see European users' posts analysed by US firms, which could expose their data to the US's far less stringent privacy controls, despite the EU-US Privacy Shield framework for data protection.

The Max Planck Institute for Innovation and Competition's formal response to Article 13 also notes that this kind of automatic filtering is in breach of both the European Charter of Fundamental Rights and Article 15 of the E-Commerce Directive, which prohibits Member States from "imposing on providers that enjoy the protection of a safe harbour, general obligations to monitor the information which they transmit or store, as well as general obligations actively to seek facts or circumstances indicating illegal activity."

Article 13's statements that it concerns sites that provide "large amounts of works" and that "measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate" may give leeway to smaller platforms to avoid intensive copyright filtering, while some proposed alternative versions of the article even omit reference to content recognition.

However, that's by no means certain, and the additional burden of policing copyright, Doctorow says, could stifle the development of new platforms and technologies within the EU for years to come.

Detection tech

Google, Facebook and Amazon have advanced image recognition algorithms based on machine learning. TinEye uses hashes as unique signatures to identify specific images in whole or in part and Google uses a similar technique to spot specific screener copies of movies if they're uploaded to Drive.
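TinEye's actual algorithm is proprietary, but the hash-as-signature idea can be sketched with a classic "average hash" over a grayscale pixel grid: downsample the image, set each bit by comparing a cell to the overall mean brightness, and compare signatures by Hamming distance. A small distance flags a near-duplicate even after uniform brightness changes.

```python
def average_hash(pixels, size=8):
    """Downsample a 2-D grayscale image to size x size cells, then set
    each bit to 1 if the cell is brighter than the image's mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # average the block of source pixels covered by this cell
            rows = range(r * h // size, max((r + 1) * h // size, r * h // size + 1))
            cols = range(c * w // size, max((c + 1) * w // size, c * w // size + 1))
            block = [pixels[i][j] for i in rows for j in cols]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return sum(1 << i for i, v in enumerate(cells) if v > mean)

def hamming(h1, h2):
    """Number of differing bits; small distances suggest near-duplicates."""
    return bin(h1 ^ h2).count("1")
```

Because every bit is relative to the image's own mean, a copy that has been uniformly brightened or darkened hashes to exactly the same signature, while a genuinely different image lands many bits away.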

Text checking is more straightforward. Plagiarism detection services for the written word are provided by companies such as Grammarly and Plagiarism Checker X, although they're limited in scope by the available content for them to check against, while CopyLeaks allows copyright holders to see if their work has been plagiarised online.
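These commercial checkers differ in detail, but the baseline most plagiarism detection builds on is n-gram "shingling": split each document into overlapping word sequences and measure set overlap, so copied passages score high even when a few words have been swapped. A minimal sketch:

```python
def shingles(text, n=3):
    """Set of overlapping n-word sequences ('shingles') from the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap of two shingle sets: 1.0 means identical phrasing, 0.0 none shared."""
    sa, sb = shingles(a), shingles(b)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

As the article notes, a checker like this is only as good as its corpus: it can only score an upload against text it already has on hand to compare with.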

YouTube's Content ID system detects copyrighted materials by matching its users' uploaded videos with audio tracks that copyright holders can upload via the platform's content verification program portal.

However, false and inaccurate copyright claims are a frequent occurrence, while a great deal of copyrighted material goes onto the platform unnoticed, either due to clever evasion tactics such as re-editing content or because the copyright holder hasn't been in a position to upload a reference file.
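Content ID's matcher is proprietary and far more robust than anything this short, but the reference-file idea can be illustrated with a toy fingerprint: summarise each audio frame with one number (here, its zero-crossing count, a crude proxy for dominant frequency) and slide the rightsholder's reference fingerprint along the upload looking for a run of agreeing frames.

```python
def fingerprint(samples, frame=256):
    """Toy audio fingerprint: one number per frame of raw samples,
    derived from the frame's zero-crossing count."""
    fp = []
    for start in range(0, len(samples) - frame + 1, frame):
        chunk = samples[start:start + frame]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if (a < 0) != (b < 0))
        fp.append(min(crossings, 255))
    return fp

def match(fp_upload, fp_reference, tolerance=2):
    """Slide the reference over the upload; report the best fraction of
    frames whose counts agree within `tolerance`."""
    best, n = 0.0, len(fp_reference)
    for offset in range(len(fp_upload) - n + 1):
        window = fp_upload[offset:offset + n]
        hits = sum(1 for u, r in zip(window, fp_reference) if abs(u - r) <= tolerance)
        best = max(best, hits / n)
    return best
```

The `tolerance` parameter hints at why the article's evasion tactics (re-editing, pitch-shifting) work in practice: a fingerprint must be loose enough to survive encoding noise, which leaves room for pirates to slip past it and for false positives to slip in.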

“These systems are wildly imperfect and will not merely catch matches for copyrighted works, but also false positives,” Doctorow says. “Big media companies – with the ear of the big platforms – will be able to pick up the phone and have someone unblock a piece of media that was falsely flagged, but the rest of us will be stuck firing off an email and crossing our fingers.”

Even now, firms accidentally claim copyright over works they don't own. Doctorow highlights the example of US news programmes that broadcast public domain government footage, such as NASA launches and Congressional debates, include it in their newscasts and then claim copyright over the newscasts on YouTube. Then, he says, "when NASA or C-Span or whomever tries to upload their footage, they're blocked because a newscaster has sloppily filed a false copyright claim."

With no ability to identify context, automated copyright flagging systems are also likely to remove important content because of the incidental appearance of copyrighted material in the background, ignoring principles of fair dealing enshrined in the copyright laws of many EU countries: "something like having your protest footage blocked because of a passing motorist whose car radio was blaring a pop song – it is a match, but not one that infringes copyright."

Doctorow highlighted the potential for unanticipated abuse of any automated copyright filtering system to make false copyright claims, engage in targeted harassment and even silence public discourse at sensitive times.

“Because the directive does not provide penalties for abuse – and because rightsholders will not tolerate delays between claiming copyright over a work and suppressing its public display – it will be trivial to claim copyright over key works at key moments or use bots to claim copyrights on whole corpuses.”

The nature of automated systems, particularly if powerful rightsholders insist that they default to blocking potentially copyrighted material and only release it after a complaint, would make it easy for griefers to file claims over, for example, relevant Wikipedia articles on the eve of a Greek debt-default referendum, or over public domain content such as the entirety of Wikipedia or the complete works of Shakespeare.

"Making these claims will be MUCH easier than sorting them out – bots can use cloud providers all over the world to file claims, while companies like Automattic (Wordpress) or Twitter, or even projects like Wikipedia, would have to marshall vast armies to sort through the claims and remove the bad ones – and if they get it wrong and remove a legit copyright claim, they face unbelievable copyright liability."
http://www.wired.co.uk/article/eu-me...-13-regulation





'Disastrous' Copyright Bill Vote Approved
BBC

A committee of MEPs has voted to accept major changes to European copyright law, which experts say could change the nature of the internet.

They voted to approve the controversial Article 13, which critics warn could put an end to memes, remixes and other user-generated content.

Article 11, requiring online platforms to pay publishers a fee if they link to their news content, was also approved.

One organisation opposed to the changes called it a "dark day".

The European Parliament's Committee on Legal Affairs voted by 15 votes to 10 to adopt Article 13 and by 13 votes to 12 to adopt Article 11.

It will now go to the wider European Parliament to vote on in July.

'Censorship'

Last week, 70 influential tech leaders, including Vint Cerf and Tim Berners-Lee, signed a letter opposing Article 13, which they called "an imminent threat to the future" of the internet.

Article 13 puts more onus on websites to enforce copyright and could mean that every online platform that allows users to post text, sounds, code or images will need some form of content-recognition system to review all material that users upload.

Activist Cory Doctorow has called it a "foolish, terrible idea".

Writing on online news website BoingBoing, he said: "No filter exists that can even approximate this. And the closest equivalents are mostly run by American companies, meaning that US big tech is going to get to spy on everything Europeans post and decide what gets censored and what doesn't."

Article 11 has been called the "link tax" by opponents.

Designed to limit the power over news publishers that tech giants such as Facebook and Google have, it requires online platforms to pay publishers a fee if they link to their news content.

The theory is that this would help support smaller news publishers and drive users to their homepages rather than directly to their news stories.

But critics say it fails to clearly define what constitutes a link and could be manipulated by governments to curb freedom of speech.

After the vote, US not-for-profit organisation Creative Commons, which aims to make more content free for others to share, called it a "dark day for the open web".

@EP_Legal has adopted both Article 11 (#linktax) and Article 13 (#CensorshipMachines). It’s a dark day for the open web, but the fight will continue in the upcoming plenary vote in the European Parliament. #SaveYourInternet #SaveTheLink #FixCopyright
— Creative Commons (@creativecommons) June 20, 2018


Another Twitter user tweeted: "15 MEPs voted for upload filtering. They understand the internet better than the people who invented it, apparently."

15 MEPs in the @EP_Legal Committee of the @Europarl_EN voted for upload filtering. They understand the internet better than the people who invented it, apparently. #saveyourinternet #fixcopyright
— Joe (@why0hy) June 20, 2018


Open Rights executive director Jim Killock told the BBC: "Article 13 must go. The EU parliament will have another chance to remove this dreadful law.

"The EU parliament's duty is to defend citizens from unfair and unjust laws.

"MEPs must reject this law, which would create a robo-copyright regime intended to zap any image, text, meme or video that appears to include copyright material, even when it is entirely legal material."

But publishers, including the Independent Music Companies Association (Impala) welcomed the vote.

"This is a strong and unambiguous message sent by the European Parliament," said executive chair Helen Smith.

"It clarifies what the music sector has been saying for years: if you are in the business of distributing music or other creative works, you need a licence, clear and simple. It's time for the digital market to catch up with progress."
https://www.bbc.com/news/technology-44546620





Ajit Pai Now Trying To Pretend That Everybody Supported Net Neutrality Repeal
Karl Bode

By now it's abundantly clear that the Trump FCC's repeal of net neutrality was based largely on fluff and nonsense. From easily disproved claims that net neutrality protections stifled broadband investment, to claims that the rules would embolden dictators in North Korea and Iran, truth was an early and frequent casualty of the FCC's blatant effort to pander to some of the least competitive, least-liked companies in America (oh hi Comcast, didn't see you standing there). In fact, throughout the repeal, the FCC's media relations office frequently just directed reporters to telecom lobbyists should they have any pesky questions.

With the rules now passed and a court battle looming, FCC boss Ajit Pai has been making the rounds continuing his postmortem assault on stubborn facts. Like over at CNET, for example, where Ajit Pai informs readers in an editorial that he really adores a "free and open internet" despite having just killed rules supporting that very concept:

"I support a free and open internet. The internet should be an open platform where you are free to go where you want, and say and do what you want, without having to ask anyone's permission. And under the Federal Communications Commission's Restoring Internet Freedom Order, which takes effect Monday, the internet will be just such an open platform. Our framework will protect consumers and promote better, faster internet access and more competition."

'Course if you've paid attention, you know the FCC's remaining oversight framework does nothing of the sort, and is effectively little more than flimsy, voluntary commitments and pinky swears by ISPs that they promise to play nice with competitors. With limited competition, FCC regulatory oversight neutered, the FTC an ill-suited replacement, and ISPs threatening to sue states that try to stand up for consumers, there's not much left intact that can keep incumbent monopoly providers on their best behavior (barring the looming lawsuits and potential reversal of the rules).

Over in an interview with Marketplace, Pai again doubles down on repeated falsehoods, including a new claim that the repeal somehow had broad public support:

Marketplace: ...this is not a popular decision. Millions of people have written in opposition to it. Public opinion polling shows most Americans favor net neutrality, not your open internet rule. And I wonder why you're doing this then? If public opinion is against you, what are you doing?

Pai: First of all, public opinion is not against us. If you look at some of the polls —

Marketplace: No, it is, sir, come on.

Pai: If you look at some of the polling, if you dig down and see how these polls were constructed, it was clearly designed to reach a particular result. But even beyond that —

Marketplace: It's not just one, there are many surveys, sir.

Pai: The FCC’s job is not to put a finger in the wind and decide which way the winds are blowing, it's to look at the facts and make a sober judgment based on what the law is. And that is exactly what we've done here. Moreover, the long-term interest is in building better, faster, cheaper internet access. That is what consumers say when I travel around the country, and I’ve spoken to consumers from Los Angeles to the reservation in South Dakota, to places like Dahlonega, Georgia. That is what is on consumers’ minds. That is what this regulatory framework is going to deliver.


First Pai tries to claim that the public supported his repeal, then when pressed tries to claim that the polls that were conducted were somehow flawed. Neither is true. In fact, one recent survey out of the University of Maryland found that 82% of Republicans and 90% of Democrats opposed the FCC's obnoxiously-named "restoring internet freedom" repeal. And those numbers are higher than they were just a few years ago. That the public is overwhelmingly opposed to Pai's repeal is simply not debatable.

When discrediting the polls doesn't work, Pai then implies consumers aren't smart enough to realize that gutting oversight of indisputably terrible ISPs like Comcast will be secretly good for them. He then tries to insist that public opinion doesn't matter and that he's simply basing his policy decisions on cold, hard facts. Which, for a guy that claimed during the repeal that net neutrality aids fascist dictators, made up a DDOS attack, ignored countless widely respected internet experts and based his repeal entirely on debunked lobbyist data--is pretty amusing.

Whether Pai's repeated lies result in anything vaguely resembling accountability remains to be seen. But based on the amount of time Pai spends touring flyover country, it's pretty clear he's harboring some significant post-FCC political aspirations. Those ambitions are likely to run face-first into very real voters (especially of the Millennial variety) harboring some very real annoyance at his gutting of a healthy and open internet.
https://www.techdirt.com/articles/20...y-repeal.shtml

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

June 16th, June 9th, June 2nd, May 26th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black