P2P-Zone  

Old 21-11-18, 07:58 AM   #1
JackSpratts
 
Peer-To-Peer News - The Week In Review - November 24th, ’18

Since 2002

November 24th, 2018




The Media Industry and the “Make-Google-Pay” Fantasy

Within Google, some call into question the validity of helping the media industry. By lobbying the EU on the “link tax”, publishers are waging a rearguard battle. It could backfire.
Frederic Filloux

Google is increasingly divided over supporting the news industry, either through direct funding, or with a set of initiatives directly aimed at improving publishers’ bottom line.

At the company headquarters in Mountain View, the lingering question is: should we continue to help an industry that: (a) tends to consider us as another teller for subsidies, (b) has shown little ability to use the full extent of the toolbox we built, (c) has repeatedly turned against us by lobbying Brussels?

Right now, no one is happy with the recent turn of events. At the core of the resentment is Brussels’ recently adopted Copyright Directive and, more specifically, two of its provisions: Article 13 and Article 11.

Article 13 is essentially targeted at YouTube and similar platforms that host user-generated content. According to the new legislation, online services are immediately liable for any material uploaded by users that infringes copyright. The argument is that YouTube is loaded with content for which fees owed to copyright holders are rarely paid in full.

It would take a dozen Monday Notes to expose the full extent of the arguments on both sides. The general idea is that YouTube is too lax when it comes to enforcing copyrights. It is true that the abundance of content available on YouTube is staggering, to the point of being suspicious. If you are penniless, you don’t need to subscribe to a streaming service to fill your hard drive with hundreds of hours of good quality tunes, ranging from complete albums to full concerts. Does YouTube redistribute due fees on this content? At the very least, it has deployed a comprehensive arsenal to do so. When a copyright infringement is suspected, YouTube is generally swift to take down the content. Around 2007–2009, after a notorious lawsuit from Viacom that found scores of illegal pieces, YouTube implemented a system called Content ID that checks uploads against a database of copyrighted works and automatically collects payments.
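To make the mechanism concrete, here is a minimal sketch of the fingerprint-lookup idea behind a Content ID-style system. It is an illustration only: real Content ID matches perceptual audio/video fingerprints that survive re-encoding, while this toy uses exact hashes of fixed-size chunks, and all names in it are invented.

```python
import hashlib

# Toy registry mapping chunk fingerprints to rights holders.
# A real system would use perceptual fingerprints, not exact hashes.
registry = {}

def fingerprint(chunk: bytes) -> str:
    """Hash a fixed-size media chunk into a lookup key."""
    return hashlib.sha256(chunk).hexdigest()

def register_work(media: bytes, rights_holder: str, chunk_size: int = 4096):
    """Rights holder submits a reference copy; every chunk is indexed."""
    for i in range(0, len(media), chunk_size):
        registry[fingerprint(media[i:i + chunk_size])] = rights_holder

def scan_upload(media: bytes, chunk_size: int = 4096):
    """Return rights holders whose chunks appear in an upload."""
    matches = set()
    for i in range(0, len(media), chunk_size):
        holder = registry.get(fingerprint(media[i:i + chunk_size]))
        if holder:
            matches.add(holder)
    return matches  # matched holders can then block, track, or monetize

# Example: a label registers a track; an upload containing it is flagged.
register_work(b"...master recording bytes..." * 100, "Example Records")
print(scan_upload(b"...master recording bytes..." * 100))  # {'Example Records'}
```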

In an op-ed published on November 12 by the Financial Times, YouTube CEO Susan Wojcicki emphasises the efficiency of the system:

“More than 98 percent of copyright management on YouTube takes place through Content ID. To date, we have used the system to pay rights holders more than €2.5 billion for third party use of their content. We believe Content ID provides the best solution for managing rights on a global scale”.

She also suggests that the sheer size of YouTube, with 400 hours of video uploaded every minute, makes the task of enforcing copyrights nearly impossible. More broadly, hundreds of companies that deal with UGC could be affected by a narrow interpretation of the directive: blogging platforms (including Medium), dating sites, audio-sharing systems like SoundCloud, open-source repositories, review-powered sites (cooking, lodging, travel), etc. In fact, nearly every publisher is liable to end up hosting copyrighted content regardless of the precautions it takes. That is the argument invoked by a group of internet architects, including luminaries such as Tim Berners-Lee and Vint Cerf, who wrote an open letter (PDF here) that basically states that Article 13 is clamping down on the world wide web:

“By requiring Internet platforms to perform automatic filtering of all the content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance, and control of its users”.

Should Article 13 be applied in a restrictive manner, scores of lectures and tutorials containing fragments of copyrighted material would become technically illegal. The formidable richness of the internet is definitely at stake here. Should a YouTube channel about the techniques of cinematography be taken down because it dissects a series of unlicensed shots by David Fincher? What about this lecture about space exploration that didn’t ask NASA for permission to show clips of the ISS?

The digital news industry is marginally concerned by Article 13, even if it contributed to its adoption.

But for another piece of this controversial legislation, European media furiously lobbied Brussels for a result that could backfire spectacularly.

Article 11 contains a provision under which even the smallest fragment of a press publication opens a right to financial compensation. In the crosshairs are the snippets of articles that appear in a search engine results page (SERP); since Google is the dominant player in the search engine sector, it is fair to say that it is the main target.

Publishers who favor this legislation — most of them are in Europe — defend the following rationale: Google is making tons of money by displaying links and snippets on its search pages, and they want a piece of it.

This posture disregards four significant factors:

1. Traffic to news sites depends heavily on Google. It varies widely from one publication to another, but the search engine is by far the dominant traffic referrer: at the end of last year, Google search alone accounted for 38 percent of the visits sent to a sample of 250,000 mobile and desktop sites monitored by Shareaholic. Adding the traffic sent by Google News, for many news outlets the reliance on Google is above 50 percent, far outpacing the benefit of social networks (Facebook, mostly), whose share is dropping at a fast pace; at the end of last year, social accounted for 18 percent of visit referrals.

According to the analytics firm Chartbeat, summed up in the excellent Axios Media Trends newsletter produced by Sara Fischer:

• Twitter and Facebook have declined in their share of traffic sent to news sites.
• Facebook traffic to publishers is down so much (nearly 40%) that according to Chartbeat, “a user is now more likely to find your content through your mobile website or app than from Facebook.”
• Google Search on mobile has grown more than 2x, helping guide users to stories on publishers’ owned and operated channels.
• Direct mobile traffic to publishers’ websites and apps has also steadily grown by more than 30 percent.

The chart below is eloquent:

https://cdn-images-1.medium.com/max/...IhXipE26RgqUoz

2. The traffic sent by Google is so precious to publishers that, collectively, they spend millions of dollars and euros maintaining teams in charge of Search Engine Optimization. We can assume that these massive investments in SEO are primarily driven by a clever cost vs. benefit calculation.

3. Publishers themselves fostered the snippets/link-sharing industry by allowing RSS feeds. Blinded by the idea of traffic-at-all-costs, they profusely distributed their content through RSS feeds, in many cases giving away the full text of articles. In doing so, they contributed to the creation of the aggregation industry, entirely initiated by data-driven, agile digital natives. That was one major train (of many) missed by the news industry. Aggregators were crucial to the explosion of mobile applications that coalesce news content in a single and convenient place.

4. Copyright laws, especially the Berne Convention, already protect copyright holders thanks to a detailed (16,000 words!) legal arsenal.

At the core of this legal push is the idea that “Google should pay”.

This tune has been heard over and over. The rationale is that Google is selling ads against the snippets and that very few people click on the links displayed in SERPs.

The first argument is only partially true: in Google Search, if you enter a typical news-oriented query like “raqqa evacuation” or “trump california fires”, the results page will not carry any ads (there is no point in buying ads against such tragedies); but if you enter a consumer-related query like “best hair-dryer”, the SERP will return a few news sites but many more product-review outlets, and this results page will carry lots of ads. So the simplistic view that Google makes a lot of money on news snippets does not hold. For Google News, the case is settled: Google does not sell ads there.

As for the second idea — the proportion of clicks sent back to publishers — it varies widely. Research suggests that, as expected, the click-through rate (CTR) is unevenly distributed: roughly speaking, top positions (for general search) carry a CTR of around 30 percent, then it falls quickly to low single digits. Plus, the first page of results takes nearly all of it (90 percent of the clicks). Hence the battle to be at the top of the first SERP.

Having said that, publishers are right to point out that Google has failed to provide hard data on this. Except for a vague global number often quoted (a billion dollars or euros), we don’t know precisely how much value Google Search and Google News send back to publishers. The search engine has all the data, down to the dollar or euro, for every publisher: simply take the traffic sent, multiplied by the average revenue per page. The reason for this reticence in communicating lies in the engineering culture and in a sense of self-righteousness at the top of the company. Surely, it doesn’t help its cause. Everywhere in the US, Europe, or Asia, Google has great communication teams whose hands are tied.
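As a back-of-the-envelope illustration of that computation (traffic sent times average revenue per page), here is a sketch in which every input number is hypothetical:

```python
def value_sent_to_publisher(monthly_visits_from_google: float,
                            pages_per_visit: float,
                            revenue_per_1000_pages_eur: float) -> float:
    """Estimate the ad value Google referrals generate for one publisher:
    traffic sent x average revenue per page (the article's formula)."""
    pages = monthly_visits_from_google * pages_per_visit
    return pages * revenue_per_1000_pages_eur / 1000.0

# Hypothetical mid-size news site: 10M Google visits/month,
# 1.5 pages per visit, 6 EUR RPM (all numbers invented for illustration).
print(f"{value_sent_to_publisher(10e6, 1.5, 6.0):,.0f} EUR/month")  # 90,000
```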

Until now, Google has maintained a tough “we-will-never-pay-for-snippets” stance. Except that the obligation is now carved into European law, reigniting European publishers’ fantasy of a looming windfall.

In France, the hazy anti-Google feeling combines with a deep-rooted culture of subsidies — one that has severely hindered innovation in the French digital media industry. Now the prospect of yet another bonanza has everybody excited. Publishers fantasize about amounts of 50 to 60 million euros per year coming from the “link tax”. They might be disappointed.

First, Google is likely to resist, less because of the amount of money in play than because of the precedent it would set in Europe and across the world, which could easily translate into billions of dollars.

Second, in practical terms, making Google pay for snippets is not easy to implement. Who should be eligible? Legacy media? Any news outlet that maintains a newsroom, whatever its size? We are in for a long and laborious discussion. And that’s just for France, the gold standard of public and private subsidies. How would the concept be adapted to the other 27 members of the European Union? And what about other territories that will ask for the inevitable Most Favored Nation clause?

Hence the emergence of hard-liners who toy with the idea of killing Google News altogether and de-indexing news content in search. It’s a credible threat: Google did just that in 2014 when Spain passed similar legislation and, as expected, traffic plummeted. In Germany, a comparable anti-snippet law was rendered powerless by a group of publishers who did the math and offered their content for free in exchange for the usual stream of clicks sent back to them.

More broadly, many see the EU legislation as harmful to the entire ecosystem. Last April, a group of 229 academics in the fields of law and intellectual property signed an appeal to the European Parliament denouncing what they see as “a bad piece of legislation”, to no avail.

By pushing for the “link tax”, publishers are shooting themselves in the foot three times over. One, there is a tangible risk that Google opts for the Spanish/German jurisprudence. Two, the optics will look terrible: by persisting in collecting a small revenue from snippets, publishers will seem to be waging a rearguard battle. Three, the news publishing world has more appealing options when it comes to working with Google on improving the economics of its ecosystem. The search giant is already investing hundreds of millions of dollars in technologies that could directly, or indirectly, benefit the news media. So far, publishers haven’t used the full extent of it. It’s time to “think different”.
https://mondaynote.com/the-media-ind...y-1b4de36e3b04





ESPN Lost 2 Million Subscribers to Cord Cutting this Year

Disney's cable channels suffered losses across the board.
Saqib Shah

We're getting a clearer picture of the devastation cord-cutting has wrought to cable with the release of Disney's annual earnings report. It shows that ESPN lost 2 million subscribers in the past 12 months alone, with its base declining from 88 million in 2017 to 86 million.

Though it wasn't the only Disney mainstay that took a hit this year -- the Disney Channel, Disney Junior and Disney XD all lost 3 million subs, while Freeform shed 2 million -- ESPN's decline is more a metric of how many people have quit cable, since it's in nearly every package. As Variety notes, Disney isn't the only media behemoth to be burned by cord-cutting: the entirety of cable's leading lights have suffered subscriber losses.

But Disney already has a contingency plan in the form of ESPN+, which offers live and archived sports streams from regional networks, excluding any local blackouts, for a $5 monthly fee. The service nabbed 1 million subs in just five months, helping to offset some of Disney's cable losses. There's also NHL.TV (the online home of out-of-market ice hockey games) and skinny bundles that carry ESPN (like Disney's part-owned Hulu Live TV, and YouTube TV, Sling TV, and DirecTV Now, among others).

If you're a sports fan thinking of cutting the cord, check out our guide to live sports streaming.
https://www.engadget.com/2018/11/22/...n-subscribers/





This SIM Card Forces All of Your Mobile Data Through Tor

"This is about sticking a middle finger up to mobile filtering, mass surveillance."
Joseph Cox

We're all constantly on our phones, but maybe you want to visit that website or check that social media account without revealing more information about where you are.

Using the Tor anonymity network on a mobile phone, which would mask your IP address from the site you’re browsing, is fairly painless nowadays, with a connection being simply an app away. But that sort of software is typically designed for web browsing, and not for use with other apps such as Twitter, which still could leak your IP address.

With that in mind, one UK grassroots internet service provider is currently testing a data-only SIM card that blocks any non-Tor traffic from leaving the phone at all, potentially providing a more robust way to use Tor while on the go.

“This is about sticking a middle finger up to mobile filtering, mass surveillance,” Gareth Llewelyn, founder of Brass Horn Communications, told Motherboard in an online chat. Brass Horn is a non-profit internet service provider with a focus on privacy and anti-surveillance services.

Got a tip? You can contact Joseph Cox securely on Signal on +44 20 8133 5190, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.

Tor is a piece of software and a related network run by volunteers. When someone runs Tor on their computer or phone, it routes their traffic through multiple servers before reaching its final destination, such as a website. That way, the website owner can’t tell who is visiting; only that someone is connecting from Tor. The most common way people access Tor is with the Tor Browser Bundle on desktop, or with the Orbot app on Android.
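The layered routing can be illustrated with a toy version of the "onion" idea: the sender wraps the message once per relay, and each relay can peel exactly one layer, so no single relay sees both the sender and the destination. This sketch uses the `cryptography` package's Fernet cipher purely for illustration; real Tor negotiates per-hop keys over TLS and does far more.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Each relay holds one symmetric key (toy stand-in for negotiated circuit keys).
relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit

def wrap(message: bytes, keys) -> bytes:
    """Sender encrypts for the exit relay first, entry relay last."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def relay_peel(onion: bytes, key: bytes) -> bytes:
    """Each relay removes exactly one layer; it cannot read inner layers."""
    return Fernet(key).decrypt(onion)

onion = wrap(b"GET /article HTTP/1.1", relay_keys)
for key in relay_keys:            # traverse entry -> middle -> exit
    onion = relay_peel(onion, key)
print(onion)  # b'GET /article HTTP/1.1' emerges only at the exit relay
```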

But, in some cases, neither of these totally guarantee that all of your device’s traffic will be routed through Tor. If you’re using the Tor Browser Bundle on a laptop, and then go to use another piece of software, that app is probably not going to use Tor. The same might stand for Orbot running on older iterations of Android. Nathan Freitas, from The Guardian Project which maintains Orbot, said with newer versions of Android, you can lock down device traffic to only work if a specific VPN is activated, including Orbot’s.

This SIM card, however, is supposed to provide a more restricted solution in the event that other approaches don’t quite work.

“The key point is that it is a failsafe, if you don’t have Tor up then nothing can get to the internet,” Llewelyn said.

Brass Horn has previously offered customers a Tor-only service, but at the ISP level, designed to make it impossible for Brass Horn to keep any logs of a subscriber’s web browsing. This was largely in response to the UK’s recently passed mass surveillance legislation, the Investigatory Powers Act, part of which compels ISPs to keep so-called internet connection records—browsing and usage data—of their customers for 12 months.

The new SIM card, which is still in a beta testing stage, takes that idea mobile. It requires some setup; users need to create a new access point name on their device—essentially so the device can connect to the new network—but Brass Horn provides some instructions to do this. The SIM also requires Orbot to be installed and running on the device itself, and it currently only works in the UK (Llewelyn provided Motherboard with one of the SIM cards for testing purposes; Motherboard confirmed that the SIM does transfer data).
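For readers who want to verify such a setup themselves, the Tor Project runs a small JSON endpoint reporting whether a connection exits through Tor. A minimal sketch (the fail-closed design described above means the request should either report true or fail to connect at all):

```python
import json
import urllib.request

def exits_through_tor(timeout: float = 10.0) -> bool:
    """Ask the Tor Project's check service whether this connection
    reaches the internet via a Tor exit node."""
    with urllib.request.urlopen("https://check.torproject.org/api/ip",
                                timeout=timeout) as resp:
        data = json.load(resp)
    return bool(data.get("IsTor"))

try:
    print("Tor exit:", exits_through_tor())
except OSError:
    # On a Tor-only SIM with Orbot down, non-Tor traffic is blocked,
    # so the request should fail rather than leak.
    print("No route to the internet (nothing leaked)")
```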

“At a high level, I think a Tor-only SIM card is a great idea,” Freitas added. “If Facebook can sell SIM-cards that only connect to their approved 'Zero rated' sites, then why not have a privacy-oriented alternative that only allows Tor?”

“Technically, this is also the correct approach—don't auto-tunnel all connections through Tor, but instead ensure non-Tor traffic doesn't leak. Unfortunately, this would only provide that assurance on a mobile data connection, and not WiFi,” he added.

It won’t be for everyone—as Freitas also points out, some users may need to use some apps through a non-Tor connection (Twitter, for instance, could block a user connecting from Tor, mistaking it for suspicious activity). But for those who still want to mask their traffic, and essentially movements, while using data on the move, it may be useful.
https://motherboard.vice.com/en_us/a...communications





'Infuriating': Trump FCC Refusing to Release Data Showing If Telecom Industry Being Truthful About Internet Speeds

"Without this information, consumers who are lucky enough to have a choice of broadband providers won't be able to make informed decisions about which broadband provider to choose."
Jessica Corbett

Under Trump-appointee Ajit Pai, the Federal Communications Commission (FCC) has continued a program to track whether major companies like AT&T, Comcast, Spectrum, and Verizon are providing their promised internet speeds, but has failed to publish any of its findings—concealment that has raised alarm among tech reporters and former agency officials.

"The only reason I can think of is that the data doesn't promote the chairman's narrative that broadband industry investment and performance allegedly suffered when it was subject to net neutrality rules grounded in Title II of the Communications Act."
—Gigi Sohn, former FCC lawyer

"The only reason I can think of is that the data doesn't promote the chairman's narrative that broadband industry investment and performance allegedly suffered when it was subject to net neutrality rules grounded in Title II of the Communications Act," former agency lawyer and adviser Gigi Sohn told Motherboard, referencing Pai's defense of a party-line vote that repealed the rules last year.

As Ars Technica pointed out Monday, from when the FCC launched the Measuring Broadband America program in 2011 until 2016, the agency monitored the in-house internet of thousands of customers across the country and released annual reports comparing actual speeds to those advertised by internet service providers (ISPs).

However, since Pai—an ex-lobbyist who worked for Verizon—became FCC chairman nearly two years ago and started building a reputation for prioritizing industry interests over the common good, the agency not only has failed to release a report, it also allegedly has dodged Freedom of Information Act (FOIA) requests about the program's status or recent findings.


Detailing the run-around that the FCC has given Ars Technica in its quest for more information, the outlet reported that the agency failed to meet its self-imposed, repeatedly delayed deadline of Oct. 25—at which point an FCC staffer claimed via email that "at this time, we do not know how long this process will take and cannot give you a due date."

Meanwhile, Alex Salter, the chief executive of SamKnows, the company the FCC uses to take broadband measurements, confirmed that the program still exists, monitoring speeds in 6,000 to 10,000 American households, and "there's no particular reason that [he] could identify" for why a report hasn't been released under the Trump administration.

Salter added that there is a report currently awaiting FCC approval, which could be released next month—but, he said, "obviously we don't control that because it has to go through a whole series of approvals."

Former FCC official Blair Levin, who oversaw development of the National Broadband Plan, told Ars Technica, "I don't want to speculate on why the FCC has not produced the data, as I can think of many potential reasons and don't know enough to say what is most likely to be true," but many possible motivators are political.

"There is always a danger that a government agency moves forward with a preset agenda and uses cherry-picked data to try to justify the decision," Levin noted. "The fact that the FCC is not reporting, on a regular basis, the data is evidence—not conclusive but evidence nonetheless—that the FCC could be moving away from being a data-driven expert agency."

Such a move could have significant consequences for consumers, Sohn warned. "Without this information, consumers who are lucky enough to have a choice of broadband providers won't be able to make informed decisions about which broadband provider to choose," she said. "This information is also vital for policy makers seeking to enforce federal and state laws protecting consumers from unfair and deceptive trade practices."

As Lauren Goode, a senior writer at Wired, concluded on Twitter, "This is infuriating."

https://www.commondreams.org/news/20...industry-being





Ajit Pai Wants to Raise Rural Broadband Speeds from 10Mbps to 25Mbps

FCC-funded rural broadband currently requires download speed of just 10Mbps.
Jon Brodkin

The Federal Communications Commission is planning to raise the rural broadband standard from 10Mbps to 25Mbps in a move that would require faster Internet speeds in certain government-subsidized networks.

The FCC's Connect America Fund (CAF) distributes more than $1.5 billion a year to AT&T, CenturyLink, and other carriers to bring broadband to sparsely populated areas. Carriers that use CAF money to build networks must provide speeds of at least 10Mbps for downloads and 1Mbps for uploads. The minimum speed requirement was last raised in December 2014.

Today, FCC Chairman Ajit Pai said he's proposing raising that standard from 10Mbps/1Mbps to 25Mbps/3Mbps. "[W]e're recognizing that rural Americans need and deserve high-quality services by increasing the target speeds for subsidized deployments from 10/1 Mbps to 25/3 Mbps," Pai wrote in a blog post that describes agenda items for the FCC's December 12 meeting.

"[T]he program should support high-quality services; rural Americans deserve services that are comparable to those in urban areas," Pai also wrote.

CAF (also known as the "high-cost program") is part of the Universal Service Fund, which is paid for by Americans through fees on their phone bills.

The new 25Mbps/3Mbps standard will apply to future projects but won't necessarily apply to broadband projects that are already receiving funding. For ongoing projects, the FCC will use incentives to try to raise speeds. More money will be offered to carriers that agree to upgrade speeds to 25Mbps/3Mbps, a senior FCC official said in a conference call with reporters.

FCC will offer “guaranteed revenue stream”

Pai also said that carriers accepting CAF money will have the option of receiving a guaranteed revenue stream for 10 years. Pai wrote:

First, we're working to promote efficiency by moving away from simply telling rate-of-return carriers what their allowable costs and return on investment will be and toward setting broad goals for deployment and rewarding companies for being efficient in meeting those goals (what's called an "incentive-based" model). Specifically, we're offering rate-of-return carriers another opportunity to opt in to model-based support, which would give them a guaranteed revenue stream for a decade in exchange for meeting specified buildout requirements. Second, we're ensuring support is sufficient by offering additional funding to carriers that currently receive model-based support and who agree to meet increased buildout requirements. We're also increasing funding for carriers who do not receive model-based support... we're [also] making the program more predictable by setting a new long-term budget for rate-of-return carriers who choose not to opt in to model-based support and ending arbitrary funding cuts.

To provide the guaranteed revenue stream, Pai's proposal would reverse scheduled budget cuts and adjust the program budget in future years, an FCC official said.

Pai said his proposals will "stretch taxpayer dollars as far as possible" and make sure that subsidies are "sufficient to build out networks; after all, these are areas where the business case for private investment is lacking." FCC subsidies should also be more predictable than in previous years, because "building networks is a serious long-term proposition, not a one-time whim," Pai wrote.

When Democrat Tom Wheeler was FCC chair, Pai supported the commission's 2014 decision to raise the speed benchmark from 4Mbps/1Mbps to 10Mbps/1Mbps but said that the FCC should have also provided carriers with more years of funding to account for the upgrade.

Pai opposed Wheeler's 2015 decision to raise a nationwide broadband standard to 25Mbps/3Mbps. Pai said at the time that 25Mbps/3Mbps was too high and criticized the Wheeler-led majority for using different standards, namely the 25Mbps/3Mbps standard for judging nationwide broadband deployment progress and the lower standard in rural projects subsidized by the government. As chair, Pai in 2017 floated a proposal that would lower broadband standards, but he changed course after a backlash.

Despite Pai's claim that repealing net neutrality rules and other regulations will spur broadband deployment, Charter and Verizon both said this year that they're reducing capital expenditures. Broadband lobby groups USTelecom and NTCA recently argued that Internet service is similar to utilities such as electricity and gas distribution. The lobby groups also said that the government should provide more money to private companies to close the rural broadband gap. They complained that "US broadband infrastructure has been financed largely by the private sector without assurance that such costs can be recovered through increased consumer rates."
https://arstechnica.com/tech-policy/...bps-to-25mbps/





US Wireless Data Prices Are Among the Most Expensive on Earth

US consumers pay “excessive” prices for mobile data, research firm warns, adding that with looming mergers—it could soon get worse.
Karl Bode

A new study has found that US wireless consumers pay some of the highest prices for mobile data in the developed world. According to a new study from Finnish research firm Rewheel, the US mobile data market has the fifth most expensive price per gigabyte smartphone plans among developed nations, and was the most expensive for mobile data overall.

While the report notes that mobile data prices have dropped 11 percent during the last six months in the States, US mobile data pricing remained significantly higher than in 41 European Union and Organization for Economic Co-operation and Development countries.

Normally, having four major wireless carriers helps boost competition, in turn lowering prices. But the Rewheel report was quick to note that the often stunted level of competition seen in US wireless is more akin to countries where there’s just three major players.

“Even though there are 4 mobile network operators present in the market, US gigabyte prices are not competitive,” the researchers said. “The US is an outlier four mobile network operator market with much higher prices that are typical to three mobile network operator tight oligopoly markets.”

While competition from T-Mobile recently helped drive carriers like AT&T and Verizon back to “unlimited” data plans and away from more tightly metered options, genuine price competition in the US market tends to be theatrical in nature.

Meanwhile, a monopoly over business data connectivity generally keeps consumer mobile prices high. According to the FCC's own data, 73 percent of the special access market (which feeds everything from ATMs to cellular towers) is controlled by one ISP. This varies depending on the market, but it’s usually AT&T, Verizon, or CenturyLink.

These high prices to connect to cellular towers then impact pricing for the end user and smaller competitors, as those same competitors and consumer groups have long argued.

These critics have also argued that Ajit Pai’s FCC recently made these problems worse by lifting price caps on this uncompetitive sector, something he justified by literally weakening the very definition of competition. Monopolies nobody wants to fix and regulators beholden to an industry they’re supposed to hold accountable go a long way toward explaining the US ranking.

All told, the study found that US mobile data pricing was four times more expensive than prices in many four-competitor European Union countries, and sixteen times more expensive than in large, competitive four-competitor European markets.

The firm noted that while smartphone data pricing was high, mobile hotspot pricing was significantly worse. For example, Rewheel noted that while Verizon charges users $710 per month for its 100 gigabyte mobile hotspot plan, that same plan costs between €10 and €20 (between $11 and $23) per month in several European countries.
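The gap is starker expressed as price per gigabyte. A quick arithmetic check using the figures cited above:

```python
# Figures from the Rewheel comparison cited above.
verizon_usd = 710.0          # Verizon 100 GB mobile hotspot plan, per month
eu_usd_low, eu_usd_high = 11.0, 23.0   # same 100 GB tier in several EU countries
gigabytes = 100.0

print(f"Verizon: ${verizon_usd / gigabytes:.2f}/GB")          # $7.10/GB
print(f"EU:      ${eu_usd_low / gigabytes:.2f}-"
      f"${eu_usd_high / gigabytes:.2f}/GB")                   # $0.11-$0.23/GB
print(f"Ratio:   {verizon_usd / eu_usd_high:.0f}x-"
      f"{verizon_usd / eu_usd_low:.0f}x more expensive")      # 31x-65x
```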

The group was quick to note that the problem could actually get worse as the country’s third and fourth largest carriers (T-Mobile and Sprint, respectively) push a merger nobody really asked for.

While both Sprint and T-Mobile have claimed the merger will somehow increase jobs, boost competition, and lower rates, consumer groups have charged that historically the opposite happens. Redundant jobs are quickly eliminated, and any incentive to actually compete on price is reduced proportionally as the market drops from four to three primary carriers.

As such, the research firm argued the United States might want to be wary about any potential mega merger “synergy” promises being bandied about by Sprint and T-Mobile executives.

“Judging from the excessive gigabyte prices US operators are charging today for 4G mobile broadband...merger promises concerning affordable 5G home broadband should be critically reviewed and if verified must be made binding,” the research firm warned.

With the government currently taking a more rubber stamp approach to telecom oversight in the Ajit Pai era, that isn’t likely to happen.

Pai’s recent repeal of net neutrality—if it survives next February’s court battle—is likely to open the door to entirely new, creative surcharges and penalties on what’s already some of the most expensive mobile data plans in the world.
https://motherboard.vice.com/en_us/a...nsive-on-earth





How a Small French Privacy Ruling Could Remake Adtech for Good
Natasha Lomas

A ruling in late October against a little-known French adtech firm that popped up on the national data watchdog’s website earlier this month is causing ripples of excitement to run through privacy watchers in Europe who believe it signals the beginning of the end for creepy online ads.

The excitement is palpable.

Impressively so, given the dry CNIL decision against mobile “demand side platform” Vectaury was only published in the regulator’s native dense French legalese.

Here is the bombshell though: Consent through the @IABEurope framework is inherently invalid. Not because of a technical detail. Not because of an implementation aspect that could be fixed. No.
You cannot pass consent to another controller through a contractual relationship. BOOM

— Robin Berjon (@robinberjon) November 16, 2018


Digital advertising trade press AdExchanger picked up on the decision yesterday.

Here’s the killer paragraph from CNIL’s ruling — translated into “rough English” by my TC colleague Romain Dillet:

The requirement based on the article 7 above-mentioned isn’t fulfilled with a contractual clause that guarantees validly collected initial consent. The company VECTAURY should be able to show, for all data that it is processing, the validity of the expressed consent.

In plainer English, this is being interpreted by data experts as the regulator stating that consent to processing personal data cannot be gained through a framework arrangement which bundles a number of uses behind a single “I agree” button that, when clicked, passes consent to partners via a contractual relationship.

CNIL’s decision suggests that bundling consent to partner processing in a contract is not, in and of itself, valid consent under the European Union’s General Data Protection Regulation (GDPR) framework.

Consent under this regime must be specific, informed and freely given. It says as much in the text of GDPR.

But now, on top of that, the CNIL’s ruling suggests a data controller has to be able to demonstrate the validity of the consent — so cannot simply tuck consent inside a contractual “carpet-bag” that gets passed around to everyone else in their chain as soon as the user clicks “I agree.”

This is important, because many widely used digital advertising consent frameworks rolled out to websites in Europe this year — in claimed compliance with GDPR — are using a contractual route to obtain consent, and bundling partner processing behind often hideously labyrinthine consent flows.
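One way to see why the contractual "carpet-bag" fails is to sketch what a demonstrable consent record would have to contain: one specific purpose, one named controller, and proof of what the user actually saw. The field names below are illustrative, not a legal checklist:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """One demonstrable consent: specific, informed, freely given."""
    user_id: str
    controller: str        # the specific company doing the processing
    purpose: str           # one specific purpose, not a bundle
    ui_text_shown: str     # what the user actually read
    pre_ticked: bool       # pre-ticked boxes invalidate consent
    timestamp: str

def consent_is_demonstrable(record: ConsentRecord, controller: str,
                            purpose: str) -> bool:
    """CNIL's point: the processing company itself must be able to show
    valid consent -- a clause in a partner contract is not enough."""
    return (record.controller == controller
            and record.purpose == purpose
            and not record.pre_ticked)

# A bundled "I agree" click recorded by a publisher cannot produce a
# record naming Vectaury and a specific purpose, so the check fails.
bundled = ConsentRecord("u1", "publisher-app", "ads (and 300 partners)",
                        "I agree", pre_ticked=True, timestamp="2018-04-01")
print(consent_is_demonstrable(bundled, "Vectaury", "geolocated ad targeting"))
# False
```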

The experience for web users in the EU right now is not great. But it could be leading to a much better internet down the road.

Where’s the consent for partner processing?

Even on a surface level the current crop of confusing consent mazes look problematic.

But the CNIL ruling suggests there are deeper and more structural problems lurking and embedded within. And as regulators dig in and start to unpick adtech contradictions it could force a change of mindset across the entire ecosystem.

As ever, when talking about consent and online ads the overarching point to remember is that no consumer given a genuine full disclosure about what’s being done with their personal data in the name of behavioral advertising would freely consent to personal details being hawked and traded across the web just so a bunch of third parties can bag a profit share.

This is why, despite GDPR being in force (since May 25), there are still so many tortuously confusing “consent flows” in play.

The longstanding online T&Cs trick of obfuscating and socially engineering consent remains an unfortunately standard playbook. But, less than six months into GDPR we’re still very much in a “phoney war” phase. More regulatory rulings are needed to lay down the rules by actually enforcing the law.

And CNIL’s recent activity suggests more to come.

In the Vectaury case, the mobile ad firm used a template framework for its consent flow that had been created by industry trade association and standards body, IAB Europe.

It did make some of its own choices, using its own wording on an initial consent screen and pre-ticking the purposes (another big GDPR no-no). But the bundling of data purposes behind a single opt in/out button is the core IAB Europe design. So CNIL’s ruling suggests there could be trouble ahead for other users of the template.

IAB Europe’s CEO, Townsend Feehan, told us it’s working on a statement in response to the CNIL decision, but suggested Vectaury fell foul of the regulator because it may not have implemented the “Transparency & Consent Framework-compliant” consent management platform (CMP) framework — as it’s tortuously known — correctly.

So either “the ‘CMP’ that they implemented did not align to our Policies, or choices they could have made in the implementation of their CMP that would have facilitated compliance with the GDPR were not made,” she suggested to us via email.

Though that sidesteps the contractual crux point that’s really exciting privacy advocates — and that has them pointing to the CNIL as having slammed the first of many unbolted doors.

The French watchdog has made a handful of other decisions in recent months, also involving geolocation-harvesting adtech firms, and also for processing data without consent.

So regulatory activity on the GDPR+adtech front has been ticking up.

Its decision to publish these rulings suggests it has wider concerns about the scale and privacy risks of current programmatic ad practices in the mobile space than can be attached to any single player.

So the suggestion is that just publishing the rulings looks intended to put the industry on notice…

The decision also notes that the @CNIL is openly using this to inform not just the company in question but whole ecosystem, including adtech of course but also app makers who embed ads and marketers who use them. You're all on notice!

— Robin Berjon (@robinberjon) November 16, 2018


Meanwhile, adtech giant Google has also made itself unpopular with publisher “partners” over its approach to GDPR by forcing them to collect consent on its behalf. And in May a group of European and international publishers complained that Google was imposing unfair terms on them.

The CNIL decision could sharpen that complaint too — raising questions over whether audits of publishers that Google said it would carry out will be enough for the arrangement to pass regulatory muster.

This rules the @IABEurope out as an option, but more than that: @Google forced publishers to collect consent on its behalf for advertising profiling. They have said that they will audit that publishers do it right — but will auditing be enough?

— Robin Berjon (@robinberjon) November 16, 2018


For a demand-side platform like Vectaury, which was acting on behalf of more than 32,000 partner mobile apps with user eyeballs to trade for ad cash, achieving GDPR compliance would mean either asking users for genuine consent and/or having a very large number of contracts on which it’s doing actual due diligence.

Yet Google is orders of magnitude more massive, of course.

The Vectaury file gives us a fascinating little glimpse into adtech “business as usual.” Business which also wasn’t, in the regulator’s view, legal.

The firm was harvesting a bunch of personal data (including people’s location and device IDs) on its partners’ mobile users via an SDK embedded in their apps, and receiving bids for these users’ eyeballs via another standard piece of the programmatic advertising pipe — ad exchanges and supply side platforms — which also get passed personal data so they can broadcast it widely via the online ad world’s real-time bidding (RTB) system. That’s to solicit potential advertisers’ bids for the attention of the individual app user… The wider the personal data gets spread, the more potential ad bids.

That scale is how programmatic works. It also looks horrible from a GDPR “privacy by design and default” standpoint.

The sprawling process of programmatic explains the very long list of “partners” nested non-transparently behind the average publisher’s online consent flow. The industry, as it is shaped now, literally trades on personal data.

So if the consent rug it’s been squatting on for years suddenly gets ripped out from underneath it, there would need to be a radical reshaping of ad-targeting practices to avoid trampling on EU citizens’ fundamental rights.

GDPR’s really big change was supersized fines. So ignoring the law would get very expensive.

Oh hai real-time bidding!

In Vectaury’s case, CNIL discovered the company was holding the personal data of a staggering 67.6 million people when it conducted an on-site inspection of the company in April 2018.

That already sounds like A LOT of data for a small mobile adtech player. Yet it might actually have been a tiny fraction of the personal data the company was routinely handling — given that Vectaury’s own website claims 70 percent of collected data is not stored.

In the decision there was no fine, but CNIL ordered the firm to delete all data it had not already deleted (having judged collection illegal given consent was not valid); and to stop processing data without consent.

But given the personal-data-based hinge of current-gen programmatic adtech, that essentially looks like an order to go out of business. (Or at least out of that business.)

And now we come to another interesting GDPR adtech complaint that’s not yet been ruled on by the two DPAs in question (Ireland and the U.K.) — but which looks even more compelling in light of the CNIL Vectaury decision because it picks at the adtech scab even more daringly.

Filed last month with the Irish Data Protection Commission and the U.K.’s ICO, this adtech complaint — the work of three individuals, Johnny Ryan of private web browser Brave; Jim Killock, exec director of digital and civil rights group, the Open Rights Group; and University College London data protection researcher, Michael Veale — targets the RTB system itself.

Here’s how Ryan, Killock and Veale summarized the complaint when they announced it last month:

Every time a person visits a website and is shown a “behavioural” ad on a website, intimate personal data that describes each visitor, and what they are watching online, is broadcast to tens or hundreds of companies. Advertising technology companies broadcast these data widely in order to solicit potential advertisers’ bids for the attention of the specific individual visiting the website.

A data breach occurs because this broadcast, known as a “bid request” in the online industry, fails to protect these intimate data against unauthorized access. Under the GDPR this is unlawful.

The GDPR, Article 5, paragraph 1, point f, requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss.” If you can not protect data in this way, then the GDPR says you can not process the data.


Ryan tells TechCrunch that the crux of the complaint is not related to the legal basis of the data sharing but rather focuses on the processing itself — arguing “that it itself is not adequately secure… that there aren’t adequate controls.”

Though he says there’s a consent element too, and so sees the CNIL ruling bolstering the RTB complaint. (On that keep in mind that CNIL judged Vectaury should not have been holding the RTB data of 67.6M people because it did not have valid consent.)

“We do pick up on the issue of consent in the complaint. And this particular CNIL decision has a bearing on both of those issues,” he argues. “It demonstrates in a concrete example that involved investigators going into physical premises and checking the machines — it demonstrates that even one small company was receiving tens of millions of people’s personal data in this illegal way.

“So the breach is very real. And it demonstrates that it’s not unreasonable to suggest that the consent is meaningless in any case.”

Reaching for a handy visual explainer, he continues: “If I leave a briefcase full of personal data in the middle of Charing Cross station at 11am and it’s really busy, that’s a breach. That would have been a breach back in the 1970s. If my business model is to drive up to Charing Cross station with a dump-truck and dump briefcases onto the street at 11am in the full knowledge that my business partners will all scramble around and try and grab them — and then to turn up at 11.01am and do the same thing. And then 11.02am. And every microsecond in between. That’s still a fucking data breach!

“It doesn’t matter if you think you’ve consent or anything else. You have to [comply with GDPR Article 5, paragraph 1, point f] in order to even be able to ask for a legal basis. There are plenty of other problems but that’s the biggest one that we highlighted. That’s our reason for saying this is a breach.”

“Now what CNIL has said is this company, Vectaury, was processing personal data that it did not lawfully have — and it got them through RTB,” he adds, spelling the point out. “So back to the GDPR — GDPR is saying you can’t process data in a way that doesn’t ensure protection against unauthorized or unlawful processing.”

In other words, RTB as a funnel for processing personal data looks to be on inherently shaky ground because it’s inherently putting all this personal data out there and at risk…

What’s bad for data brokers…

In another loop back, Ryan says the regulators have been in touch since their RTB complaint was filed to invite them to submit more information.

He says the CNIL Vectaury decision will be incorporated into further submissions, predicting: “This is going to be bounced around multiple regulators.”

The trio is keen to generate extra bounce by working with NGOs to enlist other individuals to file similar complaints in other EU Member States — to make the action a pan-European push, just like programmatic advertising itself.

“We now have the opportunity to connect our complaint with the excellent work that Privacy International has done, showing where these data end up, and with the excellent work that CNIL has done showing exactly how this actually applies. And this decision from CNIL takes, essentially my report that went with our complaint and shows exactly how that applies in the real world,” he continues.

“I was writing in the abstract — CNIL has now made a decision that is very much not in the abstract, it’s in the real world affecting millions of people… This will be a European-wide complaint.”

But what does programmatic advertising that doesn’t entail trading on people’s grubbily obtained personal data actually look like? If there were no personal data in bid requests, Ryan believes quite a few things would happen. Such as, for example, the demise of clickbait.

“There would be no way to take your TechCrunch audience and buy it cheaper on some shitty website. There would be no more of that arbitrage stuff. Clickbait would die! All that nasty stuff would go away,” he suggests.

(And, well, full disclosure: We are TechCrunch — so we can confirm that does sound really great to us!)

He also reckons ad values would go up. Which would also be good news for publishers. (“Because the only place you could buy the TechCrunch audience would be on TechCrunch — that’s a really big deal!”)

He even suggests ad fraud might shrink because the incentives would shift. Or at least they could so long as the “worthy” publishers that are able to survive in the new ad world order don’t end up being complicit with bot fraud anyway.

As it stands, publishers are being screwed between the twin plates of the dominant adtech platforms (Google and Facebook), where they are having to give up a majority of their ad revenue — leaving the media industry with a shrinking slice of ad revenues (that can be as lean as ~30 percent).

That then has a knock on impact on funding newsrooms and quality journalism. And, well, on the wider web too — given all the weird incentives that operate in today’s big tech social media platform-dominated internet.

While a privacy-sucking programmatic monster is something only shadowy background data brokers that lack any meaningful relationships with the people whose data they’re feeding the beast could truly love.

And, well, Google and Facebook.

Ryan’s view is that the reason an adtech duopoly exists boils down to the “audience leakage” being enabled by RTB. Leakage which, in his view, also isn’t compliant with EU privacy laws.

He reckons the fix for this problem is equally simple: Keep doing RTB but without any personal data.

A real-time ad bidding system that’s been stripped of personal data does not mean no targeted ads. It could still support ad targeting based on real-time factors such as an approximate location (say to a city region) and/or generic and aggregated data.

Crucially it would not use unique identifiers that enable linking ad bids to an individual’s entire digital footprint and bid request history — as is the case now. Which essentially translates into: RIP privacy rights.
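A sketch of what that would mean at the level of a single bid request. The fields below are simplified and illustrative (loosely inspired by the shape of OpenRTB-style requests, not an actual spec): stripping unique identifiers and coarsening location removes the personal data while keeping the real-time, contextual signals.

```python
import copy

# Simplified bid request as broadcast today (illustrative fields).
bid_request = {
    "device_id": "3f2a9c...-advertising-id",   # links bids to one person
    "ip": "86.201.34.17",
    "geo": {"lat": 48.8566, "lon": 2.3522},    # precise location
    "page": "https://techcrunch.com/some-article",
    "categories": ["technology", "startups"],  # contextual, not personal
}

def strip_personal_data(req: dict) -> dict:
    """RTB without personal data: drop unique IDs and precise geo,
    keep real-time contextual signals (the fix Ryan proposes)."""
    clean = copy.deepcopy(req)
    clean.pop("device_id", None)
    clean.pop("ip", None)
    clean["geo"] = {"city_region": "Paris"}    # coarse location only
    return clean

print(strip_personal_data(bid_request))
# Advertisers can still bid on "tech reader, Paris, right now" --
# they just can't link this impression to a person's bid history.
```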

Ryan argues that RTB without personal data would still offer plenty of “value” to advertisers — who could still reach people based on general locations and via real-time interests. (It’s a model that sounds much like what privacy search engine DuckDuckGo is doing, and which has also been growing.)

The really big problem, though, is turning the behavioral ad tanker around. Given that the ecosystem is embedded, even as the duopoly milks it.

That’s also why Ryan is so hopeful now, though, having parsed the CNIL decision.

His reading is regulators will play a decisive role in pushing the ad industry’s trigger — and force through much-needed change in their targeting behavior.

“Unless the entire industry moves together, no one can be the first to remove personal data from bid requests but if the regulators step in in a big way… and say you’re all going to go out of business if you keep putting personal data into bid requests then everyone will come together — like the music industry was forced to eventually, under Steve Jobs,” he argues. “Everyone can together decide on a new short term disadvantageous but long term highly advantageous change.”

Of course such a radical reshaping is not going to happen overnight. Regulatory triggers tend to be slow motion unfoldings at the best of times. You also have to factor in the inexorable legal challenges.

But look closely and you’ll see both momentum massing behind privacy — and regulatory writing on the wall.

“Are we going to see programmatic forced to be non-personal and therefore better for every single citizen of the world (except, say, if they work for a data broker),” adds Ryan, posing his own concluding question. “Will that massive change, which will help society and the web… will that change happen before Christmas? No. But it’s worth working on. And it’s going to take some time.

“It could be two years from now that we have the finality. But a finality there will be. Detroit was only able to fight against regulation for so long. It does come.”

Who’d have thought “taking back control” could ever sound so good?
https://techcrunch.com/2018/11/20/ho...tech-for-good/





The Present Phase of Stagnation in the Foundations of Physics is Not Normal
Sabine Hossenfelder

Nothing is moving in the foundations of physics. One experiment after the other is returning null results: No new particles, no new dimensions, no new symmetries. Sure, there are some anomalies in the data here and there, and maybe one of them will turn out to be real news. But experimentalists are just poking in the dark. They have no clue where new physics may be to find. And their colleagues in theory development are of no help.

Some have called it a crisis. But I don’t think “crisis” describes the current situation well: Crisis is so optimistic. It raises the impression that theorists realized the error of their ways, that change is on the way, that they are waking up now and will abandon their flawed methodology. But I see no awakening. The self-reflection in the community is zero, zilch, nada, nichts, null. They just keep doing what they’ve been doing for 40 years, blathering about naturalness and multiverses and shifting their “predictions,” once again, to the next larger particle collider.

I think stagnation describes it better. And let me be clear that the problem with this stagnation is not with the experiments. The problem is loads of wrong predictions from theoretical physicists.

The problem is also not that we lack data. We have data in abundance. But all the data are well explained by the existing theories – the standard model of particle physics and the cosmological concordance model. Still, we know that’s not it. The current theories are incomplete.

We know this both because dark matter is merely a placeholder for something we don’t understand, and because the mathematical formulation of particle physics is incompatible with the math we use for gravity. Physicists knew about these two problems already in the 1930s. And until the 1970s, they made great progress. But since then, theory development in the foundations of physics has stalled. If experiments find anything new now, that will be despite, not because of, some tens of thousands of wrong predictions.

Tens of thousands of wrong predictions sounds dramatic, but it’s actually an underestimate. I am merely summing up predictions that have been made for physics beyond the standard model which the Large Hadron Collider (LHC) was supposed to find: all the extra dimensions in their multiple shapes and configurations, all the pretty symmetry groups, all the new particles with the fancy names. You can estimate the total number of such predictions by counting the papers, or, alternatively, the people working in the fields and their average productivity.

They were all wrong. Even if the LHC finds something new in the data that is yet to come, we already know that the theorists’ guesses did not work out. Not. A. Single. One. How much more evidence do they need that their methods are not working?

This long phase of lacking progress is unprecedented. Yes, it has taken something like two thousand years from the first conjecture of atoms by Democritus to their actual detection. But that’s because for most of those two thousand years people had other things to do than contemplating the structure of elementary matter. Like, for example, how to build houses that don’t collapse on you. For this reason, quoting chronological time is meaningless. We had better look at the actual working time of physicists.

I have some numbers for you on that too. Oh, yes, I love numbers. They’re so factual.

According to membership data from the American Physical Society and the German Physical Society, the total number of physicists has increased by a factor of roughly 100 between the years 1900 and 2000.* Most of these physicists do not work in the foundations of physics. But as far as publication activity is concerned, the various subfields of physics grow at roughly comparable rates. And (leaving aside some bumps and dents around the second world war) the increase in the number of publications, as well as in the number of authors, is roughly exponential.

Now let us assume for the sake of simplicity that physicists today work as many hours per week as they did 100 years ago – the details don’t matter all that much given that the growth is exponential. Then we can ask: How much working time starting today corresponds to, say, 40 years of working time starting 100 years ago? Have a guess!

Answer: About 14 months. Going by working hours only, physicists today should be able to do in 14 months what a century earlier took 40 years.
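For the record, here is the arithmetic behind that answer, assuming (as the post does) a workforce growing exponentially by a factor of 100 per century and constant hours per physicist:

```latex
% Workforce: N(t) = N_0 e^{rt}, with a factor 100 per century:
N(t) = N_0\, e^{rt}, \qquad e^{100r} = 100
  \;\Rightarrow\; r = \tfrac{\ln 100}{100} \approx 0.046\ \mathrm{yr}^{-1}

% Cumulative work done in the 40 years starting at t = 0 (year 1900):
W = \int_{0}^{40} N_0\, e^{rt}\, dt
  = \frac{N_0}{r}\left(e^{40r} - 1\right) \approx 115\, N_0

% Same amount of work starting today (t = 100), solved for the duration T:
\frac{N_0}{r}\, e^{100r}\left(e^{rT} - 1\right) = W
  \;\Rightarrow\; e^{rT} = 1 + \frac{e^{40r} - 1}{100} \approx 1.053
  \;\Rightarrow\; T \approx 1.1\ \text{yr} \approx 14\ \text{months}
```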

Of course you can object that progress doesn’t scale that easily, for despite all the talk about collective intelligence, research is still done by individuals. This means processing time can’t be decreased arbitrarily by simply hiring more people. Individuals still need time to exchange and comprehend each other’s insights. On the other hand, we have also greatly increased the speed and ease of information transfer, and we now use computers to aid human thought. In any case, if you want to argue that hiring more people will not aid progress, then why hire them?

So, no, I am not serious about this estimate, but it explains why the argument that the current stagnation is not unprecedented is ill-informed. We are today investing more in the foundations of physics than ever before. And yet nothing is coming out of it. That’s a problem, and it’s a problem we should talk about.

I’ve recently been told that the use of machine learning to analyze LHC data signals a rethinking in the community. But that isn’t so. To begin with, particle physicists have used machine learning tools to analyze data for at least three decades. They use them more now because they have become easier to use, because everyone else does it, and because Nature News writes about it. And they would have done it either way, even if the LHC had found new particles. So, no, machine learning in particle physics is not a sign of rethinking.

Another comment-not-a-question I constantly have to endure is that I supposedly only complain but don’t have any better advice for what physicists should do.

First, it’s a stupid criticism that tells you more about the person criticizing than the person being criticized. Suppose I were criticizing not a group of physicists but a group of architects. If I inform the public that those architects spent 40 years building houses that all fell to pieces, why would it be my task to come up with a better way to build houses?

Second, it’s not true. I have spelled out many times very clearly what theoretical physicists should do differently. It’s just that they don’t like my answer. They should stop trying to solve problems that don’t exist. That a theory isn’t pretty is not a problem. Focus on mathematically well-defined problems, that’s what I am saying. And, for heaven’s sake, stop rewarding scientists for working on what is popular with their colleagues.

I don’t take this advice out of nowhere. If you look at the history of physics, it was working on the hard mathematical problems that led to breakthroughs. If you look at the sociology of science, bad incentives create substantial inefficiencies. If you look at the psychology of science, no one likes change.

Developing new methodologies is harder than inventing new particles by the dozen, which is why they don’t like to hear my conclusions. Any change will reduce the paper output, and they don’t want that. It’s not institutional pressure that creates this resistance; it’s that scientists themselves don’t want to move their butts.

How long can they go on with this, you ask? How long can they keep on spinning theory-tales?

I am afraid there is nothing that can stop them. They review each other’s papers. They review each other’s grant proposals. And they constantly tell each other that what they are doing is good science. Why should they stop? For them, all is going well. They hold conferences, they publish papers, they discuss their great new ideas. From the inside, it looks like business as usual, just that nothing comes out of it.

This is not a problem that will go away by itself.
http://backreaction.blogspot.com/201...nation-in.html





Court Again Rules That Cable Giants Can't Weaponize The First Amendment
Karl Bode

Over the last few years, telecom giants have increasingly claimed that pretty much any effort to hold them accountable for their terrible service (or anything else) is a violation of their First Amendment rights. Historically that hasn't gone so well. For example, courts generally laughed off ISP lawyers' claims that net neutrality violated their free speech rights, quite correctly highlighting that ISPs are simply conduits to information, not editors of speech by virtue of whatever they block or filter.

Charter Spectrum, the nation's second biggest cable operator, has been leaning on this argument a lot lately as it fights off state lawsuits over terrible service. It recently played the First Amendment card again in a legal battle with Byron Allen's Entertainment Studios Networks (ESN), which accused Charter of violating the Civil Rights Act of 1866 by refusing to carry TV channels run by the African-American-owned company.

Charter tried to have the suit dismissed by claiming that the First Amendment bars such claims because the company enjoys "editorial discretion," but the ruling (pdf) by the U.S. Court of Appeals for the Ninth Circuit didn't agree. The court noted that while ISPs and cable companies do enjoy some First Amendment protection, it doesn't apply here, just as it didn't apply in the net neutrality fight:

"As part of its defense, Charter had told the court that by choosing which channels to carry, the company was engaging in a form of editorial discretion protected by the First Amendment. Therefore, it said, the court would have to use a stricter standard to evaluate Entertainment Studios’ claim of a legal violation — a standard that might result in the claim being rejected.

The Ninth Circuit said otherwise, saying that just because Charter engages in corporate speech when it selects which channels to carry does not “automatically” require the court to use the tougher standard.

As a result, the court is letting the case move forward. For its part, ESN alleges that its discrimination complaint is based on more than just having its channels withheld from the company's cable lineup:

"The opinion on Charter’s motion to dismiss also marks a victory for the 25-year-old programming firm founded by comedian Byron Allen, which bought the Weather Channel in March and accused Charter executives in court of hurling racist insults at Allen and other black Americans in numerous encounters. In one alleged instance, Charter chief executive Tom Rutledge called Allen, who is black, “boy” at an industry conference and advised him to change his behavior, according to court documents. In another alleged example, the court said, Charter’s senior executive in charge of programming, Allan Singer, approached a group of black protesters outside Charter’s offices to tell them to “get off of welfare."

Consumer groups like Public Knowledge were quick to applaud the ruling, happy to see another effort to "weaponize" the First Amendment shot down. The district court will now proceed to determine whether Charter actually engaged in racially discriminatory conduct.

"Holding us accountable for absolutely anything violates our free speech" rights was historically something telecom lobbyists often just throw at a wall in a bid to see if it sticks. But these efforts have escalated in the last few years. For example FCC staffers under Ajit Pai, at this point nearly indistinguishable from big telecom lobbying efforts, have even tried to claim that community-run broadband (an organic, voter-approved response to terrible service) is a threat to free speech, a charge there's absolutely zero supporting evidence for.

It's a legal argument giant ISPs have also embraced more recently, in large part because they hope that new Justice Brett Kavanaugh, who bought into some of these arguments during previous net neutrality battles, will ultimately be a deciding vote should these fights wind their way to the Supreme Court. So far these efforts haven't worked out all that well, and while that's not likely to change when the net neutrality court fight kicks off next February, it could become an important issue should that fight reach the highest court in the land.
https://www.techdirt.com/articles/20...mendment.shtml


Until next week,

- js.


Current Week In Review





Recent WiRs -

November 17th, November 10th, November 3rd, October 27th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black