

20-04-22, 06:35 AM
JackSpratts
Join Date: May 2001
Location: New England
Posts: 9,927
Default Peer-To-Peer News - The Week In Review - April 23rd, 22

Since 2002

April 23rd, 2022

California Net Neutrality Law to Remain Intact after Appeals Court Says it Won’t Reconsider Earlier Decision

California’s net neutrality law is considered the toughest in the US
Kim Lyons

A federal appeals court has denied a request for a rehearing of its January decision upholding California’s net neutrality law. The 2018 law, widely considered the strongest in the US, was signed a year after the Federal Communications Commission (FCC) repealed the Open Internet Order. That order had established stringent net neutrality rules that prohibited internet service providers from throttling or blocking legal websites and apps, and banned ISPs from prioritizing paid content.

California’s law, which finally took effect last year, also prohibits throttling and paid fast lanes. Trade associations including the NCTA and the CTIA, along with ISPs including Comcast, Verizon, and AT&T, sued to block California’s law from taking effect, arguing that the FCC decision should preempt the state law. But that challenge was rejected by a district court judge. The Ninth Circuit voted 3-0 in January to uphold the lower court ruling, saying the FCC “no longer has the authority” to regulate broadband internet services because the agency reclassified them as “information services” instead of “telecommunications services.” The FCC therefore cannot preempt the state action.

FCC chairwoman Jessica Rosenworcel praised the decision on Twitter, reiterating her position that she wants to see net neutrality become “the law of the land” again. The FCC can’t currently reinstate net neutrality at the federal level, however, since the commission lacks a majority: its two Democrats and two Republicans remain deadlocked on the issue. President Biden’s FCC nominee Gigi Sohn is still awaiting a confirmation vote in the Senate.

The CTIA did not immediately reply to a request for comment on Thursday, while the NCTA’s senior vice president of strategic communications Brian Dietz said that organization had no comment on the decision. If the telecom companies want to continue pursuing the matter, the next step would be an appeal to the US Supreme Court to hear the case.

The Unreasonable Fight for Municipal Broadband

I love Longmont’s municipal broadband, but we had to fight Comcast every step of the way to get it.
Tyler Cipriani

As a gracious person, I’ll dutifully pretend that the problem might be on my side of the Zoom call. But the problem is never me—my internet is just too good.

Since 2016, I’ve been using Longmont’s municipal broadband—NextLight—and it’s been objectively awesome.

• It’s fast—1Gbps symmetric
• It’s cheap(ish)—$49.95 per month
• It’s rock-solid—I’ve never had an outage
• I’ve never hit a data cap
• PC Magazine ranks NextLight among the top five ISPs in the United States every year—often besting Google Fiber

And, even better, the city insists I’ll pay $49.95 forever.

But Longmont spent more than a decade fighting Comcast to provide this excellent internet. And any city working on its municipal broadband offering should prepare to do the same.

Canceling Comcast

In 2016, I was paying Comcast $150 a month for their top-tier (at the time) 100Mbps speeds. I could usually eke out a little more than 30Mbps on a good day.

I leapt at the chance for gigabit internet. I signed up for NextLight as a charter member—locking in $49.95/mo for life (the current price is only $69.95 last I checked). And I took the opportunity to upgrade my $20 router from 2008 at the same time.
My new fancy router—Ubiquiti EdgeRouter Lite + EdgeSwitch 16-POE

And when I returned my cable box to Comcast to cancel my service, the representative felt compelled to counsel me: “NextLight, huh? You know,” he said, leaning in, “if you miss even a single payment, they’ll raise your price?”

“I’ll take my chances.”

I’ve been automatically billed $49.95 every month since, and this is what my speed looks like this morning:
930Mbps is not quite 1Gbps, but I’ll take it
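For the curious, most of the gap between a nominal 1 Gbps line rate and a ~930 Mbps speed test is protocol overhead. A rough sketch, assuming standard 1500-byte Ethernet frames carrying TCP over IPv4 with no TCP options:

```python
# Rough ceiling on TCP goodput over a 1 Gbps Ethernet link.
# Assumes a 1500-byte MTU, IPv4 + TCP headers without options, and
# standard Ethernet framing overhead (header, FCS, preamble, gap).
mtu = 1500
ip_tcp_headers = 20 + 20              # IPv4 header + TCP header
frame_overhead = 14 + 4 + 8 + 12      # Ethernet hdr + FCS + preamble + IFG
payload = mtu - ip_tcp_headers        # 1460 bytes of application data
goodput_mbps = 1000 * payload / (mtu + frame_overhead)
print(f"~{goodput_mbps:.0f} Mbps TCP goodput ceiling")
```

That works out to roughly 949 Mbps as a theoretical best case; real-world tests land a little lower, which makes 930 Mbps a healthy result.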

Why can’t we have nice things?

Big cable companies suck. Big cable companies burned hundreds of thousands of dollars to stop Longmont’s municipal broadband.

In 2005, Comcast and CenturyLink rammed through the egregious Colorado SB-05-152, prohibiting municipalities in the state of Colorado from offering telecommunication services.

Longmont had to hold two referendums on the measure—one in 2009, which failed, and another in 2011, which passed:

In 2009, “No Blank Check Longmont” (Comcast/CenturyLink) spent $250,000 to dash our dreams of municipal broadband. They framed it as a choice between fast internet vs. police and firefighters.

In 2011, “Look Before You Leap Longmont” (Comcast/CenturyLink) spent $300,000 urging us to rethink our municipal broadband plans. They stood in lone opposition to our unanimous city council and our local paper.

Comcast spent $500,000 in a tiny city of fewer than 100,000 people. You can be sure Comcast will do all this again in a heartbeat.


Rather than use their vast resources to improve their service, Comcast will spend big to ensure they never have to compete.

Let Longmont be a lesson. In 2011, Longmont won because it formed an honest citizens’ advisory group: Longmont’s Future. Longmont’s Future got the word out about the vote on Facebook, its website, and the local press.

And ever since, Longmonsters (that’s right—Longmont’s demonym is “Longmonster”) have chosen NextLight over competing services.

Real competition won. Fuck Comcast. Long live municipal broadband.

10 Gigabit Internet Is Coming Within a Decade

DOCSIS 4 and hybrid fiber coax will enable the next step of the Internet revolution. But there are some obstacles
Anton Shilov

Almost two decades ago, widely available 500 Mbps – 1 Gbps Internet connections opened the door to services we could barely dream of. For the next step of Internet evolution, higher speeds are needed. CableLabs, the company that heads development of the DOCSIS protocol used by cable networks, already has technology that will enable home or office Internet connections at 10 Gbps, but to make them widespread this decade, it needs assistance from industry peers.

CableLabs' DOCSIS 3.1 and DOCSIS 4.0 standards already support up to 10 Gbps maximum downstream speed as well as 1 – 2 Gbps or 6 Gbps upstream speeds, respectively. To support such extreme data rates over long ranges, the technologies use the full spectrum of the cable (0 MHz to ~1.80 GHz, DOCSIS 4.0 only), 4096 quadrature amplitude modulation, narrower (25 kHz or 50 kHz wide) orthogonal frequency-division multiplexing (OFDM) sub-carriers that can be bonded inside a block spectrum, and a number of other innovations.
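As a back-of-the-envelope check on those numbers (a sketch only — it ignores forward error correction, guard bands, and the upstream/downstream split of the spectrum):

```python
from math import log2

# Idealized downstream ceiling for DOCSIS 4.0: the full ~1.8 GHz cable
# spectrum, with 4096-QAM (12 bits/symbol) on every 50 kHz OFDM sub-carrier.
spectrum_hz = 1.8e9
subcarrier_hz = 50e3
bits_per_symbol = log2(4096)               # 12 bits per 4096-QAM symbol

subcarriers = spectrum_hz / subcarrier_hz  # 36,000 sub-carriers
# Each sub-carrier signals at roughly its spacing in symbols/sec, so the
# raw rate is simply spectrum * bits-per-symbol in this idealized case.
raw_gbps = spectrum_hz * bits_per_symbol / 1e9
print(f"{subcarriers:,.0f} sub-carriers, ~{raw_gbps:.1f} Gbps raw")
```

That idealized ~21.6 Gbps is why 10 Gbps of usable downstream is plausible once coding overhead and upstream spectrum are carved out.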

But while DOCSIS 3.1 is used by several cable companies in the U.S. (albeit at around 3 Gbps download speeds), DOCSIS 4.0 has yet to be widely supported. DOCSIS 4.0 is not going to become a mainstream technology overnight, but many companies are working on solutions to make 10 Gbps Internet connections a reality within this decade, reports ZDNet.

First up, to build network equipment that supports full-duplex DOCSIS 4.0 protocol, one needs appropriate modem system-on-chips. Last year Broadcom and Comcast successfully transferred data at a 4 Gbps data rate over a lab-based hybrid fiber-coaxial (HFC) network using a prototype DOCSIS 4.0 modem based on Broadcom's SoC as well as two cable modem chips and Comcast's virtual cable modem termination system (vCMTS).

Theoretically, it is possible to build a DOCSIS 4.0-supporting modem today, but the problem is that DOCSIS 4.0 requires an optical fiber network, or a slightly cheaper hybrid fiber-coaxial (HFC) network, just to show its potential.

Armstrong, a U.S. cable company, launched a 10 Gbps fiber-optic network in Medina, Ohio, to deliver this extremely fast connectivity to more than 3,000 businesses and residences. Meanwhile, Charter Communications has demonstrated higher than 8.5 Gbps downstream and 6 Gbps upstream speeds over an existing HFC network.

Building new infrastructure takes time and money. To make it easier for operators to adopt DOCSIS 4.0, CableLabs has developed a new device called the Coherent Termination Device (CTD) that teams up coherent optics and wavelength-division multiplexing (WDM) in the optical access network to increase the efficiency of existing fiber optics networks and therefore increase data transfer rates. While the technology works, it is unknown how fast it will be adopted by the industry.

"While we don't know what the future holds, we do know that the internet will play a vital role in shaping it," said Phil McKinney, CableLabs president and CEO. "The 10G platform and its applications [will] create a better future for humanity."

Europe Seals a Deal on Tighter Rules for Digital Services
Natasha Lomas

In the small hours local time, European Union lawmakers secured a provisional deal on a landmark update to rules for digital services operating in the region — grabbing political agreement after a final late night/early morning of compromise talks on the detail of what is a major retooling of the bloc’s existing ecommerce rulebook.

The political agreement on the Digital Services Act (DSA) paves the way for formal adoption in the coming weeks and the legislation entering into force — likely later this year. The rules won’t start to apply until 15 months after that, though, so there’s a fairly long lead time to allow companies to adapt.

The regulation is wide ranging — setting out to harmonize content moderation and other governance rules to speed up the removal of illegal content and products. It addresses a grab-bag of consumer protection and privacy concerns, as well as introducing algorithmic accountability requirements for large platforms to dial up societal accountability around their services, while ‘KYC’ (know your customer) requirements are intended to do the same for online marketplaces.

How effective the package will be is of course tbc, but the legislation that was agreed today goes further than the Commission proposal in a number of areas — with, for example, the European Parliament pushing to add in limits on tracking-based advertising.

A prohibition on the use of so-called ‘dark patterns’ for online platforms is also included — but not, it appears, a full blanket ban for all types of digital service (per details of the final text shared with TechCrunch via our sources).

See below for a fuller breakdown of what we know so far about what’s been agreed.

The DSA was presented as a draft proposal by the Commission back in December 2020 which means it’s taken some 16 months of discussion — looping in the other branches of the EU: the directly elected European Parliament and the Council, which represents EU Member States’ governments — to reach this morning’s accord.

After last month’s deal on the Digital Markets Act (DMA), which selectively targets the most powerful intermediating platforms (aka gatekeepers) with an ex ante, pro-competition regime, EU policy watchers may be forgiven for a little euphoria at the (relative) speed with which substantial updates to digital rules are being agreed.

Big Tech’s lobbying of the EU over this period has been of an unprecedented scale in monetary terms. Notably, giants like Google have also sought to insert themselves into the ‘last mile’ stage of discussions where EU institutions are supposed to shut themselves off from external pressures to reach a compromise, as a report published earlier today by Corporate Europe Observatory underlines. That illustrates what they believe is at stake.

The full impact of Google et al.’s lobbying won’t be clear for months or even years. But, at the least, Big Tech’s lobbyists were not successful in entirely blocking the passage of the two major digital regulations — so the EU is saved from an embarrassing repeat of the (stalled) ePrivacy update, which may indicate that regional lawmakers are wising up to the tech industry’s tactics. Or, well, that Big Tech’s promises are not as shiny and popular as they used to be.

The Commission’s mantra for the DSA has always been that the goal is to ensure that what’s illegal offline will be illegal online. And in a video message tweeted out in the small hours local time, a tired but happy looking EVP, Margrethe Vestager, said it’s “not a slogan anymore that what’s illegal offline should also be seen and dealt with online”.

“Now it is a real thing,” she added. “Democracy’s back.”

It’s a wrap! We have a deal on the #DSA! Two years after we tabled the proposal. Thanks @SchaldemoseMEP and @cedric_o – and our amazing teams – for great cooperation. pic.twitter.com/8V8xE5Yw7w

— Margrethe Vestager (@vestager) April 23, 2022

In a statement, Commission president Ursula von der Leyen added:

“Today’s agreement on the Digital Services Act is historic, both in terms of speed and of substance. The DSA will upgrade the ground-rules for all online services in the EU. It will ensure that the online environment remains a safe space, safeguarding freedom of expression and opportunities for digital businesses. It gives practical effect to the principle that what is illegal offline, should be illegal online. The greater the size, the greater the responsibilities of online platforms. Today’s agreement — complementing the political agreement on the Digital Markets Act last month — sends a strong signal: to all Europeans, to all EU businesses, and to our international counterparts.”

In its own press release, the Council called the DSA “a world first in the field of digital regulation”.

While the parliament said the “landmark rules… effectively tackle the spread of illegal content online and protect people’s fundamental rights in the digital sphere”.

In a statement, its rapporteur for the file, MEP Christel Schaldemose, further suggested the DSA will “set new global standards”, adding: “Citizens will have better control over how their data are used by online platforms and big tech-companies. We have finally made sure that what is illegal offline is also illegal online. For the European Parliament, additional obligations on algorithmic transparency and disinformation are important achievements. These new rules also guarantee more choice for users and new obligations for platforms on targeted ads, including bans to target minors and restricting data harvesting for profiling.”

Other EU lawmakers are fast dubbing the DSA a “European constitution for the Internet”. And it’s hard not to see the gap between the EU and the US on comprehensive digital lawmaking as increasingly gaping.

Vestager’s victory message notably echoes encouragement tweeted out earlier this week by the former US secretary of state, senator, first lady and presidential candidate, Hillary Clinton, who urged Europe to get the DSA across the line and “bolster global democracy before it’s too late”, as she put it, adding: “For too long, tech platforms have amplified disinformation and extremism with no accountability. The EU is poised to do something about it.”

DSA: What’s been agreed?

In their respective press releases trumpeting the deal, the parliament and Council have provided an overview of areas of key elements of the regulation they’ve agreed.

It’s worth emphasizing that the full and final text hasn’t been published yet — and won’t be for a while. It’s pending legal checks and translation into the bloc’s many languages — which means the full detail of the regulation and the implication of all its nuance remains tbc.

But here’s an overview of what we know so far…

Scope, supervision & penalties

On scope, the Council says the DSA will apply to all online intermediaries providing services in the EU.

The regulation’s obligations are intended to be proportionate to the nature of the services concerned and the number of users — with extra, “more stringent” requirements for “very large online platforms” (aka VLOPs) and very large online search engines (VLOSEs).

Services with more than 45M monthly active users in the EU will be considered VLOPs or VLOSEs. So plenty of services will reach that bar — including, for example, the homegrown music streaming giant Spotify.

“To safeguard the development of start-ups and smaller enterprises in the internal market, micro and small enterprises with under 45 million monthly active users in the EU will be exempted from certain new obligations,” the Council adds.

The Commission itself will be responsible for supervising VLOPs and VLOSEs for the obligations that are specific to them — which is intended to avoid bottlenecks in oversight and enforcement against larger platforms (as happened with the EU’s GDPR).

But national agencies at the Member State level will supervise the wider scope of the DSA — so EU lawmakers say this arrangement maintains the country-of-origin principle that’s baked into existing digital rules.

Penalties for breaches of the DSA can scale up to 6% of global annual turnover.

Per the parliament, there will also be a right for recipients of digital services to seek redress for any damages or loss suffered due to infringements by platforms.

Content moderation & marketplace rules

The content moderation measures are focused on harmonizing rules to ensure “swift” removal of illegal content.

This is being done through what the parliament describes as a “clearer ‘notice and action’ procedure” — where “users will be empowered to report illegal content online and online platforms will have to act quickly”, as it puts it.

It also flags support for victims of cyber violence — who it says will be “better protected especially against non-consensual sharing (revenge porn) with immediate takedowns”.

MEPs say fundamental rights are protected from the risk of over-removal — a risk created by the regulation’s pressure on platforms to act quickly — through “stronger safeguards to ensure notices are processed in a non-arbitrary and non-discriminatory manner and with respect for fundamental rights, including the freedom of expression and data protection”.

The regulation is also intended to ensure swift removal of illegal products/services from online marketplaces. So there are new requirements incoming for ecommerce players.

On this, the Council says the DSA will impose a “duty of care” on marketplaces vis-à-vis sellers who sell products or services on their online platforms.

“Marketplaces will in particular have to collect and display information on the products and services sold in order to ensure that consumers are properly informed,” it notes, although there will be plenty of devil in the detail of the exact provisions.

On this, the parliament says marketplaces will “have to ensure that consumers can purchase safe products or services online by strengthening checks to prove that the information provided by traders is reliable (‘Know Your Business Customer’ principle) and make efforts to prevent illegal content appearing on their platforms, including through random checks”.

Random checks on traders/goods had been pushed for by consumer protection organizations — who had been concerned the measure would be dropped during trilogues — so EU lawmakers appear to have listened to those concerns.

Extra obligations for VLOPs/VLOSEs

These larger platform entities will face scrutiny of how their algorithms work from the European Commission and Member State agencies — which the parliament says will both have access to the algorithms of VLOPs.

The DSA also introduces an obligation for very large digital platforms and services to analyse “systemic risks they create” and to carry out “risk reduction analysis”, per the Council.

The analysis must be done annually — which the Council suggests will allow for monitoring of, and reduced risks in, areas such as: the dissemination of illegal content; adverse effects on fundamental rights; manipulation of services having an impact on democratic processes and public security; adverse effects related to gender-based violence and to minors; and serious consequences for the physical or mental health of users.

Additionally, VLOPs/VLOSEs will be subject to independent audits each year, per the parliament.

Large platforms that use algorithms to determine what content users see (aka “recommender systems”) will have to provide at least one option that is not based on profiling. Many already do — although they often also undermine these choices by applying dark pattern techniques to nudge users away from control over their feeds, so holistic supervision will be needed to meaningfully improve user agency.

There will also be transparency requirements for the parameters of these recommender systems with the goal of improving information for users and any choices they make. Again, the detail will be interesting to see there.

Limits on targeted advertising

Restrictions on tracking-based advertising appear to have survived the trilogue process with all sides reaching agreement on a ban on processing minors’ data for targeted ads.

This applies to platforms accessible to minors “when they are aware that a user is a minor”, per the Council.

“Platforms will be prohibited from presenting targeted advertising based on the use of minors’ personal data as defined in EU law,” it adds.

A final compromise text shared with TechCrunch by our sources suggests the DSA will stipulate that providers of online platforms should not do profile based advertising “when they are aware with reasonable certainty that the recipient of the service is a minor”.

A restriction on the use of sensitive data for targeted ads has also made it into the text.

The parliament sums this up by saying “targeted advertising is banned when it comes to sensitive data (e.g. based on sexual orientation, religion, ethnicity)”.

Inside a European push to outlaw creepy ads

The wording of the final compromise text which we’ve seen states that: “Providers of online platforms shall not present advertising to recipients of the service based on profiling within the meaning of Article 4(4) of Regulation 2016/679 [aka, the GDPR] using special categories of personal data as referred to in article 9(1) of Regulation 2016/679.”

Article 4(4) of the GDPR defines ‘profiling’ as: “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements;”.

While the GDPR defines special category data as personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as biometric and health data, data on sex life and/or sexual orientation.

So targeting ads based on tracking or inferring users’ sensitive interests is — on paper — facing a hard ban in the DSA.
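A hypothetical pre-serve check sketching the two restrictions described above — no profiling-based ads using special-category data, and no profiling-based ads for known minors. The function name and category labels here are illustrative, not taken from the DSA or GDPR text:

```python
# Illustrative labels loosely following the GDPR Article 9(1) list of
# special categories of personal data.
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin", "political_opinions", "religious_beliefs",
    "philosophical_beliefs", "trade_union_membership", "biometric_data",
    "health_data", "sex_life", "sexual_orientation",
}

def ad_targeting_allowed(segments: set[str], uses_profiling: bool,
                         known_minor: bool) -> bool:
    """Return False for ads the DSA rules (as described) would prohibit."""
    if uses_profiling and known_minor:
        return False  # no profiling-based ads when the user is known to be a minor
    if uses_profiling and segments & SPECIAL_CATEGORIES:
        return False  # hard ban on profiling with sensitive data
    return True       # contextual (non-profiling) ads are unaffected
```

Note that contextual advertising passes both checks — the restrictions, as agreed, bite only on profiling.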

Ban on use of dark patterns

A prohibition on dark patterns also made it into the text. But, as we understand it, this only applies to “online platforms” — so it does not look like a blanket ban across all types of apps and digital services.

That is unfortunate. Unethical practices shouldn’t be acceptable no matter the size of the business.

On dark patterns, the parliament says: “Online platforms and marketplaces should not nudge people into using their services, for example by giving more prominence to a particular choice or urging the recipient to change their choice via interfering pop-ups. Moreover, cancelling a subscription for a service should become as easy as subscribing to it.”

The wording of the final compromise text that we’ve seen says that: “Providers of online platforms shall not design, organize or operate their online interfaces in a way that deceives, manipulates or otherwise materially distorts or impairs the ability of recipients of their service to make free and informed decisions” — after which there’s an exemption for practices already covered by Directive 2005/29/EC [aka the Unfair Commercial Practices Directive] and by the GDPR.

The final compromise text we reviewed further notes that the Commission may issue guidance on specific practices — such as platforms giving more prominence to certain choices, repeatedly requesting a user makes a choice after they already have and making it harder to terminate a service than sign up. So the effectiveness of the dark pattern ban could well come down to how much attention the Commission is willing to give to a massively widespread online problem.

The wording of the associated recital in the final compromise we saw also specifies that the dark pattern ban (only) applies for “intermediary services”.

Crisis mechanism

An entirely new article was added to the DSA following Russia’s invasion of Ukraine — and in connection with rising concern around the impact of online disinformation. It creates a crisis response mechanism that will give the Commission extra powers to scrutinize VLOPs/VLOSEs in order to analyze the impact of their activities on the crisis in question.

The EU’s executive will also be able to come up with what the Council bills as “proportionate and effective measures to be put in place for the respect of fundamental rights”.

The mechanism will be activated by the Commission on the recommendation of the board of national Digital Services Coordinators.

American Phone-Tracking Firm Demo’d Surveillance Powers by Spying on CIA and NSA

Anomaly Six, a secretive government contractor, claims to monitor the movements of billions of phones around the world and unmask spies with the press of a button.
Sam Biddle, Jack Poulson

In the months leading up to Russia’s invasion of Ukraine, two obscure American startups met to discuss a potential surveillance partnership that would merge the ability to track the movements of billions of people via their phones with a constant stream of data purchased directly from Twitter. According to Brendon Clark of Anomaly Six — or “A6” — the combination of its cellphone location-tracking technology with the social media surveillance provided by Zignal Labs would permit the U.S. government to effortlessly spy on Russian forces as they amassed along the Ukrainian border, or similarly track Chinese nuclear submarines. To prove that the technology worked, Clark pointed A6’s powers inward, spying on the National Security Agency and CIA, using their own cellphones against them.

Virginia-based Anomaly Six was founded in 2018 by two ex-military intelligence officers and maintains a public presence that is scant to the point of mysterious, its website disclosing nothing about what the firm actually does. But there’s a good chance that A6 knows an immense amount about you. The company is one of many that purchases vast reams of location data, tracking hundreds of millions of people around the world by exploiting a poorly understood fact: Countless common smartphone apps are constantly harvesting your location and relaying it to advertisers, typically without your knowledge or informed consent, relying on disclosures buried in the legalese of the sprawling terms of service that the companies involved count on you never reading. Once your location is beamed to an advertiser, there is currently no law in the United States prohibiting the further sale and resale of that information to firms like Anomaly Six, which are free to sell it to their private sector and governmental clientele. For anyone interested in tracking the daily lives of others, the digital advertising industry is taking care of the grunt work day in and day out — all a third party need do is buy access.

Company materials obtained by The Intercept and Tech Inquiry provide new details of just how powerful Anomaly Six’s globe-spanning surveillance powers are, capable of providing any paying customer with abilities previously reserved for spy bureaus and militaries.

According to audiovisual recordings of an A6 presentation reviewed by The Intercept and Tech Inquiry, the firm claims that it can track roughly 3 billion devices in real time, equivalent to a fifth of the world’s population. The staggering surveillance capacity was cited during a pitch to provide A6’s phone-tracking capabilities to Zignal Labs, a social media monitoring firm that leverages its access to Twitter’s rarely granted “firehose” data stream to sift through hundreds of millions of tweets per day without restriction. With their powers combined, A6 proposed, Zignal’s corporate and governmental clients could not only surveil global social media activity, but also determine who exactly sent certain tweets, where they sent them from, who they were with, where they’d been previously, and where they went next. This enormously augmented capability would be an obvious boon to both regimes keeping tabs on their global adversaries and companies keeping tabs on their employees.

The source of the materials, who spoke on the condition of anonymity to protect their livelihood, expressed grave concern about the legality of government contractors such as Anomaly Six and Zignal Labs “revealing social posts, usernames, and locations of Americans” to “Defense Department” users. The source also asserted that Zignal Labs had willfully deceived Twitter by withholding the broader military and corporate surveillance use cases of its firehose access. Twitter’s terms of service technically prohibit a third party from “conducting or providing surveillance or gathering intelligence” using its access to the platform, though the practice is common and enforcement of this ban is rare. Asked about these concerns, spokesperson Tom Korolsyshun told The Intercept “Zignal abides by privacy laws and guidelines set forth by our data partners.”

A6 claims that its GPS dragnet yields between 30 and 60 location pings per device per day and 2.5 trillion locational data points annually worldwide, adding up to 280 terabytes of location data per year and many petabytes in total, suggesting that the company surveils roughly 230 million devices on an average day. A6’s salesperson added that while many rival firms gather personal location data via a phone’s Bluetooth and Wi-Fi connections that provide general whereabouts, Anomaly 6 harvests only GPS pinpoints, potentially accurate to within several feet. In addition to location, A6 claimed that it has built a library of over 2 billion email addresses and other personal details that people share when signing up for smartphone apps that can be used to identify who the GPS ping belongs to. All of this is powered, A6’s Clark noted during the pitch, by general ignorance of the ubiquity and invasiveness of smartphone software development kits, known as SDKs: “Everything is agreed to and sent by the user even though they probably don’t read the 60 pages in the [end user license agreement].”
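Those claimed figures are at least roughly self-consistent, as a quick cross-check shows:

```python
# Cross-check A6's claims: 2.5 trillion GPS points per year at 30-60
# pings per device per day implies this many devices tracked daily.
points_per_year = 2.5e12
pings_low, pings_high = 30, 60
daily_points = points_per_year / 365
devices_max = daily_points / pings_low    # fewer pings => more devices
devices_min = daily_points / pings_high
bytes_per_point = 280e12 / points_per_year  # 280 TB/year of location data
print(f"~{devices_min/1e6:.0f}M-{devices_max/1e6:.0f}M devices/day, "
      f"~{bytes_per_point:.0f} bytes per data point")
```

The ~230-million-devices-per-day estimate sits at the 30-pings end of that range, and ~112 bytes per point is plausible for a timestamped GPS fix plus identifiers.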

The Intercept was not able to corroborate Anomaly Six’s claims about its data or capabilities, which were made in the context of a sales pitch. Privacy researcher Zach Edwards told The Intercept that he believed the claims were plausible but cautioned that firms can be prone to exaggerating the quality of their data. Mobile security researcher Will Strafach agreed, noting that A6’s data sourcing boasts “sound alarming but aren’t terribly far off from ambitious claims by others.” According to Wolfie Christl, a researcher specializing in the surveillance and privacy implications of the app data industry, even if Anomaly Six’s capabilities are exaggerated or based partly on inaccurate data, a company possessing even a fraction of these spy powers would be deeply concerning from a personal privacy standpoint.

Reached for comment, Zignal’s spokesperson provided the following statement: “While Anomaly 6 has in the past demonstrated its capabilities to Zignal Labs, Zignal Labs does not have a relationship with Anomaly 6. We have never integrated Anomaly 6’s capabilities into our platform, nor have we ever delivered Anomaly 6 to any of our customers.”

When asked about the company’s presentation and its surveillance capabilities, Anomaly Six co-founder Brendan Huff responded in an email that “Anomaly Six is a veteran-owned small business that cares about American interests, natural security, and understands the law.”

Companies like A6 are fueled by the ubiquity of SDKs, which are turnkey packages of code that software-makers can slip into their apps to easily add functionality and quickly monetize their offerings with ads. According to Clark, A6 can siphon exact GPS measurements gathered through covert partnerships with “thousands” of smartphone apps, an approach he described in his presentation as a “farm-to-table approach to data acquisition.” This data isn’t just useful for people hoping to sell you things: The largely unregulated global trade in personal data is increasingly finding customers not only at marketing agencies, but also at federal agencies tracking immigrants and drone targets, enforcing sanctions, and investigating tax evasion. According to public records first reported by Motherboard, U.S. Special Operations Command paid Anomaly Six $590,000 in September 2020 for a year of access to the firm’s “commercial telemetry feed.”

Anomaly Six software lets its customers browse all of this data in a convenient and intuitive Google Maps-style satellite view of Earth. Users need only find a location of interest and draw a box around it, and A6 fills that boundary with dots denoting smartphones that passed through that area. Clicking a dot will provide you with lines representing the device’s — and its owner’s — movements around a neighborhood, city, or indeed the entire world.
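The interaction described here (draw a box, see the dots) reduces to a simple bounding-box filter over a table of location pings. A minimal sketch in Python, purely illustrative of how geofencing works in general; all names are hypothetical and none of this reflects A6's actual code:

```python
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str   # an advertising identifier, not a name
    lat: float
    lon: float
    timestamp: int   # Unix seconds

def in_geofence(ping, south, west, north, east):
    """True if the ping falls inside the drawn bounding box."""
    return south <= ping.lat <= north and west <= ping.lon <= east

def devices_in_area(pings, south, west, north, east):
    """Distinct devices with at least one ping inside the box -- the 'dots' on the map."""
    return {p.device_id for p in pings if in_geofence(p, south, west, north, east)}
```

Cross-referencing two geofences, as in the NSA/CIA demonstration described later in the piece, is then just a set intersection of the two results. (A production system would also have to handle boxes that cross the antimeridian, which this sketch ignores.)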

As the Russian military continued its buildup along the country’s border with Ukraine, the A6 sales rep detailed how GPS surveillance could help turn Zignal into a sort of private spy agency capable of assisting state clientele in monitoring troop movements. Imagine, Clark explained, if the crisis zone tweets Zignal rapidly surfaces through the firehose were only a starting point. Using satellite imagery tweeted by accounts conducting increasingly popular “open-source intelligence,” or OSINT, investigations, Clark showed how A6’s GPS tracking would let Zignal clients determine not simply that the military buildup was taking place, but track the phones of Russian soldiers as they mobilized to determine exactly where they’d trained, where they were stationed, and which units they belonged to. In one case, Clark showed A6 software tracing Russian troop phones backward through time, away from the border and back to a military installation outside Yurga, and suggested that they could be traced further, all the way back to their individual homes. Previous reporting by the Wall Street Journal indicates that this phone-tracking method is already used to monitor Russian military maneuvers and that American troops are just as vulnerable.

In another A6 map demonstration, Clark zoomed in closely on the town of Molkino, in southern Russia, where the Wagner Group, an infamous Russian mercenary outfit, is reportedly headquartered. The map showed dozens of dots indicating devices at the Wagner base, along with scattered lines showing their recent movements. “So you can just start watching these devices,” Clark explained. “Any time they start leaving the area, I’m looking at potential Russian predeployment activity for their nonstandard actors, their nonuniform people. So if you see them go into Libya or Democratic Republic of the Congo or things like that, that can help you better understand potential soft power actions the Russians are doing.”

The pitch noted that this kind of mass phone surveillance could be used by Zignal to aid unspecified clients with “counter-messaging,” debunking Russian claims that such military buildups were mere training exercises and not the runup to an invasion. “When you’re looking at counter-messaging, where you guys have a huge part of the value you provide your client in the counter-messaging piece is — [Russia is] saying, ‘Oh, it’s just local, regional, um, exercises.’ Like, no. We can see from the data that they’re coming from all over Russia.”

To fully impress upon its audience the immense power of this software, Anomaly Six did what few in the world can claim to do: spied on American spies. “I like making fun of our own people,” Clark began. Pulling up a Google Maps-like satellite view, the sales rep showed the NSA’s headquarters in Fort Meade, Maryland, and the CIA’s headquarters in Langley, Virginia. With virtual boundary boxes drawn around both, a technique known as geofencing, A6’s software revealed an incredible intelligence bounty: 183 dots representing phones that had visited both agencies, potentially belonging to American intelligence personnel, with hundreds of lines streaking outward to reveal their movements, ready to be tracked throughout the world. “So, if I’m a foreign intel officer, that’s 183 start points for me now,” Clark noted.

The NSA and CIA both declined to comment.

Clicking on one of the dots from the NSA allowed Clark to follow that individual’s exact movements, virtually every moment of their life, from that previous year until the present. “I mean, just think of fun things like sourcing,” Clark said. “If I’m a foreign intel officer, I don’t have access to things like the agency or the fort, I can find where those people live, I can find where they travel, I can see when they leave the country.” The demonstration then tracked the individual around the United States and abroad to a training center and airfield roughly an hour’s drive northwest of Muwaffaq Salti Air Base in Zarqa, Jordan, where the U.S. reportedly maintains a fleet of drones.

“There is sure as hell a serious national security threat if a data broker can track a couple hundred intelligence officials to their homes and around the world,” Sen. Ron Wyden, D-Ore., a vocal critic of the personal data industry, told The Intercept in an interview. “It doesn’t take a lot of creativity to see how foreign spies can use this information for espionage, blackmail, all kinds of, as they used to say, dastardly deeds.”

Back stateside, the person was tracked to their own home. A6’s software includes a function called “Regularity,” a button clients can press that automatically analyzes frequently visited locations to deduce where a target lives and works, even though the GPS pinpoints sourced by A6 omit the phone owner’s name. Privacy researchers have long shown that even “anonymized” location data is trivially easy to attach to an individual based on where they frequent most, a fact borne out by A6’s own demonstration. After hitting the “Regularity” button, Clark zoomed in on a Google Street View image of their home.
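The "Regularity" deduction described here can be approximated with remarkably little code: bucket a device's pings by time of day and take its most frequent overnight location as the probable home. A hypothetical sketch of that re-identification technique (rounding coordinates to roughly 100-meter grid cells; this is an illustration of the general method, not A6's implementation):

```python
from collections import Counter
from datetime import datetime, timezone

def probable_home(pings, night_hours=range(0, 6)):
    """Guess a device's home: its most frequent location during overnight hours.

    pings: iterable of (unix_ts, lat, lon) tuples. Coordinates are rounded to
    ~100 m grid cells so repeat visits to the same place count together."""
    cells = Counter()
    for ts, lat, lon in pings:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        if hour in night_hours:
            cells[(round(lat, 3), round(lon, 3))] += 1
    return cells.most_common(1)[0][0] if cells else None
```

Nothing in the input ties the device to a name; the output does that on its own, which is exactly the point privacy researchers make about "anonymized" location data.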

“Industry has repeatedly claimed that collecting and selling this cellphone location data won’t violate privacy because it is tied to device ID numbers instead of people’s names. This feature proves just how facile those claims are,” said Nate Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project. “Of course, following a person’s movements 24 hours a day, day after day, will tell you where they live, where they work, who they spend time with, and who they are. The privacy violation is immense.”

The demo continued with a surveillance exercise tagging U.S. naval movements, using a tweeted satellite photo of the USS Dwight D. Eisenhower in the Mediterranean Sea snapped by the commercial firm Maxar Technologies. Clark broke down how a single satellite snapshot could be turned into surveillance that he claimed was even more powerful than that executed from space. Using the latitude and longitude coordinates appended to the Maxar photo along with its time stamp, A6 was able to pick up a single phone signal from the ship’s position at that moment, south of Crete. “But it only takes one,” Clark noted. “So when I look back where that one device goes: Oh, it goes back to Norfolk. And actually, on the carrier in the satellite picture — what else is on the carrier? When you look, here are all the other devices.” His screen revealed a view of the carrier docked in Virginia, teeming with thousands of colorful dots representing phone location pings gathered by A6. “Well, now I can see every time that that ship is deploying. I don’t need satellites right now. I can use this.”

Though Clark conceded that the company has far less data available on Chinese phone owners, the demo concluded with a GPS ping picked up aboard an alleged Chinese nuclear submarine. Using only unclassified satellite imagery and commercial advertising data, Anomaly Six was able to track the precise movements of the world’s most sophisticated military and intelligence forces. With tools like those sold by A6 and Zignal, even an OSINT hobbyist would have global surveillance powers previously held only by nations. “People put way too much on social media,” Clark added with a laugh.

As location data has proliferated largely unchecked by government oversight in the United States, one hand washes another: the trade has created a private sector wielding state-level surveillance powers, which in turn feeds the state’s own growing appetite for surveillance without the usual judicial scrutiny. Critics say the loose trade in advertising data constitutes a loophole in the Fourth Amendment, which requires the government to make its case to a judge before obtaining location coordinates from a cellular provider. But the total commodification of phone data has made it possible for the government to skip the court order and simply buy data that’s often even more accurate than what could be provided by the likes of Verizon. Civil libertarians say this leaves a dangerous gap between the protections intended by the Constitution and the law’s grasp on the modern data trade.

“The Supreme Court has made clear that cellphone location information is protected under the Fourth Amendment because of the detailed picture of a person’s life it can reveal,” explained Wessler. “Government agencies’ purchases of access to Americans’ sensitive location data raise serious questions about whether they are engaged in an illegal end run around the Fourth Amendment’s warrant requirement. It is time for Congress to end the legal uncertainty enabling this surveillance once and for all by moving toward passage of the Fourth Amendment Is Not For Sale Act.”

Though such legislation could restrict the government’s ability to piggyback off commercial surveillance, app-makers and data brokers would remain free to surveil phone owners. Still, Wyden, a co-sponsor of that bill, told The Intercept that he believes “this legislation sends a very strong message” to the “Wild West” of ad-based surveillance but that clamping down on the location data supply chain would be “certainly a question for the future.” Wyden suggested that protecting a device’s location trail from snooping apps and advertisers might be best handled by the Federal Trade Commission. Separate legislation previously introduced by Wyden would empower the FTC to crack down on promiscuous data sharing and broaden consumers’ ability to opt out of ad tracking.

A6 is far from the only firm engaged in privatized device-tracking surveillance. Three of Anomaly Six’s key employees previously worked at competing firm Babel Street, which named all three of them in a 2018 lawsuit first reported by the Wall Street Journal. According to the legal filing, Brendan Huff and Jeffrey Heinz co-founded Anomaly Six (and lesser-known Datalus 5) months after ending their employment at Babel Street in April 2018, with the intent of replicating Babel’s cellphone location surveillance product, “Locate X,” in a partnership with major Babel competitor Semantic AI. In July 2018, Clark followed Huff and Heinz by resigning from his position as Babel’s “primary interface to … intelligence community clients” and becoming an employee of both Anomaly Six and Semantic.

Like its rival Dataminr, Zignal touts its mundane partnerships with the likes of Levi’s and the Sacramento Kings, marketing itself publicly in vague terms that carry little indication that it uses Twitter for intelligence-gathering purposes, in apparent violation of Twitter’s anti-surveillance policy. Zignal’s ties to government run deep: Zignal’s advisory board includes a former head of the U.S. Army Special Operations Command, Charles Cleveland, as well as the CEO of the Rendon Group, John Rendon, whose bio notes that he “pioneered the use of strategic communications and real-time information management as an element of national power, serving as a consultant to the White House, U.S. National Security community, including the U.S. Department of Defense.” Further, public records state that Zignal was paid roughly $4 million to subcontract under defense staffing firm ECS Federal on Project Maven for “Publicly Available Information … Data Aggregation” and a related “Publicly Available Information enclave” in the U.S. Army’s Secure Unclassified Network.

The remarkable world-spanning capabilities of Anomaly Six are representative of the quantum leap occurring in the field of OSINT. While the term is often used to describe the internet-enabled detective work that draws on public records to, say, pinpoint the location of a war crime from a grainy video clip, “automated OSINT” systems now use software to combine enormous datasets that far outpace what a human could do on their own. Automated OSINT has also become something of a misnomer, using information that is by no means “open source” or in the public domain, like commercial GPS data that must be bought from a private broker.

While OSINT techniques are powerful, they are generally shielded from accusations of privacy violation because the “open source” nature of the underlying information means that it was already to some extent public. This is a defense that Anomaly Six, with its trove of billions of purchased data points, can’t muster. In February, the Dutch Review Committee on the Intelligence and Security Services issued a report on automated OSINT techniques and the threat to personal privacy they may represent: “The volume, nature and range of personal data in these automated OSINT tools may lead to a more serious violation of fundamental rights, in particular the right to privacy, than consulting data from publicly accessible online information sources, such as publicly accessible social media data or data retrieved using a generic search engine.” This fusion of publicly available data, privately procured personal records, and computerized analysis isn’t the future of governmental surveillance, but the present. Last year, the New York Times reported that the Defense Intelligence Agency “buys commercially available databases containing location data from smartphone apps and searches it for Americans’ past movements without a warrant,” a surveillance method now regularly practiced throughout the Pentagon, the Department of Homeland Security, the IRS, and beyond.

As Shares Plunge, Netflix Takes Aim at Password Sharing, Ads
Michael Liedtke and Mae Anderson

An unexpected drop in subscribers sent Netflix shares into freefall Wednesday, forcing the company to consider experimenting with ads and -- hold onto your remote -- cracking down on millions of freeloaders who use passwords shared by friends or family.

The surprising net loss of 200,000 subscribers rattled investors, who had been told by the company to expect a gain of 2.5 million subscribers. Netflix shares sank 35% on the news, falling to their lowest level since early 2018.

Netflix estimates that about 100 million households worldwide — or roughly one out of every three households using its service — are streaming for free. “We’ve just got to get paid at some degree for them,” co-CEO Reed Hastings said during a shareholder call Tuesday.

Netflix has already been experimenting in Latin America with programs that use a soft touch to convince the unsubscribed to sign up. In Costa Rica, for instance, Netflix plan prices range from $9 to $15 a month, but subscribers can create sub-accounts for two other individuals outside their household for $3 a month. On Tuesday, Hastings suggested that the company may adopt something similar in other markets.

Just how Netflix will erect barriers remains unclear, and Hastings indicated that the company probably will spend the next year assessing different approaches. In one test last year, Netflix prompted viewers to verify their accounts via email or text.

Some current subscribers say even a gentle nudge to reduce password sharing might push them to sign off.

Alexander Klein, who lives near Albany, N.Y., has subscribed to Netflix since 2013 and shares his account with his mother-in-law. While he likes the service, a string of price increases and the loss of licensed shows has annoyed him — and any password-sharing crackdown might be the last straw.

“If they start cracking down on password sharing and I’m stuck paying the full $15 (a month) just for one person watching at a time, that’s frustrating,” he said. “If they decided to do that I’d likely cancel.”

Netflix is bracing for more subscriber losses even before it attempts to weed out freeloaders. The company predicted its customer base will shrink by another 2 million subscribers by the end of June. That would still leave Netflix with 220 million worldwide subscribers, more than any other video streaming service.

Despite some fears that a Netflix crackdown on password-sharing could encourage other streaming services to follow suit, experts say that’s not likely.

“I think we would see competitors take different strategies here,” said Raj Venkatesan, a professor of business administration at the University of Virginia. “Some will follow the lead of Netflix and crack down on password sharing. Others will use this as a differentiator and promise simplicity by saying you can have one password for the family.”

For years, amid rapid global growth, Netflix has looked the other way at the not-so-secret practice of subscribers sharing passwords beyond their households. And Hastings has spoken passionately in the past about keeping Netflix ad-free.

But competitive pressure is on the rise. Deep-pocketed rivals such as Apple, Walt Disney and HBO have begun to chip away at Netflix’s dominance with their own streaming services. The easing of the pandemic is giving consumers entertainment options beyond binge-watching their favorite shows, and rising inflation is making families think twice about how many different streaming services they’re willing to pay for.

All of this has given investors major jitters for months. The Wednesday selloff came on top of earlier trouble for the stock, which has lost 62 percent of its market value since the end of 2021, erasing $167 billion in shareholder wealth.

Netflix has no choice but to try new ways to boost its profits to appease shareholders, said J. Christopher Hamilton, a Syracuse University professor who studies streaming services.

“It feels like this is Netflix’s ‘come-to-Jesus’ moment,” said Hamilton, a former lawyer for movie studios. “They were able to be headstrong and play the role as a disruptor for a long time. But now the honeymoon is over and they have to face the reality of business.”

Hamilton believes offering a lower priced version of Netflix’s service that includes ads will be warmly received by consumers looking to save money, as long as subscribers willing to pay more can still binge watch without commercial interruption.

Ad revenue in streaming services during the next five years is likely to grow more rapidly than subscription revenue, according to a recent study by the consulting group Accenture. By 2027, Accenture expects advertising sales in video services to total $21 billion annually, up from just $1 billion in 2017.

Netflix is counting on bringing some advertising into the mix to help bolster its profits, which totaled $1.6 billion during the January-March period, a 6% decline from the same time last year.

The crackdown on password sharing could be more problematic, though.

“I think we may be at the point of no return for password sharing,” said Ben Treanor, a digital marketing strategist for Time2Play, a gaming site that recently studied the “streaming swindlers” phenomenon. “I think there’s a chance if you throw someone off their family’s account, they may not pick up their own account.”

Netflix has survived customer backlash before. Back in 2011, it unveiled plans to begin charging for its then-nascent streaming service, which had been bundled for free with its traditional DVD-by-mail service. In the months after that change, Netflix lost 800,000 subscribers, prompting an apology from Hastings for botching the execution of the spin-off. But the company bounced back.

Ads, meanwhile, have never been a favorite of Hastings, who has long viewed them as a distraction from the entertainment Netflix provides.

Ravin Ramjit, a 41-year-old living in London, will have none of them.

“I specifically signed up for Netflix back in the day because there were no ads,” he said. “Ads are too intrusive and they break your concentration and the continuity of the shows. You might be in a nice, intense scene -- you’re really into it -- and all of a sudden they cut to commercial.”

Stalwarts like David Lewis in Norwalk, Connecticut, say the changes don’t seem like a big deal. Lewis shares a premium plan with his three adult children and some of their friends and says they will keep it, even if they have to cut off the friends and each pay for their own accounts.

“We would keep Netflix and pay for the four in our family, even if it was more,” he said. “We love the service and what it offers.”

Netflix began heading in a new direction last year when its service added video games at no additional charge in an attempt to give people another reason to subscribe.


Anderson reported from New York. AP technology writer Matt O’Brien in Providence, R.I., also contributed to this report.

Shameful: Insteon Looks Dead—Just Like its Users’ Smart Homes

The app and servers are dead. The CEO scrubbed his LinkedIn page. No one is responding.
Ron Amadeo

The smart home company Insteon has vanished.

The entire company seems to have abruptly shut down just before the weekend, breaking users' cloud-dependent smart-home setups without warning. Users say the service has been down for three days now despite the company status page saying, "All Services Online." The company forums are down, and no one is replying to users on social media.

As Internet of Things reporter Stacey Higginbotham points out, high-ranking Insteon executives, including CEO Rob Lilleness, have scrubbed the company from their LinkedIn accounts. In the time it took to write this article, Lilleness also removed his name and picture from his LinkedIn profile. It seems like that is the most communication longtime Insteon customers are going to get.

Insteon is (or, more likely, "was") a smart home company that produced a variety of Internet-connected lights, thermostats, plugs, sensors, and of course, the Insteon Hub. At the core of the company was Insteon's proprietary networking protocol, which was a competitor to more popular and licensable alternatives like Z-Wave and Zigbee. Insteon's "unique and patented dual-mesh technology" used both a 900 MHz wireless protocol and powerline networking, which the company said created a more reliable network than wireless alone. The Insteon Hub would bridge all your gear to the Internet and enable use of the Insteon app.

Insteon technically has a parent company, Smartlabs Inc., though Smartlabs and Insteon seem to share the same executives. Smartlabs Inc. owns the website smarthome.com, which primarily sells Insteon equipment, and it actually licenses the Nokia name for "Nokia Smart Lighting," which just seems to be rebranded Insteon equipment.

In 2017, Smartlabs Inc. was acquired by Richmond Capital Partners, a private investment firm founded by Rob Lilleness, and Lilleness was installed as CEO. Insteon scrubbed the blog post about this acquisition from its website, but archive.org still has the announcement. Insteon's biggest tech-news splash was being one of two launch partners for Apple's HomeKit in 2015.

With its servers down, the Insteon app appears worthless, and users' automations and schedules have stopped working. Many of Insteon's wall switches were actual electrical switches, so the worst that will ever happen is that they become dumb switches. Even without the Insteon servers and app, Insteon's protocol has been reverse-engineered for a while now, so it's possible to control the devices locally without the app. It's also possible to pipe that local control into another platform's hub controller, returning the smarts and remote access to your smart home.

Home Assistant is probably the most popular upgrade path—it's an open source home server that you control, so nothing like this can happen to you ever again. OpenHab is another open source option with Insteon support, and a Homebridge plugin can get Insteon working with Apple's HomeKit.

If you are a spurned Insteon user seeking to move your hardware to some other system, whatever you do, don't factory-reset your Insteon Hub. Apparently, contacting Insteon's servers is a key step of the initial setup, so this may now fail. Home Assistant has already updated its Insteon page with a warning for users. It reads:

The Insteon company has shut down and turned off their cloud as of April 2022. Do not factory reset your device under any circumstances as it will not be recoverable.

Update: Recovery method
Home Assistant has updated its documentation again and now says it has a working setup process, even if you've factory reset your device.

Users on the /r/insteon subreddit are processing their collective grief right now, and with the official forums dead, that's probably the biggest community out there for help. Everyone is in the same boat and investigating lots of porting options.

A company shutdown, especially a smart home company shutdown, is never easy. Many customers invested hundreds of dollars into this formerly multimillion-dollar ecosystem; suddenly closing up shop like a bunch of fly-by-night grifters would be an unacceptable way to treat paying customers. Hopefully, more communication will be forthcoming.

The company could have given everyone a month's notice that it was going out of business. It could have open sourced code or posted documentation to help users get running on some other system. It could have given forum members a chance to get organized on some other site.

But that didn't happen. Instead, Insteon committed the cardinal sin of smart home companies: leaving customers—and their gear—in the lurch.

Book Banning Efforts are Inspiring Readers to Form Banned Book Clubs
Harmeet Kaur

Members of the Teen Banned Book Club gather at Firefly Bookstore in Kutztown, Pennsylvania.

When Joslyn Diffenbaugh learned about efforts in Texas to remove certain books from school libraries and classrooms, she was surprised by the titles that were being challenged.

An avid reader, the 8th grader from Kutztown, Pennsylvania, said she had read several of the books in question. Among the titles that had come under attack in recent years were "The Hate U Give," a novel about a young Black girl who grapples with racism and police brutality, and "All American Boys," a novel about two teenagers -- one Black and one White -- who contend with similar issues.

Those books had been eye-opening for Diffenbaugh, exposing her to realities that she might not otherwise have encountered. That some parents and politicians were trying to limit other young people's understanding of such issues as racism was concerning to her.

Some school librarians fed up with book bans are organizing and fighting back

"The reason these books are being banned are the reasons why they should probably be read," the 14-year-old said she was thinking at the time.

The recent wave of book challenges inspired Diffenbaugh to join forces with the local Firefly Bookstore and start the Banned Book Club. Since January, she and other young people in her area have been meeting every other week to discuss classic and contemporary titles that have been contested.

The community is one of several banned book clubs that have formed in response to a growing push from the right to control what titles young people have access to. And it points to an ironic effect: The more certain books are singled out, the more people want to read them.

One club hopes readers find themselves in banned books

Book banning -- or at least, book banning attempts -- appears to be having a resurgence.

The American Library Association recorded 729 challenges to library, school and university materials and services in 2021, the most since the organization began tracking those attempts in 2000. While that might seem low overall considering the approximately 99,000 K-12 public schools in the US, the ALA says it's likely an extreme undercount.

In recent months, conservative local and state officials have taken aim both at specific titles and broad categories of books that deal with race, gender or sexuality. And while attempts to remove those books from library shelves or classrooms haven't all been successful, the efforts themselves have garnered interest in banned books from readers across the country.

Book bans move to center stage in the red-state education wars

That was the impetus for the Banned Books Book Club, a project from the company Reclamation Ventures, which also runs the newsletter Anti-Racism Daily. Nicole Cardoza, the company's founder and CEO, said that young readers of the newsletter had increasingly been asking for resources on how they might engage with books being targeted for removal.

"This conservative pushback is actually generating a lot of interest in books that might not be something the average student is being exposed to otherwise," she said. "[We want to] help connect more people to the stories that matter most -- that reflect marginalized experiences that they might not hear otherwise."

Many of the books that have been challenged recently center Black or LGBTQ characters, and Cardoza said she hopes that members of the Banned Books Book Club might find parts of themselves reflected in the books that are chosen. The club, which launched in early April and plans to meet virtually once a month, is reading "The Hate U Give" as its first pick.

"The book has been around for a while and it reflects a teenage experience and relationship to police brutality, which has been such a strong conversation of the past couple of years," Cardoza said. "We thought it was a really great way to center the intention around the book club."

Beyond that, the team has a list of 20 or so books that it hopes to cover over the next two years, including "Gender Queer" by Maia Kobabe and "Cinderella is Dead" by Kalynn Bayron for their explorations of queer and non-binary experiences. They want such books to be available to anyone, which is why the project also includes a banned books library through which readers can access discussion guides and request free copies of titles.

Other clubs have been talking about censorship

For some banned book clubs, recent book banning attempts have been a springboard for wider discussions around censorship.

The Banned Book Club at Firefly Bookstore read George Orwell's "Animal Farm" as its first pick. While the satirical novella, which makes a pointed critique of totalitarianism, isn't one of the books currently being challenged in the US, it was banned in the Soviet Union until its fall and was rejected for publication in the UK during its wartime alliance with the USSR. And it faced challenges in Florida in the '80s for being "pro-communist." That history made for some thought-provoking conversations.

"It taught a lot because it had references to different forms of government that maybe some adults didn't like their kids reading about, even though it was run by pigs," Diffenbaugh said. "I really thought it shouldn't have been banned for those reasons, or at all."

Teenagers at the Common Ground Teen Center in Washington, Pennsylvania, formed a banned book club soon after a Tennessee school district voted to remove "Maus" from an eighth grade curriculum. But while the graphic novel about the Holocaust was the catalyst for the club, says director Mary Jo Podgurski, the first title they chose to read was, fittingly, "Fahrenheit 451" -- the 1953 dystopian novel about government censorship that itself has been challenged over the years.

"Obviously this whole idea of taking away books that they wanted to read or that they thought they should read sparked a nerve in them," said Podgurski, an educator and counselor who oversees the Common Ground Teen Center.

The young people at the center take turns choosing a book and facilitating the discussion, while Podgurski helps guide the conversations. They talk about the message of the book, and why some might have found it objectionable. Since reading "Fahrenheit 451," the club has also discussed "Animal Farm" and "1984," which has been challenged for its political themes and sexual content. So far, the young readers at the Common Ground Teen Center have been puzzled as to why those books were once deemed inappropriate.

"I often wonder, do adults understand what kids have in their phones?" Podgurski said. "They have access to everything. Saying 'don't read this book' shows that you're not understanding teen culture. Young people have access to much information. What they need is an adult to help them process it."

They see value in reading banned books

The Banned Book Club at King's Books in Tacoma, Washington, has long understood the value in reading banned books. Though recent headlines have attracted new interest in the club, the group has been meeting monthly for more than a decade.

David Rafferty, who has been coordinating the club since 2014, said he first joined because he was looking for a space to engage with deeper subjects that might not come up in casual conversation. While many book challenges today take aim at young adult novels that depict the harsh realities of racism or that grapple with gender identity, the Banned Book Club at King's Books has discussed titles that faced pushback for all kinds of reasons.

One of the first books that Rafferty read through the club was Mark Twain's "Adventures of Huckleberry Finn," which has been challenged for decades over concerns that it contributes to racial stereotypes. The meaningful conversations that came out of that meeting turned him into a regular member.

Books about LGBTQ and Black people were among the most challenged books in 2021

"It does use a racial slur -- the N-word -- fairly often and casually," Rafferty said. "We've gotten into some interesting discussions about whether or not that was more used at the time and whether [Twain] is trying to reflect the time, whether or not the book itself was racist."

More recently, the club has read "The Color Purple," which has been banned for its depictions of homosexuality and sexual assault, as well as "The Call of The Wild," which has been challenged for its depictions of animal cruelty and violence. But as Rafferty sees it, it's better to read and discuss than to avoid tough subjects altogether.

"People want to shield their children from certain topics like sexual assault, sexual explicitness, profanity, racism, LGBTQ [issues]," he said. "My argument is that children and teenagers are going to be dealing with [it] in some form or another, and the books give them a chance to experience it or learn about it before they actually have to deal with it directly. So when they do have to deal with it, they can deal with it better."

The teenagers in banned book clubs agree. Lizzy Brison, a member of the club at the Common Ground Teen Center, said she understands why some books might merit extra care and caution when it comes to younger readers. But she feels removing them from shelves is a step too far.

"They're protecting what they think is innocence but in reality, they're just limiting children to what they can access with their own identity," Brison, who is in 10th grade, said. "It's gonna be uncomfortable to help a child through that process. But it's going to be worth it in the end, because your child will end up knowing who they are and where they belong in the world."

Diffenbaugh, too, has a desire to better understand the world around her. So she plans to keep reading.

"You're going to come across people who are of a different race. You're going to come across people who may have a different gender identity. It's a way that you can understand them more as people," she said. "All these books that are being banned are about present issues. If we can read them now, we have that knowledge for the future."

Honda Orders Big Takedown of Honda-Related 3D Printing Models From Maker Communities

Printables removed all models with “Honda” in the listing uploaded prior to March 30, 2022.
Rob Stumpf

3D printing has been one of the greatest introductions to the automotive DIY scene in years. I've written about it before, and it's amazing to watch problems get solved with unique solutions from imaginative minds across the globe. What's more, it's great to see car companies embrace the maker community, even supplying the plans to help build and share custom parts.

Unfortunately, not all automakers share that same sentiment.

Recently, I noticed a part that I made for my Honda Accord had been removed from Printables, the newly rebranded 3D printing repository offered by Prusa. There seemed to be no rhyme or reason for it, and I didn't think anything more of it...until reports of a mass deletion started popping up on Reddit.

All models referencing the word "Honda" posted prior to March 30, 2022, were seemingly removed from Printables without warning. These included speaker brackets, key housings, hood latches, shifter bushings, washer fluid caps, roof latch handles, and my trunk lid handle—a part not offered on 10th generation Accords sold in the U.S. at all. In fact, many of the removed parts had no Honda branding but were just compatible with Honda vehicles. As it turns out, Prusa says it was issued a takedown notice from Honda and removed all 3D models that referenced the brand.

"I can confirm to you that we have received a letter from a lawyer representing Honda, informing us that we were required to remove any model which used 'Honda' in the listing, the model itself, or one of several trademarks/logos also associated with Honda," a Prusa spokesperson told The Drive in an email. "This will also be related to the naming of the files it self (sic), as for Honda this would be considered as a violation of their trademark/patents."

A Prusa employee responded to a post on the company's forums, noting that Honda sent a "huge legal document" that covered every model that the company wished to have deleted. The document reportedly included items that did not have Honda logos, but also specific items with certain shapes and dimensions—like a washer fluid reservoir cap, for example.

Another employee posted a response suggesting that other sites hosting 3D models were sent a similar takedown notice. These files still remain up on Thingiverse, Thangs, and other repositories at the time of publishing, however.

Honda Motors North America informed me the order was issued by Honda Motors Europe, though the latter did not respond when repeatedly asked for comment on the matter. Prusa also declined to provide The Drive with a copy of the document to review.

While it's true that Honda must protect its trademarks and intellectual property, this particular scenario crosses into a legal conundrum. If a part doesn't feature a Honda logo on it and simply states that it's compatible with a particular vehicle, does that give Honda the right to enforce a potential trademark or copyright violation? Sort of.

A lot of it comes down to wording. Some files that were taken down were named something along the lines of "Honda Civic Cup Holder," whereas others were worded more like "Cup Holder for Honda Civic." The order of that wording matters and could be the reason Honda responded in the way it did, and it's why Prusa obliged by taking down all items that referenced the Honda brand.

"From a trademark law perspective, there definitely can be a difference," said Maya Eckstein, a partner at Hunton Andrews Kurth who specializes in the law surrounding additive manufacturing, in an email to The Drive. "Every situation is fact-specific, but, generally speaking, the former suggests that the cup holder was manufactured by Honda or is endorsed by Honda (i.e. that Honda is the “source” of the item being sold), risking a likelihood of confusion and, thus, a trademark violation. The latter doesn’t necessarily do that; it falls into the category of nominative fair use, where you are merely describing the thing being sold but not implying sponsorship or endorsement by the brand."

Eckstein does have a word of advice for makers who want to continue producing these types of files and distributing them to the community: "When referring to a brand in this way, it’s important to refer to it simply; using the brand’s distinctive logos, lettering, coloring, etc., could lead to a likelihood of confusion about whether the brand is the source of the product."

"We are working with other companies to make them realize this should be embraced and not hunted down," said Josef Prusa, CEO of Prusa Research, in a Reddit thread.

At the end of the day, Honda's decision to protect its property is warranted; however, Prusa's statement that the automaker issued a takedown of "any model which used 'Honda' in the listing" feels overly broad and perhaps an overreach of fair use. It also feels like a setback for the maker community as a whole.

Android Users May Soon Get this New File Sharing Feature

Google will soon allow Android and Chrome OS users to quickly share files via Nearby Share. According to recent tweets by Esper’s Mishaal Rahman, Nearby Share will soon allow you to share files between your devices quickly without any approval. Currently, when you try to share a file via Nearby Share, the receiver has to approve your share request in order to get the file. The purpose of this additional step is to keep devices safe from potentially malicious files that may be pushed by fraudsters.

Rahman suggests that the ‘self-share’ mode will allow you to share files to your other devices without any authentication, provided both devices are signed in to the same Google account. “Nearby Share's "self-share" mode will let you quickly share files to other devices signed into the same Google account without needing to approve the share. This hasn't rolled out yet from what I can see, but it's present in the latest version of Google Play Services,” Rahman’s tweet reads.

References to this feature were first spotted a couple of months ago. It is worth noting that the mode is not actually called ‘self-share’; that is simply how it is referred to in the app’s resources. Google has not yet revealed anything about the feature, but the images shared by Rahman suggest that we may be able to use it in the near future.

For those who are unaware, Google rolled out a Nearby Share tool for Android devices a couple of years ago. The feature allows you to share files, links, pictures and more to other Android users around you. It is quite similar to the AirDrop feature that is found in the Apple ecosystem. Nearby Share automatically chooses the best protocol for fast sharing using Bluetooth, Bluetooth Low Energy, WebRTC or peer-to-peer WiFi. This means that you can use the feature without any internet connectivity.

Windows 11 Update will Wave Goodbye to Insecure File-Sharing

SMB1 protocol will soon be disabled by default on all versions of Windows 11
Anthony Spadafora

Sharing files on Windows 11 will soon be even more secure as Microsoft has announced its plans to finally disable the SMB1 protocol in all editions of its operating system.

For those unfamiliar, the Server Message Block (SMB) protocol was originally developed by IBM back in the 1980s to make it easier to share access to files, printers and other resources on a network. SMB1 meanwhile is a dialect of the protocol that was also created by IBM for file sharing in DOS.

In a new blog post, Ned Pyle, principal program manager in the Windows Server engineering group, explained that Windows Insiders on the Dev Channel will be the first to see SMB1 disabled by default in all Windows 11 editions.

This makes a great deal of sense, as Microsoft has shipped both Windows 10 and Windows Server without SMB1 installed since the release of the Fall Creators Update back in 2017. Now, though, the change will extend to all versions of Windows 11, which will no longer have the insecure file-sharing protocol enabled.

Still available as an unsupported install package

Although SMB1 is an insecure protocol, it’s still used today to connect to older NAS devices on Windows PCs.

While the protocol will no longer be enabled by default in Windows 11 going forward, the change won’t affect in-place upgrades of machines where end users are already using SMB1. Microsoft also plans to remove the SMB1 binaries in a future release.

As for businesses that still need to use SMB1 to connect to older devices such as factory machinery and medical gear, the software giant will provide an out-of-band unsupported install package.

In his post, Pyle warned that Microsoft’s plans regarding SMB1 could create pain points for consumers who are still running older hardware and will likely be confused as to why their new business laptop running Windows 11 can’t connect to their aging networked hard drive.

Senators Want to Mandate Anti-Piracy Technology Across The Web

Websites could face mandatory anti-piracy technology upgrades every three years.
Timothy B. Lee

Two senators have introduced legislation that would give the US Copyright Office power to mandate the adoption of anti-piracy technology across the Internet. Websites that failed to comply would face damages as high as $150,000 on the first offense. The bill, known as the SMART Copyright Act, is co-sponsored by Sen. Thom Tillis (R-N.C.) and Vermont Sen. Patrick Leahy, one of the Senate's most senior Democrats.

"In the fight to combat copyright theft, there is currently no consensus-based standard technical measures and that needs to be addressed," Tillis said in a press release last month.

But opponents dispute that. A letter signed by a coalition of public interest and tech industry lobbying groups argues that "this proposal would also put an agency with no engineering or other relevant expertise in charge of how digital products are designed." Moreover, they said the legislation "risks corruption and capture from specific businesses and vendors pitching their own products."

It's not clear when—or even if—this legislation will come up for a vote. Traditionally, a bill like this would be considered by a Senate committee before making its way to the Senate floor. But as Congress has become more dysfunctional, it has become increasingly common for bills like this to get attached at the last minute to gargantuan "must-pass" spending bills.

For example, in December 2020, Tillis introduced legislation to make it a felony to run a pirate streaming site. Just two weeks later, the proposal was attached to the massive 5,600-page, $900 billion COVID spending bill. As a result, Tillis' bill became law before most lawmakers—to say nothing of the general public—had time to read it.

We don't know if something similar will happen with the SMART Copyright Act. But we thought it would be worth digging into the legislation now, just in case.

A new approach to filtering

Congress last did a comprehensive overhaul of copyright law with the 1998 Digital Millennium Copyright Act. That law included the notice-and-takedown system that's familiar to many Internet users. Under this system, online service providers are shielded from liability for copyright infringement if they promptly take down potentially infringing material when notified to do so by copyright holders.

This "safe harbor" rule included many caveats, including a requirement that a service provider "accommodates and does not interfere" with "standard technical measures." Lawmakers envisioned copyright holders and online service providers working together to develop an industry standard for watermarking copyrighted content. Then they hoped service providers could automatically flag and take down watermarked content if the owner didn't authorize it.

But almost a quarter-century later, that hasn't happened. The courts haven't identified any "standard technical measures" that online service providers must accommodate. Instead, most major platforms have developed proprietary filtering technologies tailored to their needs. YouTube, for example, has a system called ContentID that uses fingerprinting technology to automatically detect infringing video and audio content. YouTube said in 2018 that it had spent $100 million to create this system.
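Production systems like ContentID use perceptual fingerprints that survive re-encoding and editing, but the core idea — derive compact signatures from chunks of content and compare them against an index of known works — can be sketched in a few lines of Python. This is a toy illustration of the general technique, not YouTube's actual algorithm; the names and the 50% overlap threshold are invented for the example:

```python
import hashlib

def fingerprint(data: bytes, chunk_size: int = 16) -> set[str]:
    """Derive a set of short chunk hashes acting as a crude content signature."""
    return {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()[:12]
        for i in range(0, len(data) - chunk_size + 1, chunk_size)
    }

def likely_match(candidate: bytes, known: set[str], threshold: float = 0.5) -> bool:
    """Flag content whose chunk fingerprints overlap a known work's index."""
    prints = fingerprint(candidate)
    if not prints:
        return False
    return len(prints & known) / len(prints) >= threshold

# Index a "copyrighted" work, then test an exact copy and unrelated data.
work = b"all work and no play makes jack a dull boy " * 10
index = fingerprint(work)
print(likely_match(work, index))                            # prints True
print(likely_match(b"completely different content here", index))  # prints False
```

Note that exact chunk hashing is brittle — shifting the content by a single byte changes every hash — which is precisely why real fingerprinting systems invest in robust perceptual hashes, and why building one cost YouTube a reported $100 million rather than an afternoon.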

But many smaller websites don't use any particular anti-piracy technology. And some rightsholders argue that this is a problem. So the new law would give the Library of Congress—and its subsidiary, the US Copyright Office—the power to unilaterally pick anti-piracy "technical measures" that online platforms must adopt.

Specifically, the law would establish a new three-year cycle to adopt new anti-piracy technologies. Every three years, the public could submit petitions proposing new mandates for anti-piracy technology. The Copyright Office would seek public comment on each proposal and then decide which ones would become legally mandatory. Online platforms would then have at least a year to implement the new measures. Then a year or two later, the whole process would start again.

If you're a copyright nerd, this "triennial" rule-making process might sound familiar. It's the same process used by another section of the DMCA—the one that criminalizes the distribution of "circumvention devices" for digital rights management schemes. That portion of the law gave the Copyright Office power to grant case-by-case exceptions. Some copyright reformers have criticized that process, arguing that it's too haphazard and that the Copyright Office is too biased toward copyright holders. But Leahy and Tillis have taken it as a model in their new legislation.

Mandatory anti-piracy technology

The practical upshot is that anyone who runs a website hosting user-submitted content would be placed on a three-year cycle of mandatory technology upgrades. Companies that sell anti-piracy products could petition the Copyright Office to be added to the list. If the companies succeeded, any platform with a US presence would have as little as a year to adopt the technology.

The law tries to provide some safeguards against abuse of the process. A technology could only be mandated if it was available on a non-discriminatory basis and for "reasonable" royalties. The Copyright Office is only supposed to mandate anti-piracy technologies if they "do not impose substantial and disproportionate costs on service providers or substantial and disproportionate burdens on their systems or networks." If the Copyright Office does its job well, this might not turn into a nightmare for companies running online platforms.

Of course, the Copyright Office might underestimate how much of a burden a mandatory filtering technology might impose on a platform or its users. There's also a risk that mandating the adoption of new anti-piracy technologies could introduce unexpected security vulnerabilities. The US Copyright Office's main job is to keep track of copyright registrations; it doesn't exactly have a deep bench of technology expertise.

To try to remedy that lack of expertise, the bill creates a new position, the chief technology adviser, to advise the Copyright Office on this kind of issue. It also directs the agency to seek advice from the National Institute of Standards and Technology and from "any relevant cybersecurity agency."

But at a minimum, the legislation would increase the overhead required to run any website or app that hosts user-submitted content. If you wanted to start a new online platform, one of the first steps would be to hire a lawyer to review the latest anti-piracy technology mandates and figure out which ones apply to your new service.

If you get this wrong, you could be exposed to litigation from copyright holders. The first lawsuit could lead to damages as high as $150,000. Repeat offenders could face statutory damages as high as $800,000.

These technology mandates could also create headaches for the users of online platforms. Over the years, we've written many stories about YouTube's ContentID system going haywire and flagging legitimate content as infringing copyright. This kind of snafu could become much more common if platforms across the Internet are forced to adopt anti-piracy technologies they don't own and might not be able to modify.

DuckDuckGo Insists it Didn’t ‘Purge’ Piracy Sites from Search Results

Blank site search results for The Pirate Bay gave users reason to believe otherwise
Umar Shakir

Users of privacy-focused search engine DuckDuckGo have been unable to site search the domains of some well-known pirated media sites recently, as reported by TorrentFreak on Friday. DuckDuckGo CEO Gabriel Weinberg called it “completely made up,” tweeting over the weekend that this is the result of a site operator error. Weinberg insisted the company is not purging any results. “Anyone can verify this by searching for an outlet and see it come up in results,” Weinberg tweeted.

To observers, it seemed as if DuckDuckGo had de-indexed searches for copyright-flouting media download sites like The Pirate Bay and Fmovies, and even a site search for the open-source tool youtube-dl came up empty. TorrentFreak later updated its report citing a company spokesperson blaming the issue on Bing search data, which DuckDuckGo relies upon.

We reached out to DuckDuckGo about the issue and received this response from senior communications manager Allison Goodman:

After looking into this, our records indicate that YouTube-dl and The Pirate Bay were never removed from our search results when you searched for them directly by name or URL, which the vast majority of people do (it’s rare for people to use site operators or query operators in general).

The Verge was told the new behavior is not targeted to piracy-linked sites. “We are having issues with our site: operator, and not just for these sites,” wrote Goodman. “Some of the other sites routinely change domain names and have spotty availability, and so naturally come in and out of the index but should be available as of now.” The Verge was able to observe these changes: a Friday search for “site:thepiratebay.org spider-man” gave zero results (including the absence of the main site), but today, the search does at least yield the thepiratebay.org website — but not anything within.

Like Goodman, Weinberg claimed site operators are sparingly used and downplayed it as an issue:

Similarly, we are not “purging” YouTube-dl or The Pirate Bay and they both have actually been continuously available in our results if you search for them by name (which most people do). Our site: operator (which hardly anyone uses) is having issues which we are looking into.

No matter what’s causing the change, this is the second recent dust-up for DuckDuckGo over its search results. In March, the company responded to Russia’s invasion of Ukraine by saying it would down-rank sites spreading Russian misinformation. Right-wing figures who had promoted it as an alternative to Google claimed it had abandoned principles of free speech for censorship, while DDG spokesperson Kamyl Bazbaz told Recode, “This isn’t censorship. It’s just search rankings.”

Until next week,

- js.

Current Week In Review

Recent WiRs -

April 16th, April 9th, April 2nd, March 26th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.

"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black