P2P-Zone  

23-09-20, 06:36 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - September 26th, ’20

Since 2002

September 26th, 2020




Sony, Others Can't Take Pirating Claim To 11th Circ.

U.S. District Judge Mary S. Scriven said she would not allow the music labels, which include Sony Music Entertainment and UMG Recordings Inc., to appeal a July 8 decision that dismissed their claim that Bright House Networks (BHN) is vicariously liable for the copyright infringement committed by customers who download pirated music.

The labels had argued that an interlocutory appeal now would potentially reduce the amount of litigation necessary on remand and eliminate inefficiencies, but the judge did not agree.

"The resolution of plaintiffs' proposed interlocutory appeal will not advance the ultimate termination of this litigation," the judge said. "It will only serve potentially to drive two appeals to the circuit court."

The labels claim BHN failed to address "rampant, repeated infringements" of their copyrighted works on its network despite receiving more than 100,000 notices between 2012 and 2016 detailing specific acts of infringement by subscribers.

In their lawsuit, which alleges contributory copyright infringement and vicarious liability against BHN, the music labels say BHN looked the other way in order to continue to reap millions in profits from subscribers.

In January, BHN asked the court to dismiss only the vicarious liability claim, arguing the music labels had failed to plausibly allege the telecommunications provider received a direct financial benefit from the alleged infringement. The company argued the music labels were required to plead that customers were drawn to BHN's internet service because of the ability to engage in infringing activities as opposed to just wanting to access the internet efficiently.

The court agreed and on July 8 it dismissed the vicarious liability claim after holding the music labels would have to allege the availability of infringing content was the main customer draw to the BHN service in order to proceed on that claim.

But the labels said that other courts, including the Ninth Circuit, have reached the opposite conclusion and asked for certification of an immediate appeal of the nonfinal order.

An attorney for BHN declined to comment. An attorney for the labels did not respond to a request for comment.

The labels are represented by Jonathan M. Sperling, Mitchell A. Kamin and Neema T. Sahni of Covington & Burling LLP; David C. Banker and Bryan D. Hull of Bush Ross PA; and Matthew J. Oppenheim, Scott A. Zebrak, Jeffrey M. Gould and Kerry M. Mustico of Oppenheim & Zebrak LLP.

BHN is represented by Michael S. Elkin, Thomas Patrick Lane, Seth E. Spitzer, Erin R. Ranahan, Shilpa A. Coorg and Jennifer A. Golinveaux of Winston & Strawn LLP and William J. Schifino Jr., John Schifino and Ryan Lee Hedstrom of Gunster.

The case is UMG Recordings Inc. et al. v. Bright House Networks LLC, case number 8:19-cv-00710, in the U.S. District Court for the Middle District of Florida.

--Editing by Alyssa Miller.
https://www.law360.com/telecom/artic...-to-11th-circ-





Dark Web Drugs Raid Leads to 179 Arrests
BBC

Police forces around the world have seized more than $6.5m (£5m) in cash and virtual currencies, as well as drugs and guns in a co-ordinated raid on dark web marketplaces.

Some 179 people were arrested across Europe and the US, and 500kg (1,102lb) of drugs and 64 guns confiscated.

The operation, known as DisrupTor, was a joint effort between the Department of Justice and Europol. It is believed that the criminals engaged in tens of thousands of sales of illicit goods and services across the US and Europe.

Of those arrested, 119 were based in the US, two in Canada, 42 in Germany, eight in the Netherlands, four in the UK, three in Austria and one in Sweden.

Police are getting better at targeting operations on the dark web - a part of the internet that is accessible only through specialised tools. This latest raid follows the takedown of the Wall Street Market last year, which was then thought to be the second-largest illegal online market on the dark web.

Edvardas Sileris, head of Europol's European Cybercrime Centre, said: "Law enforcement is most effective when working together, and today's announcement sends a strong message to criminals selling or buying illicit goods on the dark web: the hidden internet is no longer hidden and your anonymous activity is not anonymous."

"With the spike in opioid-related overdose deaths during the Covid-19 pandemic, we recognise that today's announcement is important and timely," said FBI director Christopher Wray.

Kacey Clark, a researcher at dark web monitoring specialist Digital Shadows, said: "This is another blow to organised cybercrime. The operation which took down the AlphaBay and Hansa marketplaces three years ago spooked cyber criminals, since it resulted in many follow-up prosecutions as law enforcement pieced evidence together - often many months later.

"Wall Street Market emerged from these ashes and was the most significant one in existence at the time. It would appear that law enforcement has followed the same pattern and that is why we are seeing arrests today."

In the short term there could be a big impact. This operation follows other recent incidents that have shaken trust in dark web stores.

One major marketplace recently vanished in an apparent exit scam: it's thought the administrators made off with members' funds, leaving customers' wallets empty and vendors needing to rebuild their shops somewhere else.

Three other major sites have also been linked to exit scams in the last 12 months. So, the police operation comes at a time when many people may already be questioning their shopping habits.

However, as we've seen in the past with big takedowns like AlphaBay, the lure of buying drugs and other illegal items on the internet means there will always be a market.

Other sites will try to boost their security and anonymity, and it's likely more marketplaces will sprout up, potentially using even more innovative techniques to make it harder for law enforcement to find them.
https://www.bbc.com/news/technology-54247529





China-Based Karaoke Machine Company Faces Prosecution for Copyright Infringement in Taiwan

Karaoke is popular in many Asian countries, including Taiwan, and the market for karaoke equipment is highly competitive. One of the industry's main costs is the royalties for the songs contained within a karaoke machine, which prompts many vendors to get around these costs by pirating music. Legislators and enforcement agencies are trying to eliminate this kind of crime, which grows more sophisticated with each technological development.

In Taiwan, Thunderstone Technology Ltd, a Chinese set-top box company, is facing multiple charges under Article 87 of the Copyright Act, which aims to combat the rise in online piracy. To block the use of piracy websites more comprehensively, the courts have recently begun to look at karaoke devices, which were brought within the provisions of Article 87(1)(8) of the Copyright Act when it was amended in May 2019.

Thunderstone is the largest karaoke service provider in China, and the person in charge of its Taiwanese branch was prosecuted in early September 2020. The company is charged with manufacturing karaoke devices that can access unauthorised cloud playlists, which is prohibited by the new law. According to the indictment, the machine itself is not in dispute; however, the applications installed on it can access websites that often contain unauthorised media content. Since Thunderstone installs these apps on the machines, prosecutors contend it is liable for the infringement.

“Karaoke devices that access cloud playlists of songs that have not been authorised represent a new type of copyright infringement,” said Mao Hao Gi, the supervisor of the copyright department at the Ministry of Economic Affairs. He states that there is no relevant domestic case law in this area so far.

Previous copyright infringement cases have mainly focused on the reproduction, public performance and public transmission of unauthorised music. However, the Taiwan Intellectual Property Office (TIPO), which is part of the Ministry of Economic Affairs, amended the provisions of Article 87(1(8)) of the Copyright Act last year, which was later approved by the Legislative Yuan.

Under the new law, compiling, selling or sharing links to unauthorised media content for profit, or selling set-top boxes that can connect to unauthorised sites, is considered infringement. In addition, these acts may lead to a sentence of up to two years' imprisonment or a fine of up to NT$500,000 under Article 93.

As authorising intellectual property across several countries can be complicated and contentious, the case has garnered much attention in the karaoke service industry. The finding of infringement, however, will ultimately turn on the evidence provided by both parties, so the case is bound to set off a wave of legal battles. If the person in charge of Thunderstone is found guilty, this lawsuit will be the first karaoke-related infringement case of its kind in Taiwan.

Jane CC Wang, Han-Wei Lin

Formosa Transnational
https://www.lexology.com/library/det...1-40f90162d05a





Facebook will let People Claim Ownership of Images and Issue Takedown Requests

The days of reposting images on Instagram might be over
Ashley Carman

Facebook is going to let people take more control over the images they own and where they end up. In an update to its rights management platform, the company is starting to work with certain partners today to give them the power to claim ownership over images and then moderate where those images show up across the Facebook platform, including on Instagram. The goal is to eventually open this feature up to everyone, as it already does with music and video rights. The company didn’t give a timeline on when it hopes to open this up more broadly.

Facebook didn’t disclose who its partners are, but this could theoretically mean that if a brand like National Geographic uploaded its photos to Facebook’s Rights Manager, it could then monitor where they show up, like on other brands’ Instagram pages. From there, the company could choose to let the images stay up, issue a takedown, which removes the infringing post entirely, or use a territorial block, meaning the post stays live but isn’t viewable in territories where the company’s copyright applies.

“We want to make sure that we understand the use case very, very well from that set of trusted partners before we expand it out because, as you can imagine, a tool like this is a pretty sensitive one and a pretty powerful one, and we want to make sure that we have guardrails in place to ensure that people are able to use it safely and properly,” says Dave Axelgard, product manager of creator and publisher experience at Facebook, in a comment to The Verge.

To claim their copyright, the image rights holder uploads a CSV file to Facebook’s Rights Manager that contains all of the image’s metadata. They’ll also specify where the copyright applies and can leave certain territories out. Once Rights Manager verifies that the metadata and image match, it’ll process that image and monitor where it shows up. If another person tries to claim ownership of the same image, the two parties can go back and forth a couple of times to dispute the claim, and Facebook will eventually award ownership to whoever filed first. If they then want to appeal that decision, they can use Facebook’s IP reporting forms.
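As a concrete illustration of that upload step, here is a minimal Python sketch that builds the kind of metadata CSV described above. The column names are hypothetical assumptions; the article doesn't specify Rights Manager's actual schema.

import csv

# Hypothetical schema: one image per row, plus the territories where the
# copyright claim applies (anything left out is excluded from the claim).
rows = [
    {"filename": "lion_pride.jpg",
     "title": "Lion pride at dusk",
     "ownership_territories": "US;CA;GB"},
]

with open("rights_manager_upload.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["filename", "title", "ownership_territories"])
    writer.writeheader()
    writer.writerows(rows)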

"The system defaults ownership to whoever filed first"

This update could potentially upend the way Instagram works right now, with accounts often sharing the same image and only tagging the presumed original rights holder as credit. Now those rights holders can take down the post without delay. Ultimately, creators might end up having to invest in their own photography or image creation to avoid having posts taken down. This might ultimately be what Instagram wants: to become a place where original images are shared versus regurgitated. This will be especially interesting to watch with memes.

Axelgard says they’re starting with a small group to “learn more and figure out the proper way to address specific use cases like memes.”

Part of that learning process means determining how much editing can happen to an image, like a meme, before it qualifies as a “match” with a rights holder’s image. Memes, of course, are edited constantly, so Facebook needs to determine whether it’ll let people remove those memes.

Copyright on Instagram has been an issue for years, and most recently, the company said websites need photographers’ permission to embed their posts. In the past, paparazzi have sued celebrities for uploading their photos to their own accounts, too. Basically, copyright gets messy, especially on Instagram, and the Rights Manager might streamline things while also transforming the platform.
https://www.theverge.com/2020/9/21/2...right-takedown





Windows XP Source Code Leaks Online

Windows Server 2003 source also included
Tom Warren

Microsoft’s source code for Windows XP and Windows Server 2003 has leaked online. Torrent files for both operating systems’ source code have been published on various file sharing sites this week. It’s the first time source code for Windows XP has leaked publicly, although the leaked files claim this code has been shared privately for years.

The Verge has verified the material is legitimate, and we’ve reached out to Microsoft to comment on the leak.

It’s unlikely that this latest source code leak will pose any significant threat to companies still stuck running Windows XP machines. Microsoft ended support for Windows XP back in 2014, although the company responded to the massive WannaCry malware attack with a highly unusual Windows XP patch in 2017.

While this is the first time Windows XP source code has appeared publicly, Microsoft does run a special Government Security Program (GSP) that allows governments and organizations controlled access to source code and other technical content.

This latest XP leak isn’t the first time Microsoft’s operating system source code has appeared online. At least 1GB of Windows 10-related source code leaked a few years ago, and Microsoft has even faced a series of Xbox-related source code leaks this year. Original Xbox and Windows NT 3.5 source code appeared online back in May, just weeks after Xbox Series X graphics source code was stolen and leaked online.

It’s not immediately clear how much of the Windows XP source code is included in this leak, but one Windows internals expert has already found Microsoft’s NetMeeting user certificate root signing keys.

Parts of the source code leak also reference Microsoft’s Windows CE operating systems, MS-DOS, and other leaked Microsoft material. Bizarrely, the files also include references to Bill Gates conspiracy theories, in a clear attempt to spread misinformation.
https://www.theverge.com/2020/9/25/2...urce-code-leak





Pandemic Accelerated Cord Cutting, Making 2020 the Worst-Ever Year for Pay TV
Sarah Perez

The pandemic has accelerated adoption of a number of technologies, from online grocery to multiplatform gaming to streaming services and more. But one industry that has not benefited is traditional pay TV. According to new research from eMarketer, the cable, satellite and telecom TV industry is on track to lose the most subscribers ever. This year, over 6 million U.S. households will cut the cord with pay TV, bringing the total number of cord-cutter households to 31.2 million.

The firm says that by 2024, the number will grow even further, reaching 46.6 million cord-cutter households in total, meaning more than a third of all U.S. households will no longer have pay TV.

Despite these significant declines, there are still more households that have a pay TV subscription than those that do not. Today, there are 77.6 million U.S. households that have cable, satellite or telecom TV packages. But that number has declined 7.5% year-over-year — its biggest-ever drop. The figure is also down from pay TV’s peak in 2014, the analysts said.

The pay TV losses, as you may expect, are due to the growing adoption of streaming services. But if anything, the pandemic has pushed forward the cord-cutting movement’s momentum as the health crisis contributed to a down economy and the loss of live sports during the first part of the year. These trends may have also encouraged more consumers to cut the cord than would have otherwise.

“Consumers are choosing to cut the cord because of high prices, especially compared with streaming alternatives,” said Eric Haggstrom, eMarketer forecasting analyst at Insider Intelligence. “The loss of live sports in H1 2020 contributed to further declines. While sports have returned, people will not return to their old cable or satellite plans,” he added.

Pay TV providers have been attempting to mitigate their losses by shifting their focus to more profitable internet packages, which help power the services that consumers are turning to, like Netflix and Hulu.

Related to the pay TV decline, the loss in TV viewership is also impacting the advertising industry.

Total TV ad spend will drop 15% in 2020 to $60 billion — the industry's lowest figure since 2011.

Some of this is pandemic-related, however, so TV ad dollars are expected to rebound somewhat in 2021. But overall, TV ad spending will remain below pre-pandemic levels through at least 2024, the analysts said.

But it may never get back to “normal” levels in the future.

“While TV ad spending will rebound in 2021 with the broader economy, it will never return to pre-pandemic levels,” Haggstrom stated. “Given trends in cord-cutting, audience erosion and growth in streaming video, more ad dollars will shift from TV to digital video in the future.”
https://techcrunch.com/2020/09/21/pa...ar-for-pay-tv/





T-Mobile Amassed “Unprecedented Concentration of Spectrum,” AT&T Complains

T-Mobile rivals say it has too much spectrum, urge FCC to impose limits.
Jon Brodkin

AT&T and Verizon are worried about T-Mobile's vast spectrum holdings and have asked the Federal Communications Commission to impose limits on the carrier's ability to obtain more spectrum licenses. Verizon kicked things off in August when it petitioned the FCC to reconsider its acceptance of a new lease that would give T-Mobile another 10MHz to 30MHz of spectrum in the 600MHz band in 204 counties. AT&T followed that up on Friday with a filing that supports many of the points made in Verizon's petition.

T-Mobile was once the smallest of four national carriers and complained that it didn't have enough low-band spectrum to match AT&T and Verizon's superior coverage. But T-Mobile surged past Sprint in recent years and then bought the company, making T-Mobile one of three big nationwide carriers along with AT&T and Verizon. T-Mobile also bolstered its low-band spectrum holdings by dominating a 600MHz auction in 2017.

"The combination of Sprint and T-Mobile has resulted in an unprecedented concentration of spectrum in the hands of one carrier," AT&T wrote in its filing to the FCC on Friday. "In fact, the combined company exceeds the Commission's spectrum screen, often by a wide margin, in Cellular Market Areas representing 82 percent of the US population, including all major markets."

T-Mobile's large spectrum holdings demand "changes in how the Commission addresses additional acquisitions of spectrum by that carrier," AT&T said in another part of the filing. AT&T also posted a blog on the topic, saying that "Additional spectrum leases with Dish will cause T-Mobile to exceed the 250MHz screen by as much as 136MHz."

FCC must explain itself, AT&T says

The FCC's spectrum screen is not a hard-and-fast limit but, rather, one data point the FCC uses in its public-interest analyses. T-Mobile's new leases in the 600MHz band are with entities called Channel 51 License Company and LB License Co, which do not provide any service over the spectrum.

Even without the pending transactions, Verizon told the FCC that "T-Mobile already holds licenses for 311MHz of low- and mid-band spectrum nationwide. That is more than the low- and mid-band spectrum licensed to Verizon and AT&T combined." Verizon said there is "a high likelihood" of competitive harms being caused by T-Mobile acquiring more spectrum, and it urged the FCC to "reject the arrangements or require T-Mobile to take action to mitigate those harms, including requiring spectrum divestitures."

Officially, AT&T said it "takes no position on whether T-Mobile's lease applications were properly accepted by the FCC," but the company said that the FCC "should provide an explanation of why it permitted T-Mobile to further exceed the spectrum screen."

"The Commission's failure to issue a written order in a transaction allowing spectrum aggregation in excess of the screen to this degree is highly unusual... Moreover, without a written order explaining its analysis, there is no evidence that the Commission has carefully attempted to evaluate the potential for competitive harm," AT&T wrote.

T-Mobile says Verizon is “disingenuous”

T-Mobile opposed Verizon's position in an August filing. "Verizon does not even try to demonstrate any harm to itself from the 600MHz spectrum leases—a requirement for petitioners who did not participate in an earlier stage of the proceeding," T-Mobile wrote. "As a company that elected not to participate in the Commission's 600MHz auction and currently touts its massive millimeter-wave spectrum holdings as support for 5G superiority, it is simply disingenuous for Verizon to now complain that T-Mobile's addition of 600MHz spectrum to its portfolio is somehow anticompetitive."

T-Mobile also said its expanded spectrum holdings will boost competition for home-Internet service, "as T-Mobile plans to use the leased spectrum to offer sorely needed in-home wireless broadband in competition with Verizon—particularly in rural areas."

Verizon had complained about T-Mobile's large holdings of low- and mid-band spectrum in big cities including Los Angeles, Chicago, San Francisco, Baltimore, Philadelphia, Boston, Dallas, Houston, and Atlanta. T-Mobile countered that Verizon actually has more spectrum overall in each of those cities when counting the high-band (millimeter-wave) spectrum Verizon uses in its 5G network. But as we've covered in numerous articles, millimeter-wave signals don't travel far and are easily blocked by walls and other obstacles, resulting in Verizon having a sparse 5G footprint. Recent tests by OpenSignal found that T-Mobile has significantly more 5G coverage than its rivals.
https://arstechnica.com/tech-policy/...rum-advantage/





T-Mobile Hits Back at AT&T and Verizon after Spectrum-Hoarding Accusations

Carrier battle heats up as FCC prepares to auction more spectrum.
Jon Brodkin

T-Mobile US CEO Mike Sievert yesterday fired back at AT&T and Verizon, saying the carriers' complaints about T-Mobile obtaining more spectrum licenses show that they are afraid of competition.

"The duopolists are scrambling to block this new competition any way they can... Suddenly in the unfamiliar position of not having a dominant stranglehold on the wireless market, and preferring not to meet the competitive challenge in the marketplace, AT&T and Verizon are urging the FCC to slow T-Mobile down and choke off our ability to compete fairly for added radio spectrum," Sievert wrote in a blog post.

As we wrote Monday, Verizon and AT&T have urged the Federal Communications Commission to impose limits on T-Mobile's ability to obtain more spectrum licenses. AT&T complained that T-Mobile's acquisition of Sprint allowed it to amass "an unprecedented concentration of spectrum."

Verizon in August petitioned the FCC to reconsider its acceptance of a new lease that would give T-Mobile another 10MHz to 30MHz of spectrum in the 600MHz band in 204 counties. But "the main event," according to Sievert, is the FCC's upcoming C-band spectrum auction in December, which will distribute 280MHz of spectrum in the 3.7–3.98GHz band.

"Anticipated to raise many billions of dollars for American taxpayers, the C-band is the largest pool of new spectrum expected any time in the near future," Sievert wrote. "The results of the auction will shape market competition and network advancement in the US for years to come."

Verizon told the FCC that "T-Mobile already holds licenses for 311MHz of low- and mid-band spectrum nationwide," more than AT&T and Verizon combined and well above the FCC's 250MHz "spectrum screen." The FCC uses this spectrum screen as one data point in its public-interest analyses, but the 250MHz screen isn't a hard-and-fast limit, and T-Mobile says it should be allowed to purchase more spectrum licenses.

Sievert pointed out that Verizon has more spectrum than T-Mobile when including high-band spectrum that isn't counted in the FCC's screen:

Citing the "spectrum screen", AT&T and Verizon would like to keep T-Mobile out of the bidding or at least put a damper on our ability to aggressively bid against them in this important auction, so that they can run the table unchecked. The "spectrum screen" was put in place years ago with good intention and broad support including from T-Mobile—to help ensure competition in the market when spectrum supply was limited—but is now being cited by AT&T and Verizon for reasons entirely opposite to this intention. And importantly, the "spectrum screen" pre-dates, and therefore ignores, certain 5G spectrum where Verizon already dominates. In fact, Verizon holds massive spectrum (far more than T-Mobile's entire portfolio of low, mid-, and high-band spectrum) in the "millimeter wave" bands, which are the cornerstone of their 5G strategy and which are not subject to the "spectrum screen".

Verizon has the most spectrum of any US carrier "by far" but "has the anti-competitive instincts and sheer audacity to complain that a much smaller T-Mobile has too much," Sievert wrote. "After holding massive spectrum advantages over T-Mobile and others for decades, Verizon and AT&T just can't stand the idea of anyone else being ahead of them or having a fair shot in an auction where they plan to use their financial might to do what they have always done—dominate."

Sievert also wrote that the 600MHz spectrum T-Mobile is leasing was previously controlled by AT&T. "AT&T had won at auction the spectrum that Columbia Capital is now leasing to T-Mobile and—guess what—AT&T decided it didn't want it and sold it to Columbia," Sievert wrote. "Verizon, the ringleader in opposing this lease, never bothered to even show up and bid for any 600MHz spectrum. In short, we have AT&T and Verizon seeking to block T-Mobile from using spectrum that AT&T decided to jettison, and Verizon had no interest in pursuing. Now both companies are seeking to block T-Mobile from putting this spectrum to use for the benefit of American consumers."

Fight to continue ahead of auction

The dueling statements suggest that the carrier battle will continue in the months ahead as the C-band auction draws near. "If T-Mobile participates in the auction, the Commission will almost certainly be forced to contend—for the first time—with the situation of applying its spectrum screen (and post-auction case-by-case review) to an acquisition of licenses by an applicant whose holdings already exceed the screen by a wide margin," AT&T said in its FCC filing last week.

The spectrum screen is likely to be raised from 250MHz to 345MHz after the auction to account for increased availability of licenses, AT&T said. T-Mobile already has more than 345MHz in some parts of the US, and there are "many other" US markets where T-Mobile would exceed 345MHz if it wins any spectrum in the auction, AT&T said.

The auction is scheduled to begin on December 8, and preliminary "short-form" applications were due yesterday. The FCC will review the applications and then release a list of bidders.
https://arstechnica.com/tech-policy/...g-accusations/





At this Point, 5G is a Bad Joke

Thinking of buying a new phone, just for high-speed mmWave 5G? Do yourself a favor: Don't.
Steven J. Vaughan-Nichols

Who doesn't want more bandwidth?

I sure do, and I currently have 300Mbps to my home office via Spectrum cable. What I really want is a Gigabit via fiber optic to my doorstep. Maybe I'll get it someday. But, what I do know for a fact is I'm not going to get Gigabit-per-second speeds from 5G. Not now, not tomorrow, not ever.

At the moment, the telecoms are telling you a lot of things in one ad after another that are just not true. I know – shocking news, right? But, even by their standards, 5G is pretty bogus.

Let's start with the name itself. There is no single "5G." There are, in fact, three different varieties, with very different kinds of performance.

First, there's low-band 5G, which is the one T-Mobile likes to talk about. That's the one offering broad coverage. A single tower can cover hundreds of square miles. It's no speed demon, but even 20+ Mbps speeds are a heck of a lot better than the 3Mbps speeds that rural DSL sticks you with. And, in ideal situations, it may give you 100+ Mbps speeds. (Back in my home county in West Virginia, population about 7,200, I'd have killed for speeds like that.)

Then, there's mid-band 5G. This runs between 1GHz and 6GHz and it has about half the coverage of 4G. You can hope to see speeds in the 200Mbps range with it. If you're in the United States, you rarely hear about this one. Only T-Mobile, which inherited Sprint's 2.5GHz mid-band 5G, is deploying it. It's going slowly, though, because a lot of its potential bandwidth is already being used.

But, what most people want, what most people lust for is 1Gbps speeds with less than 10 milliseconds of latency. According to a new NPD study, about 40% of iPhone and 33% of Android users are extremely or very interested in getting 5G. They want all that speed and they want it now. And 18% even claim they understand the difference between the 5G network band types.

I don't believe it. Because if they did, they wouldn't be in such a rush to get a 5G smartphone. You see, to get that kind of speed you must have mmWave 5G – and it comes with a lot of caveats.

First, it has a range, at best, of 150 meters. If you're driving, that means, until 5G base stations are everywhere, you're going to be losing your high-speed signal a lot. Practically speaking, for the next few years, if you're on the move, you're not going to be seeing high-speed 5G.

And, even if you are in range of a 5G base station, anything – and I mean anything – can block its high-frequency signal. Window glass, for instance, can stop it dead. So, you could have a 5G transceiver literally on your street corner and not be able to get a good signal.

How bad is this? NTT DoCoMo, Japan's top mobile phone service provider, is working on a new kind of window glass, just so their mmWave 5G will work. I don't know about you, but I don't want to shell out a few grand to replace my windows just to get my phone to work.

Let's say, though, that you've got a 5G phone and you're sure you can get 5G service – what kind of performance can you really expect? According to Washington Post tech columnist Geoffrey A. Fowler, you can expect to see a "diddly squat" 5G performance. That sounds about right.

And, technically speaking, what are diddly squat speeds? Try "AT&T with 32Mbps with the 5G phone and 34Mbps on the 4G one. On T-Mobile, I got 15Mbps on the 5G phone and 13Mbps on the 4G one." He wasn't able to check Verizon. That's not a typo, by the way. His 4G phone was faster than his 5G phone.

It wasn't just him – and he lives, after all, in that technology backwater known as the San Francisco Bay Area. He checked with several national firms tracking 5G performance. They found that 5G on all three major US telecom networks isn't that much faster than 4G.

Indeed, OpenSignal reports that US 5G users saw an average speed of 33.4Mbps. Better than 4G, yes, but not "Wow! This is great!" speeds most people seem to be dreaming of. It's also, I might add, much worse than any other country using 5G, with the exception of the United Kingdom.

You're also only going to get 5G about 20% of the time. That's generic 5G, which includes T-Mobile with its great coverage, not that ultrafast mmWave 5G, with its tiny coverage, of your dreams. Unless you live or work right next to an mmWave transceiver, you're simply not going to see those promised speeds or anything close to them.

The bottom line is, if you live in the fields and woods of rural America, you should definitely consider T-Mobile 600MHz 5G. But, if you're longing for super speed, forget about it. I don't think mmWave 5G will be worth the money until 2022.

Frankly, I'm not counting on it being widely available until 2025 at the earliest. And, come the day it is, we'll still not see real-world gigabit-per-second speeds.

Now, about 6G though….
https://www.computerworld.com/articl...-bad-joke.html





Amazon Details its Low-Bandwidth Sidewalk Neighborhood Network, Coming to Echo and Tile Devices Soon
Frederic Lardinois

Last year, Amazon announced its Sidewalk network, a new low-bandwidth, long-distance wireless protocol it developed to help connect smart devices inside and — maybe even more importantly — outside of your home. Sidewalk, which is somewhat akin to a mesh network that, with the right number of access points, could easily cover a whole neighborhood, is now getting closer to launch.

As Amazon announced today, compatible Echo devices will become Bluetooth bridges for the Sidewalk network later this year, and select Ring Floodlight and Spotlight Cams will also be part of the network. Because these are low-bandwidth connections, Amazon expects that users won’t mind sharing a small fraction of their bandwidth with their neighbors.

In addition, the company also announced that Tile will be the first third-party Sidewalk device to use the network when it launches its compatible tracker in the near future.

When Amazon first announced Sidewalk, it didn’t quite detail how the network would work. That’s also changing today, as the company published a whitepaper about how it will ensure privacy and security on this shared network. To talk about all of that — and Amazon’s overall vision for Sidewalk — I sat down with the general manager of Sidewalk, Manolo Arana.

Arana stressed that we shouldn’t look at Sidewalk as a competitor to Thread or other mesh networking protocols. “I want to make sure that you see that Sidewalk is actually not competing with Thread or any of the other mesh networks available,” he said. “And indeed, when you think about applications like ZigBee and Z-Wave, you can connect to Sidewalk the same way.” He noted that the team isn’t trying to replace existing protocols but just wants to create another transport mechanism — and a way to manage the radios that connect the devices.

And to kickstart the network and create enough of a presence to allow homeowners to connect, say, the smart lights at the edge of their properties, what better way for Amazon than to use its Echo family of devices?

“Echos are going to serve as bridges, that’s going to be a big thing for us,” Arana said. “You can imagine the number of customers that will benefit from that feature. And for us to be able to have that kind of service, that’s super important. And Tile is going to be the first edge device, the first Sidewalk-enabled device, and they’ll be able to track your valuables, your wallet, whatever it is that you love.”

And in many ways, that’s the promise of Sidewalk. You share a bit of bandwidth with your neighbors and in return, you get the ability to connect to a smart light in your garden that would otherwise be outside of your own network, for example, or get motion sensor alerts even when your home Wi-Fi is out, or to track your lost dog who is wearing a smart pet finder (something Amazon showed off when it first announced Sidewalk).

In today’s whitepaper, the team notes that Amazon will cap the bandwidth that is shared and provide a simple on/off control for compatible devices to give users the choice to participate. The total data a device can share is capped at 500MB a month, and the bandwidth between a bridge and the Sidewalk server in the cloud won’t exceed 80Kbps.

The overall architecture of the Sidewalk service is pretty straightforward. The endpoint, say a connected garden light, talks to the bridge (or gateway, as Amazon also calls it in its documentation). Those gateways will use Bluetooth Low Energy (BLE), Frequency Shift Keying (FSK) and LoRa in the 900 MHz band to connect to the devices on one side — and then talk to the Sidewalk Network server in the cloud on the other.

That network server — which is operated by Amazon — manages incoming packets and ensures that they come from authorized devices and services. The server then talks to the application server, which is either operated by Amazon or a third-party vendor.

All these communications are encrypted multiple times, and even Amazon won’t be able to know the commands or messages that are being passed through the network. There are three layers of encryption here. First, there’s the application layer that enables the communication between the application server and the endpoint. Then, there’s Sidewalk’s network layer, which protects the packets over the air. In addition, there’s the so-called Flex layer, which is added by the gateway and which provides the network server with what Amazon calls “a trusted reference of message-received time and adds an additional layer of packet confidentiality.”
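As a rough sketch of how that kind of onion-style layering keeps the middle of the network blind to the payload, here is a short Python example using the cryptography package. The key names and framing are assumptions inferred from the description above, not Amazon's actual protocol.

import json
import time
from cryptography.fernet import Fernet

# Illustrative keys; a real deployment would use Sidewalk's own key
# distribution rather than freshly generated Fernet keys.
app_key = Fernet.generate_key()   # shared by endpoint and application server
net_key = Fernet.generate_key()   # shared by endpoint and network server
flex_key = Fernet.generate_key()  # shared by gateway and network server

command = b"garden_light: ON"

# Endpoint: apply the application layer first, then the network layer.
packet = Fernet(net_key).encrypt(Fernet(app_key).encrypt(command))

# Gateway: wrap the over-the-air packet in the "Flex" layer, adding a
# trusted message-received time before forwarding to the cloud.
flex_frame = Fernet(flex_key).encrypt(
    json.dumps({"received_at": time.time(), "payload": packet.decode()}).encode())

# Network server: strips the Flex and network layers, but the innermost
# application payload stays opaque to it, mirroring the claim that even
# Amazon can't read the commands passing through the network.
inner = Fernet(net_key).decrypt(
    json.loads(Fernet(flex_key).decrypt(flex_frame))["payload"].encode())

# Only the application server holds app_key and can recover the command.
assert Fernet(app_key).decrypt(inner) == command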

Whatever routing information Amazon receives is purged every 24 hours, and device IDs are regularly rotated, alongside one-way hashing keys and other cryptographic techniques, to ensure data can’t be tied to individual customers.
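For flavor, here is one way rotating, one-way-hashed device IDs could work in Python. The whitepaper, as described here, states the goal rather than the mechanism, so the 24-hour period and HMAC construction below are assumptions.

import hashlib
import hmac
import time

def rotating_device_id(device_secret: bytes, period_hours: int = 24) -> str:
    # Number the current rotation window from the clock.
    window = int(time.time() // (period_hours * 3600))
    # One-way HMAC: the broadcast ID changes every window and can't be
    # reversed to recover the device secret or linked across windows.
    return hmac.new(device_secret, str(window).encode(), hashlib.sha256).hexdigest()[:16]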

Arana stressed that the team decided not to go public with this project until it had gone through extensive penetration tests, for example, and added kill switches and advanced security features. The team also developed novel techniques to provision devices inside the network securely.

He also noted that the silicon vendors who want to enable their products for Sidewalk have to go through an extensive testing procedure.

“When you look at the level of security requirements for the silicon to be part of Sidewalk, many of our silicon [vendors] haven’t been qualified, just because it needs to be the new version, it needs to have certain secure boot features and things. That has been quite an eye-opener for everyone, to see that IoT is definitely improving — and it is going to get to a super level — but there’s a lot of work to do and this is part of it. We took it on and embraced that security level to the maximum and the vendors have been extremely positive and forthcoming working with us.”

Among those vendors the team has been working with are Silicon Labs, Texas Instruments, Semtech and Nordic Semiconductor.

To test Sidewalk, Amazon partnered with the Red Cross to run a proof of concept implementation to help it track blood collection supplies between its distribution centers and donation sites.

“What we do with this is very simple tracking,” Arana said. “If you think about what they need, it is: did [the supplies] leave the building? Did they arrive at the other building? And it’s just an immense simplification for them in terms of the logistics and creates efficiencies in terms of the distribution of those [supplies].”

This is obviously not so much a consumer use case, but it does show the potential for Sidewalk to also take on more industrial use cases over time. As of now, that’s not necessarily what the team is focusing on, but Arana noted that there are a lot of use cases where Sidewalk may be able to replace cell networks to provide IoT connectivity for sensors and other small edge devices that don’t have large bandwidth requirements — and adding cellular connectivity also makes these devices more expensive to build.

Because Amazon is jumpstarting the network with its Echo and Ring Devices, chances are you’ll hear quite a bit more about Sidewalk in the near future.
https://techcrunch.com/2020/09/21/am...-devices-soon/





How Iranian Diaspora is Using Old-School Tech to Fight Internet Shutdown at Home

With the threat of another big internet blackout looming, companies are creating workarounds for Iranians using satellite dishes.
Mehr Nadeem

One November morning last year, Mehdi Yahyanejad listened to a voicemail in his Los Angeles office: “I’m contacting you from the city of Tehran,” said the voice. “This was the first time I’ve experienced an internet shutdown. … It feels like I’m in a prison.”

A few weeks earlier, Iran’s largest mobile networks and internet providers went offline. Amid weeks of growing anti-regime protests, Iranian authorities imposed the longest internet shutdown in the country’s history, effectively cutting off external communication for over 80 million Iranians. In an unprecedented crackdown, regime forces killed more than 300 protesters and arrested over 7,000 people. When access was finally restored on November 23, nearly half the country was still unable to come online.

Nine months after the November blackout, Iranians still live in fear of another all-out shutdown. As authorities tighten their hold on internet access, diaspora-led companies are filling the gap for Iranians who are seeking a way to bypass censors. The circumvention tools, created largely by diaspora entrepreneurs, are becoming increasingly critical as they face a crackdown at home and the bite of American-led sanctions online.

On November 15, as Iranian authorities first moved to induce the digital blackout, 44-year-old Yahyanejad raced against the clock in Los Angeles to make sure that people back home had downloaded his satellite file-casting application Toosheh. “It was a very small window,” says Yahyanejad. “Once they were fully disconnected, I wasn’t sure they’d be able to download the software.”

Launched by Yahyanejad in 2016, the technology aggregates uncensored content, like news articles, YouTube videos, and podcasts, and sends it to Iranian homes directly via satellite TV. When Yahyanejad first began developing Toosheh in 2013, an estimated 70% of Iranian households owned a satellite dish, while around 20% had access to the internet. Even as internet access has grown, state censorship means Toosheh’s satellite technology is a much more reliable source for uncensored content. Iranians can install Toosheh’s satellite channel and receive a daily dispatch in the form of a file package of up to 8 gigabytes. Once a user downloads the app, the satellite transfers circumvent the internet entirely.

Yahyanejad says Toosheh gained nearly 100,000 new Iranian users in November 2019. In the absence of an internet connection, it became the only way for many users to access news from the outside world. The voice on Toosheh’s voicemail belonged to one such user, a 34-year-old high school principal in Tehran who downloaded emergency VPN and proxy tools delivered to him through the satellite service.

Having navigated extensive cyber censorship for over a decade, Iranians are tech savvy and adept at nimbly crossing firewalls, using proxies and foreign circumvention tools. “It’s a constant cat-and-mouse game,” says Fereidoon Bashar, executive director of ASL19, a Canadian organization working to help Iranians bypass internet censorship. The group often works in tandem with Yahyanejad to distribute proxy tools.

Bashar says Iranians adapt quickly to ever-changing institutionalized control online. But the last five years of Iranian president Hassan Rouhani’s rule have seen a tighter grip on internet connections. Site blocking and calculated internet outages have become easier to enforce: the regime has reduced Iran’s dependence on global networks by pushing a local intranet, with the aim to keep online traffic inside the country. With strict American sanctions that threaten hefty fines for companies interfacing with Iran, foreign tech companies limit ordinary Iranians’ ability to purchase reliable proxies out of an abundance of caution. Riddled with insecurity, the local VPN black market is not a reliable option for those trying to avoid government attention.

But even as internet access grew incrementally difficult over the years, no one saw the November blackout coming. “An internet shutdown was previously viewed as a kind of dystopian political campaign,” says Kaveh Azarhoosh, an internet policy researcher. In November, the worst-case scenario for Iran’s censorship suddenly became a reality.

In the early days of the protest, Toosheh created a special “Protest News Package.” Every night, after aggregating content from over 200 publications, Toosheh delivered digital bundles containing clips of protests occurring in different cities: Tabriz, Qom, Shiraz, Mashhad, and others. It also contained slides about how to stay safe during a protest; crucial news coverage from banned sites, like the New York Times, Voice of America Persian, and Deutsche Welle; and a curated compilation of tweets from Iranian politicians. These packages weren’t just bringing news of the outside world to Iran: they kept Iranians informed about what was happening inside their own country too.

Yahyanejad, a physicist by training, left Iran in 1997 to pursue a Ph.D. from the Massachusetts Institute of Technology. “I’ve lived in Iran, and I’ve gone to school and college there,” he explains. “I know that this repressive government exists because they are able to control the flow of information.” He says he’s always had an interest in limiting their control. “I want,” he says, “to see democracy in Iran in my lifetime.”

In 2006, Yahyanejad launched a Reddit-like forum called Balatarin. “Its popularity surprised me,” he says. After the site posted a translated rumor about the supreme leader’s death, when he hadn’t been seen in public for two months, it was swiftly blocked by Iran. The moment was a turning point for Yahyanejad, who says, “I made a conscious decision to keep the platform open at a personal cost.”

The Iranian blocks on Balatarin inspired Yahyanejad to explore censorship circumvention. He launched the satellite app Toosheh in 2016. “I went on BBC Persian’s ‘Newshour,’ and as soon as I talked about it, people started downloading and testing it immediately,” he says.

Yahyanejad finds himself among a cohort of diaspora Iranians working to fight the regime’s censors. ASL19, the Canadian technology group, collaborated with him to deliver proxy tools to over half a million Iranians during November’s shutdown. ASL19’s Bashar, who left Iran in the early 2000s before the tumultuous Green Movement, says diaspora Iranians are stepping into the field because Iranians “risk harsh conditions, imprisonment, and long sentences” if they’re caught creating circumvention tools inside the country.

But even outside of Iran, outspoken diaspora activists like Bashar and Yahyanejad face immense risks. In June, it was reported that an Iranian activist named Ruhollah Zam was sentenced to death in Iran after creating a popular anti-government Telegram news channel that he operated while living in exile in France. The channel, with 1.4 million followers, was shut down shortly after. For Yahyanejad, who knew of Ruhollah through the diaspora community, the ordeal was a shot across the bow. “I can never go back to Iran,” Yahyanejad admits. “But I see myself as part of the movement.”

Yahyanejad’s work has become crucial for Iranians, even after November’s shutdown. On July 14, following news that Iran’s Supreme Court had upheld the death sentences of three young anti-regime protesters, Iranians took to banned social media sites in an unprecedented protest, with over 6 million posts under the hashtag #DontExecute. Hours after #DontExecute began trending online, digital rights organization NetBlocks monitored disruptions in the network. Panicked users, still reeling from November’s shutdown, speculated another block was imminent. Luckily, an all-out ban didn’t occur, but the renewed threat of one was enough to increase Toosheh’s usage by more than 50% in the days that followed.

For Yahyanejad, who has been actively fighting the Iranian regime’s censorship for over a decade, the past year is proof that his work is even more necessary. “Internet shutdowns are psychological tools designed to terrify populations, to convince them that they are voiceless,” he says. “Fighting shutdowns is important so that you can show people that they are not alone and that there are others.”
https://restofworld.org/2020/cat-and-mouse-censorship/





As Trump Holds Back, Tech Firms Step in on Election Security
Mary Clare Jalonick

Adam Schiff was in the audience at the 2018 Aspen Security Forum when a Microsoft executive mentioned an attempted hacking of three politicians up for reelection. It was the first that Schiff, then the top Democrat on the House Intelligence Committee, had ever heard of it.

Schiff said he thought it was “odd” that Congress hadn’t been briefed. He got in touch with high-ranking officials in the intelligence agencies, and they didn’t know about it, either. It turned out that Russian hackers had unsuccessfully tried to infiltrate the Senate computer network of then-Sen. Claire McCaskill, D-Mo., and other unidentified candidates.

Two years later, Schiff says that breakdown is still emblematic of the disjointed effort among government agencies, Congress and private companies as they try to identify and address foreign election interference. But this year, with President Donald Trump adamant that Russia is not interfering and his administration often trying to block what Congress learns about election threats, it’s those private companies that often are being called upon to fill the breach.

Lawmakers welcome the help from the private sector and say the companies have become increasingly forthcoming, but it’s a haphazard way to get information. It allows the companies to control much of what the public knows, and some are more cooperative than others.

“If a company wants to publicize it, that’s great,” says Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee. “But what happens when they don’t want to bring it to the attention of the government?”

That’s what happened in 2016, when Russia spread disinformation through social media, including Facebook, Twitter and YouTube. Those companies were slow to recognize the problem and they initially balked at government requests for more information. But after Congress pushed them publicly, they gradually became more cooperative.

Now, Facebook and Twitter give regular briefings to the congressional intelligence committees, issue frequent reports about malicious activity and are part of a group that regularly meets with law enforcement and intelligence officials in the administration.

Microsoft, which is part of that group, announced last week that Russian hackers had tried to breach computers at more than 200 organizations, including political campaigns and their consultants. Most of the hacking attempts by Russian, Chinese and Iranian agents were halted by Microsoft security software and the targets notified. But the company would not say which candidates or entities may have been breached.

Lawmakers say the private sector can only do so much.

“It’s certainly important that the social media companies participate and cooperate, which they have not always done in the past, but that does not in any way replace the analysis that is done by the intelligence community, and I believe that analysis should be shared with Congress,” says Sen. Susan Collins, R-Maine, a member of the Senate Intelligence Committee.

That relationship between intelligence agencies and Congress has grown strained since Trump took office. He has doubted the agencies’ conclusions about Russian interference in 2016, and he has fired, demoted and criticized officials who shared information he didn’t like.

The current director of national intelligence, John Ratcliffe, a close Trump ally, tried to end most in-person election security briefings — a decision he later reversed after criticism from lawmakers from both parties. But Ratcliffe maintains that his office will not provide “all member” briefings for all lawmakers, citing what he says were leaks from some of those meetings this year.

Lawmakers say that in restricting what’s given to Congress, the administration is effectively restricting what it tells the public about election security and misinformation. That threatens to sow confusion, just as foreign adversaries such as Russia are hoping for.

Schiff, now chairman of the House Intelligence Committee, has pressured the companies to act more quickly, including taking down misinformation before it goes viral, not after. He has particular concerns about Google, which owns YouTube, and says it has been less transparent than others. Schiff and other lawmakers have stepped up concerns about doctored videos and foreign-owned news outlets spreading fake news on the video platform.

At a hearing with tech companies in June, Schiff pressed Google, saying that it “has essentially adopted a strategy of keeping its head down and avoiding attention to its platform while others draw heat.”

Richard Salgado, Google’s director for law enforcement and information security, told Schiff: “I certainly hope that is not the perception. If it is, it is a misperception, Mr. Chairman.”

Google has made some disclosures, including recently revealing a Chinese effort to target Trump campaign staffers and an Iranian group’s attempt to target the Biden campaign. But the company gave little detail on the attacks, including when they took place or how many were targeted.

Still, the companies have stepped up in many cases.

Facebook and Microsoft have been making disclosures to the public while also working behind the scenes with the federal government and the intelligence committees. Facebook issues a monthly release on foreign and domestic election activity, and Microsoft has publicly disclosed more than a dozen instances of threat activity since Schiff was caught unaware at the Aspen event in 2018.

The executive who revealed the Russian activity at that event, Microsoft’s Tom Burt, says the company has learned to be more proactive with the federal government. He says the attempted hackings were not something he had planned to announce at the security forum, but he answered honestly when asked a question by the moderator. Today, Burt says the company gives federal and congressional authorities a heads-up when they have announcements about election interference.

Foreign attackers “are persistent, they are skilled, they are super well-resourced, and they are going to continue to try and interfere with the electoral process and try to sow distrust with the American people,” Burt said.

As lawmakers pursue other channels of information, there are still places where the private sector cannot help. Florida Rep. Stephanie Murphy, a Democrat, has been fighting for more than a year to have the administration publicly identify two Florida counties where Russian hackers gained access to voter databases before the 2016 election. People living in those counties are still unaware.

“The only way you can fight that disinformation is with transparency, and the U.S. government has to be transparent about the attacks on our democracy by providing the public with the information they need to push back against this foreign interference,” Murphy said. “I think maybe companies are accustomed to disclosing when they have had data breaches, and that is why you are seeing corporate America lead in providing the American public with information about meddling in our election.”

___

Associated Press writer Frank Bajak in Boston contributed to this report.
https://apnews.com/680a74243257c46b99b9cdc862d7cab0





Pro-Trump Youth Group Enlists Teens in Secretive Campaign Likened to a ‘Troll Farm,’ Prompting Rebuke by Facebook and Twitter
Isaac Stanley-Becker

One tweet claimed coronavirus numbers were intentionally inflated, adding, “It’s hard to know what to believe.” Another warned, “Don’t trust Dr. Fauci.”

A Facebook comment argued that mail-in ballots “will lead to fraud for this election,” while an Instagram comment amplified the erroneous claim that 28 million ballots went missing in the past four elections.

The messages have been emanating in recent months from the accounts of young people in Arizona seemingly expressing their own views - standing up for President Donald Trump in a battleground state and echoing talking points from his reelection campaign.

Far from representing a genuine social media groundswell, however, the posts are the product of a sprawling yet secretive campaign that experts say evades the guardrails put in place by social media companies to limit online disinformation of the sort used by Russia during the 2016 campaign.

Teenagers, some of them minors, are being paid to pump out the messages at the direction of Turning Point Action, an affiliate of Turning Point USA, a prominent conservative youth organization based in Phoenix, according to four people with independent knowledge of the effort. Their descriptions were confirmed by detailed notes from relatives of one of the teenagers who recorded conversations with him about the efforts.

The campaign draws on the spam-like behavior of bots and trolls, with the same or similar language posted repeatedly across social media. But it is carried out, at least in part, by humans paid to use their own accounts, though nowhere disclosing their relationship with Turning Point Action or the digital firm brought in to oversee the day-to-day activity. One user included a link to Turning Point USA’s website in his Twitter profile until The Washington Post began asking questions about the activity.

In response to questions from The Post, Twitter on Tuesday suspended at least 20 accounts involved in the activity for “platform manipulation and spam.” Facebook also deactivated a number of accounts as part of what the company said is an ongoing investigation.

The effort generated thousands of posts this summer on Twitter, Facebook and Instagram, according to an examination by The Post and an assessment by an independent specialist in data science. The nearly 4,500 tweets with identical content identified in the analysis probably represent a fraction of the overall output.

The months-long effort by the tax-exempt nonprofit is among the most ambitious domestic influence campaigns uncovered this election cycle, said experts tracking the evolution of deceptive online tactics.

“In 2016, there were Macedonian teenagers interfering in the election by running a troll farm and writing salacious articles for money,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab. “In this election, the troll farm is in Phoenix.”

The effort, Brookie added, illustrates “that the scale and scope of domestic disinformation is far greater than anything a foreign adversary could do to us.”

Turning Point Action, whose 26-year-old leader, Charlie Kirk, delivered the opening speech at this year’s Republican National Convention, issued a statement from the group’s field director defending the social media campaign and saying any comparison to a troll farm was a “gross mischaracterization.”

“This is sincere political activism conducted by real people who passionately hold the beliefs they describe online, not an anonymous troll farm in Russia,” the field director, Austin Smith, said in the statement.

He said the operation reflected an attempt by Turning Point Action to maintain its advocacy despite the challenges presented by the coronavirus pandemic, which has curtailed many traditional political events.

“Like everyone else, Turning Point Action’s plans for nationwide in-person events and activities were completely disrupted by the pandemic,” Smith said. “Many positions TPA had planned for in field work were going to be completely cut, but TPA managed to reimagine these roles and working with our marketing partners, transitioned some to a virtual and online activist model.”

The group declined to make Kirk available for an interview.

The online salvo targeted prominent Democratic politicians and news organizations on social media. It mainly took the form of replies to their posts, part of a bid to reorient political conversation.

The messages - some of them false and some simply partisan - were parceled out in precise increments as directed by the effort’s leaders, according to the people with knowledge of the highly coordinated activity, most of whom spoke on the condition of anonymity to protect the privacy of minors carrying out the work.

One parent of two teenagers involved in the effort, Robert Jason Noonan, said his 16- and 17-year-old daughters were being paid by Turning Point to push “conservative points of view and values” on social media. He said they have been working with the group since about June, adding in an interview, “The job is theirs until they want to quit or until the election.”

Four years ago, the Kremlin-backed Internet Research Agency amplified Turning Point’s right-wing memes as part of Moscow’s sweeping interference aimed at boosting Trump, according to expert assessments prepared for the Senate Intelligence Committee. One report pointed specifically to the use of Turning Point content as evidence of Russia’s “deep knowledge of American culture, media, and influencers.”

Now, some technology industry experts contend that the effort this year by Turning Point shows how domestic groups are not just producing eye-catching online material but also increasingly using social media to spread it in disruptive or misleading ways.

“It sounds like the Russians, but instead coming from Americans,” said Jacob Ratkiewicz, a software engineer at Google whose academic research, as a PhD student at Indiana University at Bloomington, addressed the political abuse of social media.

To some participants, the undertaking feels very different. Notes from the recorded conversation with a 16-year-old participant - the authenticity of which was confirmed by The Post - indicate that “He said it’s really fun and he works with his friends.” The participant, through family members, declined to comment.

The users active in the campaign, some of whom were using their real names, identified themselves only as Trump supporters and young Republicans. One simply described herself as a high school sophomore interested in softball and cheerleading.

Noonan, 46, said “some of the comments may go too far” but cast the activity as a response to similar exaggerations by Democrats. “Liberals say things that are way out there, and conservatives say things that are sometimes way out there, or don’t have enough evidence.”

Those recruited to participate in the campaign were lifting the language from a shared online document, according to Noonan and other people familiar with the setup. They posted the same lines a limited number of times to avoid automated detection by the technology companies, these people said. They also were instructed to edit the beginning and ending of each snippet to differentiate the posts slightly, according to the notes from the recorded conversation with a participant.

Noonan said his daughters sometimes work from an office in the Phoenix area and are classified as independent contractors, not earning “horrible money” but also not making minimum wage. Relatives of another person involved said the minor is paid an hourly rate and can score bonuses if his posts spur higher engagement.

Smith, as part of written responses to The Post, deferred specific questions about the financial setup to a “marketing partner” called Rally Forge, which he said was running the program for Turning Point.

Jake Hoffman, president and chief executive of the Phoenix-based digital marketing firm, confirmed that the online workers were classified as contractors but declined to comment further on “private employment matters.” He did not respond to a question about the office setup.

Addressing the use of centralized documents to prepare the messages, Hoffman said in written responses, “Every working team within my agency works out of dozens of collaborative documents every day, as is common with all dynamic marketing agencies or campaign phone banks for example.”

The messages have appeared mainly as replies to news articles about politics and public health posted on social media. They seek to cast doubt on the integrity of the electoral process, asserting that Democrats are using mail balloting to steal the election - “thwarting the will of the American people,” they alleged.

The posts also play down the threat from the novel coronavirus, which claimed the life of Turning Point’s co-founder Bill Montgomery in July. One post, which was spread across social media dozens of times, suggested baselessly that the Centers for Disease Control and Prevention is inflating the death toll from the disease. (Most experts say deaths are probably undercounted.) Another pushed for schools to reopen, saying, “President Trump is not worried because younger people do very well while dealing with covid.”

Much of the blitz was aimed squarely at Joe Biden, the Democratic presidential nominee. The former vice president, asserted one message, “is being controlled by behind the scenes individuals who want to take America down the dangerous path towards socialism.”

By seeking to rebut mainstream news articles, the operation illustrates the extent to which some online political activism is designed to discredit the media.

Facebook and Twitter have pledged to crack down on what they call coordinated inauthentic behavior (Facebook's term) and platform manipulation and spam (Twitter's). But those efforts falter in the face of organizations willing to pay users to post on their own accounts, maintaining the appearance of independence and authenticity.

In removing accounts Tuesday, Twitter pointed to policies specifying, “You can’t artificially amplify or disrupt conversations through the use of multiple accounts.” That includes “coordinating with or compensating others to engage in artificial engagement or amplification, even if the people involved use only one account,” according to Twitter.

On Twitter, the nearly verbatim language emanated from about two dozen accounts through the summer. The exact number of people posting the messages was not clear. Smith, the Turning Point field director, said, “The number fluctuates and many have gone back to school.” Hoffman, in an email, said, “Dozens of young people have been excited to share their beliefs on social media.”

The Rally Forge leader is a city council member in Queen Creek, Ariz., and a candidate for the state legislature.

Some of the users at points listed their location as Gilbert, Ariz., a suburb of Phoenix, according to screen shots reviewed by The Post. Some followed each other on Twitter, while most were following only a list of prominent politicians and media outlets.

One was followed by a former member of Congress, Republican Tim Huelskamp of Kansas, who is on the Catholics for Trump advisory board. Huelskamp said he could not recall what led him to follow the account and was not familiar with the effort by Turning Point. But he praised the group for “doing a great job of messaging, particularly with younger folks.”

Several teenagers were using their real names or variations of their names, while other accounts active in posting the pro-Trump messaging appeared to be operating under pseudonyms. The Post’s review found that some participants seem to maintain multiple accounts on Facebook, which is a violation of the company’s policies.

Explaining why the users do not disclose that they are being paid as political activists, Hoffman said they are “using their own personal profiles and sharing their content that reflects their values and beliefs.” He pointed to the risk of online bullying, as well as physical harm, in explaining why “we’ve left how much personal and professional information they wish to share up to them.”

The accounts on Twitter posted 4,401 tweets with identical content, not including slight variations of the language, according to Pik-Mai Hui, a PhD student in informatics at Indiana University at Bloomington who performed an analysis of the content at the request of The Post. The analysis found characteristics strongly suggestive of bots - such as double commas and dangling commas that often appear with automatic scripts - though at least some of the accounts were being operated by humans.
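
To give a concrete sense of what such an analysis involves, here is a minimal sketch in TypeScript of how identical copy-pasted content might be clustered across accounts. The data shapes, thresholds, and names are invented for illustration; the study's actual methodology was not published in this form.

interface Tweet {
  account: string;
  text: string;
}

// Normalize lightly so the small edits participants made to the start and
// end of each snippet still map to the same cluster.
function normalize(text: string): string {
  return text.toLowerCase().replace(/[^a-z0-9 ]/g, "").replace(/\s+/g, " ").trim();
}

// Group tweets by normalized text and keep clusters posted by several
// distinct accounts: the copy-paste signature of a coordinated campaign.
function findCoordinatedClusters(
  tweets: Tweet[],
  minAccounts = 3,
): Map<string, Set<string>> {
  const clusters = new Map<string, Set<string>>();
  for (const t of tweets) {
    const key = normalize(t.text);
    const accounts = clusters.get(key) ?? new Set<string>();
    accounts.add(t.account);
    clusters.set(key, accounts);
  }
  for (const [key, accounts] of clusters) {
    if (accounts.size < minAccounts) clusters.delete(key);
  }
  return clusters;
}

// Punctuation artifacts such as a double comma or a dangling comma at the
// end of a post can hint at text pasted from a shared script.
function hasScriptArtifacts(text: string): boolean {
  return /,,/.test(text) || /,\s*$/.test(text);
}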

While the messaging appears designed to seed pro-Trump content across social media, said Kathleen Hall Jamieson, a professor of communication at the University of Pennsylvania’s Annenberg School for Communication, the act of repeated posting also helps instill the ideas among those performing the activity. In addition, it familiarizes the users with the ways of online combat, she said, and makes their accounts valuable assets should different needs arise as the election nears.

“There is a logic to having an army locally situated in a battleground state, having them up and online and ready to be deployed,” Jamieson said.

Turning Point Action debuted as a 501(c)(4) organization last year, with more leeway in undertaking political advocacy than is afforded to the original group, which is barred from campaign activity as a 501(c)(3). Both nonprofits are required only to disclose the salaries of directors, officers and key employees, said Marc Owens, a tax attorney with Loeb & Loeb.

Turning Point dates to 2012, when Montgomery, retired from a career in marketing, heard Kirk, 18 at the time, deliver a speech in the Chicago suburbs at Benedictine University’s “Youth Government Day.” He called the address “practically Reaganesque,” according to a 2015 profile in Crain’s Chicago Business newspaper, and urged Kirk, a former Eagle Scout, to put off college in favor of full-time political activism. Kirk became the face of Turning Point, while Montgomery was “the old guy who keeps it all legal,” he told the business weekly.

The organization amassed prominent and wealthy conservative allies, including Richard Grenell, the former ambassador to Germany and acting director of national intelligence, and Foster Friess, who made a fortune in mutual funds and helps bankroll conservative and Christian causes. Both men sit on Turning Point’s honorary board.

Its standing rose significantly as Trump came to power. Turning Point USA brought in nearly $80,000 in contributions and other funds in the fiscal year ending June 2013, according to IRS filings, a fraction of the $8 million it reported for 2017 and $11 million for 2018.

The group, which describes itself as the “largest and fastest-growing youth organization in America,” claims to have a presence on more than 2,000 college and high school campuses. It hosts activist conferences and runs an alumni program. It also maintains a “Professor Watchlist” designed to expose instructors who “discriminate against conservative students, promote anti-American values and advance leftist propaganda in the classroom.”

Kirk, the group’s president and co-founder, has been embraced and promoted by Trump and his family. Speaking at Turning Point USA’s Teen Student Action Summit last year, Trump hailed Kirk for building a “movement unlike anything in the history of our nation.” A quote attributed to Donald Trump Jr., who has appeared at numerous Turning Point events, features prominently on the group’s website: “I’m convinced that the work by Turning Point USA and Charlie Kirk will win back the future of America.”

Kirk has returned the praise. In his speech at last month’s Republican nominating convention, he extolled Trump as the “bodyguard of Western civilization.”

Equally impassioned rhetoric marked the campaign on social media, with posts asserting that Black Lives Matter protesters were “fascist groups . . . terrorizing American citizens” and decrying the “BLM Marxist agenda,” among other incendiary language.

Noonan said that his wife, a hairstylist, monitors the online activity of their daughters more closely than he does, and that their work is often a topic of conversation when the family convenes in the evening.

“We are Trump supporters, but one of the things my wife and I have been very consistent on is to always understand both sides and make decisions from there,” the father said.
https://www.adn.com/nation-world/202...k-and-twitter/





YouTube’s Plot to Silence Conspiracy Theories

From flat-earthers to QAnon to Covid quackery, the video giant is awash in misinformation. Can AI keep the lunatic fringe from going viral?
Clive Thompson

Mark Sargent saw instantly that his situation had changed for the worse. A voluble, white-haired 52-year-old, Sargent is a flat-earth evangelist who lives on Whidbey Island in Washington state and drives a Chrysler with the vanity plate “ITSFLAT.” But he's well known around the globe, at least among those who don't believe they are living on one. That's thanks to YouTube, which was the on-ramp both to his flat-earth ideas and to his subsequent international stardom.

Formerly a tech-support guy and competitive virtual pinball player, Sargent had long been intrigued by conspiracy theories, ranging from UFOs to Bigfoot to Elvis' immortality. He believed some (Bigfoot) and doubted others (“Is Elvis still alive? Probably not. He died on the toilet with a whole bunch of drugs in his system”). Then, in 2014, he stumbled upon his first flat-earth video on YouTube.

He couldn't stop thinking about it. In February 2015 he began uploading his own musings, in a series called “Flat Earth Clues.” As he has reiterated in a sprawling corpus of more than 1,600 videos, our planet is not a ball floating in space; it's a flat, Truman Show-like terrarium. Scientists who insist otherwise are wrong, NASA is outright lying, and the government dares not level with you, because then it would have to admit that a higher power (aliens? God? Sargent's not sure about this part) built our terrarium world.

Sargent's videos are intentionally lo-fi affairs. There's often a slide show that might include images of Copernicus (deluded), astronauts in space (faked), or Antarctica (made off-limits by a cabal of governments to hide Earth's edge), which appear onscreen as he speaks in a chill, avuncular voice-over.

Sargent's top YouTube video received nearly 1.2 million views, and he has amassed 89,200 followers—hardly epic by modern influencer standards but solid enough to earn a living from the preroll ads, as well as paid speaking and conference gigs.

Crucial to his success, he says, was YouTube's recommendation system, the feature that promotes videos for you to watch on the homepage or in the “Up Next” column to the right of whatever you're watching. “We were recommended constantly,” he tells me. YouTube's algorithms, he says, figured out that “people getting into flat earth apparently go down this rabbit hole, and so we're just gonna keep recommending.”

Scholars who study conspiracy theories were realizing the same thing. YouTube was a gateway drug. One academic who interviewed attendees of a flat-earth convention found that, almost to a person, they'd discovered the subculture via YouTube recommendations. And while one might shrug at this as marginal weirdness—They think the Earth is flat, who cares? Enjoy the crazy, folks—the scholarly literature finds that conspiratorial thinking often colonizes the mind. Start with flat earth, and you may soon believe Sandy Hook was a false-flag operation or that vaccines cause autism or that Q's warnings about Democrat pedophiles are a serious matter. Once you convince yourself that well-documented facts about the solar system are a fraud, why believe well-documented facts about anything? Maybe the most trustworthy people are the outsiders, those who dare to challenge the conventions and who—as Sargent understood—would be far less powerful without YouTube's algorithms amplifying them.

For four years, Sargent's flat-earth videos got a steady stream of traffic from YouTube's algorithms. Then, in January 2019, the flow of new viewers suddenly slowed to a trickle. His videos weren't being recommended anywhere near as often. When he spoke to his flat-earth peers online, they all said the same thing. New folks weren't clicking. What's more, Sargent discovered, someone—or something—was watching his lectures and making new decisions: The YouTube algorithm that had previously recommended other conspiracies was now more often pushing mainstream videos posted by CBS, ABC, or Jimmy Kimmel Live, including ones that debunked or mocked conspiracist ideas. YouTube wasn't deleting Sargent's content, but it was no longer boosting it. And when attention is currency, that's nearly the same thing.

“You will never see flat-earth videos recommended to you, basically ever,” he told me in dismay when we first spoke in April 2020. It was as if YouTube had flipped a switch.

In a way, it had. Scores of them, really—a small army of algorithmic tweaks, deployed beginning in 2019. Sargent's was among the first accounts to feel the effects of a grand YouTube project to teach its recommendation AI how to recognize the conspiratorial mindset and demote it. It was a complex feat of engineering, and it worked; the algorithm is less likely now to promote misinformation. But in a country where conspiracies are recommended everywhere—including by the president himself—even the best AI can't fix what's broken.

When Google bought YouTube in 2006, it was a woolly startup with a DIY premise: “Broadcast Yourself.” YouTube's staff back then wasn't thinking much about conspiracy theories or disinformation. The big concern, as an early employee told me, was what they referred to internally as “boobs and beheadings”—uploads of pornography and gruesome al Qaeda actions.

From the first, though, YouTube executives intuited that recommendations could fuel long binges of video surfing. By 2010, the site was suggesting videos using collaborative filtering: If you watched video A, and lots of people who watched A also watched B, then YouTube would recommend you watch B too. This simple system also up-ranked videos that got lots of views, under the assumption that it was a signal of value. That methodology tended to create winner-take-all dynamics that resulted in “Gangnam Style”-type virality; lesser-known uploads seldom got a chance.
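
As a rough illustration of item-to-item collaborative filtering of this kind, here is a sketch in TypeScript. It is illustrative only, not YouTube's code; the data structures and names are invented.

type VideoId = string;

// coWatch.get(a).get(b) = number of sessions in which both a and b were watched.
const coWatch = new Map<VideoId, Map<VideoId, number>>();

// Update the co-occurrence counts from one user's viewing session.
function recordSession(watched: VideoId[]): void {
  for (const a of watched) {
    for (const b of watched) {
      if (a === b) continue;
      const row = coWatch.get(a) ?? new Map<VideoId, number>();
      row.set(b, (row.get(b) ?? 0) + 1);
      coWatch.set(a, row);
    }
  }
}

// "If you watched A, and lots of people who watched A also watched B,
// recommend B." Sorting by raw counts also up-ranks whatever is already
// popular, which is what produced the winner-take-all dynamics.
function recommendAfter(video: VideoId, k = 5): VideoId[] {
  const row = coWatch.get(video);
  if (!row) return [];
  return [...row.entries()]
    .sort((x, y) => y[1] - x[1])
    .slice(0, k)
    .map(([id]) => id);
}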

In 2011, Google tapped Cristos Goodrow, who was then director of engineering, to oversee YouTube's search engine and recommendation system. Goodrow noticed another problem caused by YouTube's focus on views, which was that it encouraged creators to use misleading tactics—like racy thumbnails—to dupe people into clicking. Even if a viewer immediately bailed, the click would goose the view count higher, boosting the video's recommendations.

Goodrow and his team decided to stop ranking videos based on clicks. Instead, they focused on “watch time,” or how long viewers stayed with a video; it seemed to them a far better metric of genuine interest. By 2015, they would also introduce neural-net models to craft recommendations. The model would take your actions (whether you'd finished a video, say, or hit Like) and blend that with other information it had gleaned (your search history, geographic region, gender, and age, for example; a user's “watch history” became increasingly significant too). Then the model would predict which videos you'd be most likely to actually watch, and presto: recommendations, more personalized than ever.

The recommendation system became increasingly crucial to YouTube's frenetic push for growth. In 2012, YouTube's vice president of product, Shishir Mehrotra, declared that by the end of 2016 the site would hit a billion hours of watch time per day. It was an audacious goal; at the time, people were watching YouTube for only 100 million hours a day, compared to more than 160 million on Facebook and 5 billion on TV. So Goodrow and the engineers began thirstily hunting for any tiny tweak that would bump watch time upward. By 2014, when Susan Wojcicki took over as CEO, the billion-hour goal “was a religion at YouTube, to the exclusion of nearly all else,” as she later told the venture capitalist John Doerr. She kept the goal in place.

The algorithmic tweaks worked. People spent more and more time on the site, and the new code meant small creators and niche content were finding their audience. It was during this period that Sargent saw his first flat-earth video. And it wasn't just flat-earthers. All kinds of misinformation, some of it dangerous, rose to the top of watchers' feeds. Teenage boys followed recommendations to far-right white supremacists and Gamergate conspiracies; the elderly got stuck in loops about government mind control; anti-vaccine falsehoods found adherents. In Brazil, a marginal lawmaker named Jair Bolsonaro rose from obscurity to prominence in part by posting YouTube videos that falsely claimed left-wing scholars were using “gay kits” to convert kids to homosexuality.

In the hothouse of the 2016 US election season, observers argued that YouTube's recommendations were funneling voters into ever-more-extreme content. Conspiracy thinkers and right-wing agitators uploaded false rumors about Hillary Clinton's imminent mental collapse and involvement in a nonexistent pizzeria pedophile ring, then watched, delightedly, as their videos lifted off in YouTube's Up Next column. A former Google engineer named Guillaume Chaslot coded a web-scraper program to see, among other things, whether YouTube's algorithm had a political tilt. He found that recommendations heavily favored Trump as well as anti-Clinton material. The watch time system, in his view, was optimizing for whoever was most willing to tell fantastic lies.

As 2016 wore on and the billion-hour deadline loomed, the engineers went into overdrive. Recommendations had become the thrumming engine of YouTube, responsible for an astonishing 70 percent of all its watch time. In turn, YouTube became a key source of revenue in the Alphabet empire.

Goodrow hit the target: On October 22, 2016, a few weeks before the presidential election, users watched 1 billion hours of videos on YouTube.

After the 2016 election, the tech industry came in for a reckoning. Critics laced into Facebook's algorithm for boosting conspiratorial rants and hammered Twitter for letting in phalanxes of Russian bots. Scrutiny of YouTube emerged a bit later. In 2018 a UC Berkeley computer scientist named Hany Farid teamed up with Guillaume Chaslot to run his scraper again. This time, they ran the program daily for 15 months, looking specifically for how often YouTube recommended conspiracy videos. They found the frequency rose throughout the year; at the peak, nearly one in 10 videos recommended were conspiracist fare.

“It turns out that human nature is awful,” Farid tells me, “and the algorithms have figured this out, and that's what drives engagement.” As Micah Schaffer, who worked at YouTube from 2006 to 2009, told me, “It really is they are addicted to that traffic.”

YouTube executives deny that the billion-hour push led to a banquet of conspiracies. “We don't see evidence that extreme content or misinformation is on average more engaging, or generates more viewership, than anything else,” Goodrow said. (YouTube also challenged Farid and Chaslot's research, saying it “does not accurately reflect how YouTube's recommendations work or how people watch and interact with YouTube.”) But, within YouTube, the principle of “Broadcast Yourself,” without restriction, was colliding with concerns about safety and misinformation.

On October 1, 2017, when a man used an arsenal of weapons to fire into a crowd of people at a concert in Las Vegas, YouTube users immediately began uploading false-flag videos claiming the shooting was orchestrated to foment opposition to the Second Amendment.

Just 12 hours after the shooting, Geoff Samek arrived for his first day as a product manager at YouTube. For several days he and his team were run ragged trying to identify fabulist videos and delete them. He was, he told me, “surprised” by how little was in place to manage a crisis like this. (When I asked him what the experience felt like, he sent me a clip of Tim Robbins being screamed at as a new mailroom hire in The Hudsucker Proxy.) The recommendation system was apparently making things worse; as BuzzFeed reporters found, even three days after the shooting the system was still promoting videos like “PROOF: MEDIA & LAW ENFORCEMENT ARE LYING.”

“I can say it was a challenging first day,” Samek told me dryly. “Frankly, I don't think our site was performing super well for misinformation ... I think that kicked off a lot of things for us, and it was a turning point.”

YouTube already had policies forbidding certain types of content, like pornography or speech encouraging violence. To hunt down and delete these videos, the company used AI “classifiers”—code that automatically detects potentially policy-violating videos by analyzing, among other signals, the headlines or the words spoken in a video (which YouTube generates using its automatic speech-to-text software). They also had human moderators who reviewed videos the AI flagged for deletion.
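
A pipeline of the shape described, with a classifier scoring uploads and a human review queue behind it, might look like the following sketch. The scoring model is treated as a black box, and every name here is an assumption, not YouTube's actual code.

interface Upload {
  id: string;
  title: string;
  transcript: string; // produced by the speech-to-text system
}

// Stand-in for a trained model; the real classifiers are not public.
declare function policyViolationScore(upload: Upload): number; // 0..1

const REVIEW_THRESHOLD = 0.8;

// Route likely violations to human moderators rather than deleting
// automatically; moderators make the final call.
function triageForReview(uploads: Upload[]): Upload[] {
  return uploads.filter((u) => policyViolationScore(u) >= REVIEW_THRESHOLD);
}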

After the Las Vegas shooting, executives began focusing more on the challenge. Google's content moderators grew to 10,000, and YouTube created an “intelligence desk” of people who hunt for new trends in disinformation and other “inappropriate content.” YouTube's definition of hate speech was expanded to include Alex Jones' claim that the murders at Sandy Hook Elementary School never occurred. The site had already created a “breaking-news shelf” that would run on the homepage and showcase links to content from news sources that Google News had previously vetted. The goal, as Neal Mohan, YouTube's chief product officer, noted, was not just to delete the obviously bad stuff but to boost reliable, mainstream sources. Internally, they began to refer to this strategy as a set of R's: “remove” violating material and “raise up” quality stuff.

But what about content that wasn't quite bad enough to be deleted? Like alleged conspiracies or dubious information that doesn't advocate violence or promote “dangerous remedies or cures” or otherwise explicitly violate policies? Those videos wouldn't be removed by moderators or the content-blocking AI. And yet, some executives wondered if they were complicit by promoting them at all. “We noticed that some people were watching things that we weren't happy with them watching,” says Johanna Wright, one of YouTube's vice presidents of product management, “like flat-earth videos.” This was what executives began calling “borderline” content. “It's near the policy but not against our policies,” as Wright said.

By early 2018, YouTube executives decided they wanted to tackle the borderline material too. It would require adding a third R to their strategy—“reduce.” They'd need to engineer a new AI system that would recognize conspiracy content and misinformation and down-rank it.

In February, I visited YouTube's headquarters in San Bruno, California. Goodrow had promised to show me the secret of that new AI.

It was the day after the Iowa caucuses, where a vote-counting app had failed miserably. The news cycle was spinning crazily, but inside YouTube the mood seemed calm. We filed into a conference room, and Goodrow plunked into a chair and opened his laptop. He has close-cropped hair and sported a normcore middle-aged-dad style, wearing a zip-up black sweater over beige khakis. A mathematician by training, Goodrow can be intense; he was a dogged advocate of the billion-hour project and neurotically checked view stats every single day. Last winter he mounted a brief and failed run in the Democratic primary for his San Mateo County congressional district. Goodrow and I were joined by Andre Rohe, a dry-witted German who came to YouTube in 2015 to be head of Discovery engineering after three years heading Google News.

Rohe beckoned me to his screen. He and Goodrow seemed slightly nervous. The inner workings of any system at Google are closely guarded secrets. Engineers worry that if they reveal too much about how any algorithm works—particularly one designed to down-rank content—outsiders could learn to outwit it. For the first time, Rohe and Goodrow were preparing to reveal some details of the recommendation revamp to a reporter.

To create an AI classifier that can recognize borderline video content, you need to train the AI with many thousands of examples. To get those training videos, YouTube would have to ask hundreds of ordinary humans to decide what looks dodgy and then feed their evaluations and those videos to the AI, so it could learn to recognize what dodgy looks like. That raised a fundamental question: What is “borderline” content? It's one thing to ask random people to identify an image of a cat or a crosswalk—something a Trump supporter, a Black Lives Matter activist, and even a QAnon adherent could all agree on. But if they wanted their human evaluators to recognize something subtler—like whether a video on Freemasons is a study of the group's history or a fantasy about how they secretly run government today—they would need to provide guidance.

YouTube assembled a team to figure this out. Many of its members came from the policy department, which creates and continually updates the rules about the content YouTube bans outright. They developed a set of about three dozen questions designed to help a human decide whether content moved significantly in the direction of those banned areas, but didn't quite get there.

These questions were, in essence, the wireframe of the human judgment that would become the AI's smarts. These hidden inner workings were listed on Rohe's screen. He and Goodrow allowed me to take notes but wouldn't give me a copy to take away.

One question asks whether a video appears to “encourage harmful or risky behavior to others” or to viewers themselves. To help narrow down what type of content constitutes “harmful or risky behavior,” there is a set of check boxes pointing out various well-known self-harms YouTube has grappled with—like “pro ana” videos that encourage anorexic behaviors, or graphic images of self-harm.

“If you start by just asking, ‘Is this harmful misinformation?’ then everybody has a different definition of what's harmful,” Goodrow said. “But then you say, ‘OK, let's try to move it more into the concrete, specific realm by saying, is it about self-harm? What kinds of harm is it?’ Then you tend to get higher agreement and better results.” There's also an open-ended box that an evaluator can write in to explain their thinking.

Another question asks the evaluators to determine whether a video is “intolerant of a group” based on race, religion, sexual orientation, gender, national origin, or veteran status. But there's a supplementary question: “Is the video satire?” YouTube's policies prohibit hate speech and spreading lies about ethnic groups, for example, but they can permit content that mocks that behavior by mimicking it.

Rohe pointed to another category, one that asks whether a video is “inaccurate, misleading, or deceptive.” It then goes on to ask the evaluator to check all the possible categories of factual nonsense that might apply, like “unsubstantiated conspiracy theories,” “demonstratively inaccurate information,” “deceptive content,” “urban legend,” “fictional story or myth,” or “contradicts well-established expert consensus.” The evaluators each spend about 5 minutes assessing each video, on top of the time it takes to watch it, and are encouraged to do research to help understand its context.

Rohe and Goodrow said they had tried to reduce potential bias among the human evaluators by choosing people who were diverse in terms of age, geography, gender, and race. They also made sure each video was rated by up to nine separate evaluators so that the results were subject to the “wisdom of a group,” as Goodrow put it. Any videos with medical subjects were rated by a team of doctors, not laypeople.

This diversity among the evaluators' views can pose problems for training the AI, though. If evaluators are too divided over whether a video is deceptive or factually misleading, then their responses won't provide a clear signal. As Woojin Kim, a vice president of product management, pointed out, “If we're talking about a contentious political topic, where you do have multiple perspectives ... those would oftentimes end up being marked not as borderline content.” When the AI classifier was trained on those examples, it absorbed the same divided mentality. If it encountered a new video with the same characteristics, it would, metaphorically, shrug and not classify it as borderline either.

The evaluators processed tens of thousands of videos, enough for YouTube engineers to begin training the system. The AI would take data from the human evaluations—that a video called “Moon Landing Hoax—Wires Footage” is an “unsubstantiated conspiracy theory,” for example—and learn to associate it with features of that video: the text under the title that the creator uses to describe the video (“We can see the wires, people!”); the comments (“It's 2017 and people still believe in moon landings ... help ... help”); the transcript (“the astronaut is getting up with the wire taking the weight”); and, especially, the title. The visual content of the video itself, interestingly, often wasn't a very useful signal. As with videos about virtually any topic, misinformation is often conveyed by someone simply speaking to the camera or (as with Sargent's flat-earth material) over a procession of static images.
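
In outline, assembling a training example from those signals might look like this sketch, with invented field names standing in for YouTube's internal data:

interface VideoFeatures {
  title: string;
  description: string; // the text under the title
  comments: string[];
  transcript: string;
}

interface TrainingExample {
  features: VideoFeatures;
  // Fraction of human evaluators who judged the video borderline,
  // e.g. an "unsubstantiated conspiracy theory".
  borderlineLabel: number; // 0..1
}

// Pair a video's text signals with the aggregated evaluator judgments.
function toTrainingExample(
  features: VideoFeatures,
  evaluatorVotes: boolean[],
): TrainingExample {
  const positives = evaluatorVotes.filter(Boolean).length;
  return { features, borderlineLabel: positives / evaluatorVotes.length };
}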

Another useful training feature for the AI was “co-watches,” or the fare users typically watch before or after the video in question. In a sense, it was a measure of the company a video keeps. If National Geographic posts a video titled “Round Earth vs. Flat Earth,” an AI might recognize it as having words very similar to a flat-earth video. But the co-watches would likely be an interview with the astrophysicist Neil deGrasse Tyson or a scientist's TED talk, while a flat-earth conspiracy video might pair with a rant on the CIA's UFO cover-up.

The AI classifier does not produce a binary answer; it doesn't say whether a video is or isn't “borderline.” Instead, it generates a score, a mathematical weight that represents how likely the video is to approach the borderline. That weight is incorporated into the overall recommendation AI and becomes one of the many signals used when recommending the video to a particular user.
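
One plausible way to fold such a score into ranking, shown purely as an illustration with an invented combination rule:

interface Candidate {
  id: string;
  relevance: number;       // personalized watch prediction, 0..1
  borderlineScore: number; // classifier output, 0..1
}

// The video stays on the platform; its ranking weight shrinks as the
// borderline score rises.
const BORDERLINE_PENALTY = 0.9;

function finalScore(c: Candidate): number {
  return c.relevance * (1 - BORDERLINE_PENALTY * c.borderlineScore);
}

function rankRecommendations(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => finalScore(b) - finalScore(a));
}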

In January 2019, YouTube began rolling out the system. That's when Mark Sargent noticed his flat-earth views take a nose dive. Other types of content were getting down-ranked, too, like moon-landing conspiracies or videos perseverating on chemtrails. Over the next few months, Goodrow and Rohe pushed out more than 30 refinements to the system that they say increased its accuracy. By the summer, YouTube was publicly declaring success: It had reduced by 50 percent the watch time of borderline content that came from recommendations. By December it reported a reduction of 70 percent.

The company won't release its internal data, so it's impossible to confirm the accuracy of its claims. But there are several outside indications that the system has had an effect. One is that consumers and creators of borderline stuff complain that their favorite material is rarely boosted any more. “Wow has anybody else noticed how hard it is to find ‘Conspiracy Theory’ stuff on YouTube lately? And that you easily find videos ‘debunking’ those instead?” one comment noted in February of this year. “Oh yes, youtubes algorithm is smashing it for them,” another replied.

Then there's the academic research. Berkeley professor Hany Farid and his team found that the frequency with which YouTube recommended conspiracy videos began to fall significantly in early 2019, precisely when YouTube was beginning its updates. By early 2020, his analysis found, those recommendations had gone down from a 2018 peak by 40 percent. Farid noticed that some channels weren't merely reduced; they all but vanished from recommendations. Indeed, before YouTube made its switch, he'd found that 10 channels—including that of David Icke, the British writer who argues that reptilians walk among us—accounted for 20 percent of all conspiracy recommendations (as Farid defines them); afterward, he found that recommendations for those sites “basically went to zero.”

Another study that somewhat backs up YouTube's claims was conducted by the computer scientist Mark Ledwich and Anna Zaitsev, a postdoctoral scholar and lecturer at Berkeley. They analyzed YouTube recommendations, looking specifically at 816 political channels and categorizing them into different ideological groups such as “Partisan Left,” “Libertarian,” and “White Identitarian.” They found that YouTube recommendations mostly now guide viewers of political content to the mainstream. The channels they grouped under “Social Justice,” on the far left, lost a third of their traffic to mainstream sources like CNN; conspiracy channels and most on the reactionary right—like “White Identitarian” and “Religious Conservative”—saw the majority of their traffic slough off to commercial right-wing channels, with Fox News being the biggest beneficiary.

If Zaitsev and Ledwich's analysis of YouTube “mainstreaming” traffic holds up—and it's certainly a direction that YouTube itself endorses—it would fit into a historic pattern. As law professor Tim Wu noted in his book The Master Switch, new media tend to start out in a Wild West, then clean up, put on a suit, and consolidate in a cautious center. Radio, for example, began as a chaos of small operators proud to say anything, then gradually coagulated into a small number of mammoth networks aimed mostly at pleasing the mainstream.

For critics like Farid, though, YouTube has not gone far enough, quickly enough. “Shame on YouTube,” he told me. “It was only after how many years of this nonsense did they finally respond? After public pressure just got to be so much they couldn't deal with it.”

Even the executives who set up the new “reduce” system told me it wasn't perfect. Which makes some critics wonder: Why not just shut down the recommendation system entirely? Micah Schaffer, the former YouTube employee, says, “At some point, if you can't do this responsibly, you need to not do it.” As another former YouTube employee noted, determined creators are adept at gaming any system YouTube puts up, like “the velociraptor and the fence.”

Still, the system appeared to be working, mostly. It was a real, if modest, improvement. But then the floodgates opened again. As the winter of 2020 turned into a spring of pandemic, a summer of activism, and another norm-shattering election season, it looked as if the recommendation engine might be the least of YouTube's problems.

A month after I visited YouTube, the new coronavirus pandemic was in full swing. It had itself become a fertile field for new conspiracy theories. Videos claimed that 5G towers caused Covid-19; Mark Sargent had interrupted his flat-earth musings to upload a few videos in which he said the pandemic lockdown was an ominous preparation for social control. He told me the government would use a vaccine to inject everyone with an invisible mark, and “then it goes to the whole Christian mark of the beast,” the prophecy from the Book of Revelation.

On March 30, I talked to Mohan again, but this time on Google Hangouts. He was ensconced in a wood-paneled room at his home, clad in a blue polo shirt, while the faint sounds of his children echoed from elsewhere in the house.

YouTube, he told me, had been moving aggressively to clamp down on disinformation about the pandemic and to counteract it. The platform created an “info panel” to run under any video mentioning Covid-19, linking to the Centers for Disease Control and other global and local health officials. By late August, these panels had received more than 300 billion impressions. YouTube had been removing videos with dangerous “medical” information every day, including those promoting “harmful cures,” as Mohan says, and videos telling people to flout stay-at-home rules. To raise up useful information, the company arranged for several popular YouTubers to interview Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases who had become a regular presence on TV and a voice of scientific reason.

Mohan had also been meeting with YouTube's “intel desk,” whose researchers had been trying to root out the latest Covid conspiracies. Goodrow and Rohe would use those videos to help update their AI classifier at least once a week, so it could help down-rank new strains of borderline Covid content.

But even as we spoke, YouTube videos with wild-eyed claims were being uploaded and amassing views. An American chiropractor named John Bergman got more than a million views for videos suggesting that hand sanitizer didn't work and urging people to use essential oils and vitamin C to treat the contagion. On April 16, a conspiracy channel named the Next News Network uploaded a video claiming that Fauci was a “criminal,” that coronavirus was a false-flag operation to impose “mandatory vaccines,” and that if anyone refused to be vaccinated, they'd be “shot in the head.” It racked up nearly 7 million views in two weeks, before YouTube finally took it down. Then came ever more unhinged uploads, including the infamous “Plandemic” video—alleging a conspiracy to push a vaccine—or the so-called “white coat summit” of July 27, in which a group of doctors assembled in front of the Supreme Court to falsely claim that hydroxychloroquine could cure Covid and that masks were unnecessary.

YouTube was playing a by-now familiar game of social media whack-a-mole. A video that violated YouTube's rules would emerge and rapidly gain views, then YouTube would take it down. But it wasn't clear that recommendations were key to these sudden viral spikes. On August 14, a 90-minute video by Millie Weaver, a contributor to the far-right conspiracist site Infowars, went online, filled with claims of a deep state arrayed against President Trump. It was linked and shared in a number of right-wing circles. Dozens of Reddit threads passed it on (“Watch it before it's gone,” one redditor wrote), and it was shared more than 53,000 times on Facebook, as well as on scores of right-wing YouTube channels, including by many followers of QAnon, one of the fastest-growing—and most dangerous—conspiracy theories in the nation. YouTube took it down a day later, saying it violated its hate-speech rules. But in those 24 hours, it amassed over a million views.

This old-fashioned spread—a mix of organic link-sharing and astroturfed, bot-propelled promotion—is powerful and, say observers, may sideline any changes to YouTube's recommendation system. It also suggests that users are adapting and that the recommendation system may be less important, for good and ill, to the spread of misinformation today. In a study for the think tank Data & Society, the researcher Becca Lewis mapped out the galaxy of right-wing commentators on YouTube who routinely spread borderline material. Many of those creators, she says, have built their often massive audiences not only through YouTube recommendations but also via networking. In their videos they'll give shout-outs to one another and hype each other's work, much as YouTubers all enthusiastically promoted Millie Weaver's fabricated musings.

“If YouTube completely took away the recommendations algorithm tomorrow, I don't think the extremist problem would be solved. Because they're just entrenched,” Lewis tells me. “These people have these intense fandoms at this point. I don't know what the answer is.”

One of the former Google engineers I spoke to agreed: “Now that society is so polarized, I'm not sure YouTube alone can do much,” as the engineer noted. “People who have been radicalized over the past few years aren't getting unradicalized. The time to do this was years ago.”
https://www.wired.com/story/youtube-...racy-theories/





Facebook's Former Director of Monetization Says Facebook Intentionally Made its Product as Addictive as Cigarettes — and Now he Fears it could Cause 'Civil War'
Aaron Holmes

• A former Facebook director lashed out at the company's business model during testimony before Congress on Thursday, saying Facebook's focus on driving engagement outweighs its consideration of potential harms.
• In his prepared remarks before a House committee hearing, Facebook's former director of monetization Tim Kendall said Facebook "took a page from Big Tobacco’s playbook, working to make our offering addictive at the outset."
• Now, Kendall says he's worried Facebook is contributing to extremism in the US and is "pushing ourselves to the brink of a civil war."

Facebook's former director of monetization Tim Kendall says he had a role in making Facebook as addictive as cigarettes — and worries that Facebook could be just as damaging to its users.

In testimony before the House Consumer Protection and Commerce Subcommittee published Thursday, Kendall accused Facebook of building algorithms that have facilitated the spread of misinformation, encouraged divisive rhetoric, and laid the groundwork for a "mental health crisis."

"We took a page from Big Tobacco's playbook, working to make our offering addictive at the outset," Kendall said in prepared remarks submitted to lawmakers ahead of Thursday's hearing. "The social media services that I and others have built over the past 15 years have served to tear people apart with alarming speed and intensity. At the very least, we have eroded our collective understanding — at worst, I fear we are pushing ourselves to the brink of a civil war."

Kendall, who is now CEO of the time management app Moment, joined Facebook as its first director of monetization in 2006 and remained in the role until 2010. Kendall said he initially thought his role would involve balancing Facebook's interest in revenue with the wellbeing of its users, but that Facebook was interested in profits over everything.

"We sought to mine as much attention as humanly possible and turn into historically unprecedented profits," Kendall said.

Facebook's algorithm rewards shocking content and divisive rhetoric because extreme emotional responses hold users' attention and generate more ad revenue, Kendall told lawmakers.

"These algorithms have brought out the worst in us. They've literally rewired our brains so that we're detached from reality and immersed in tribalism," he said.

Facebook did not immediately respond to Business Insider's request for comment in response to Kendall's testimony.

Kendall isn't the first former Facebook employee to raise concerns about the platform's capacity to sow division. A Facebook engineer quit in protest last month, accusing the company of "profiting off hate." More recently, a fired Facebook data scientist reportedly wrote a whistleblower memo accusing the company of failing to direct enough resources to fighting misinformation.

Facebook has also faced activist campaigns urging it to more robustly crack down on misinformation and hate speech. More than 1,000 companies joined an advertiser boycott of the platform this summer led by civil rights activists, and this month, influencers staged a day of protest over hate speech on Facebook and Instagram.

At the subcommittee hearing Kendall testified at on Thursday, lawmakers said the spread of misinformation on Facebook could be cause for future government regulation of social media platforms.

"Driven by profit and power and in the face of obvious harm, these mega-companies successfully have convinced governments all over the world to essentially leave them alone ... big tech has helped divide our nations and has stoked genocide in others," said Rep. Jan Schakowsky, an Illinois Democrat who chairs the House Consumer Protection and Commerce Subcommittee.

Meanwhile, Republicans on the subcommittee focused primarily on claims of anti-conservative bias, a frequent talking point of President Donald Trump. They pointed to social media platforms' occasional fact-checking of Trump's posts that violate their policies on spreading misinformation as censorship. While Trump has frequently railed against these fact-checks, Republicans have provided minimal evidence of broader censorship of conservative ideas.

"Free speech is increasingly under attack," said Rep. Cathy Rodgers of Washington, the ranking Republican on the subcommittee. "I am extremely concerned when platforms apply inconsistent content moderation policies for their own purposes ... there's no clearer example of a platform using its power for political purposes than Twitter, singling out President Trump."

Republicans and Democrats alike said they supported reforming Section 230, a law that makes social media platforms immune to legal liability for the content of users' posts. Attorney General William Barr announced Wednesday that the Department of Justice has urged Congress to amend the law, but did not immediately elaborate on how it should be changed.
https://www.businessinsider.com/form...kendall-2020-9





Facebook’s Oversight Board Won’t Launch in Time to Oversee the Election — and Activists Aren’t Happy

‘This is an emergency response’
Russell Brandom

It’s been more than a year since Facebook pledged to launch its independent Oversight Board — but with the US election approaching fast, tech critics are getting antsy.

On Friday, a coalition of academics and legal experts announced the formation of the “Real Facebook Oversight Board,” an informal group that will publicly call out Facebook’s slow action in advance of the election. Its members include early Facebook investor Roger McNamee, Harvard professor Shoshana Zuboff, and leaders of the #StopHateForProfit campaign, which organized a boycott by Facebook advertisers earlier this year.

The group plans to hold regular “board meetings” to discuss failures of platform policy, with the first scheduled to be hosted by Kara Swisher on October 1. In a statement, Zuboff described Facebook as “a roiling cauldron of lies, violence and danger destabilizing elections and democratic governance around the world.”

The group also includes Guardian journalist Carole Cadwalladr, known for her work on the Cambridge Analytica story. “This is an emergency response,” Cadwalladr told NBC News this morning. “We know there are going to be a series of incidents leading up to the election and beyond in which Facebook is crucial.”

The board will hold no power and is largely meant as a symbolic gesture. Still, it has placed new pressure on Facebook’s Oversight Board, which was initially scheduled for launch this summer. Oversight Board members now estimate that the project will launch in October. That will be too late to hear cases related to the US election, given the months-long process for fully adjudicating a case.

“We are currently testing the newly deployed technical systems that will allow users to appeal and the Board to review cases,” the Oversight Board said. “Assuming those tests go to plan, we expect to open user appeals in mid to late October. Building a process that is thorough, principled and globally effective takes time and our members have been working aggressively to launch as soon as possible.”

Still, the delay has brought forth a new wave of skepticism about the project.

Accountable Tech, a nonprofit that has long criticized Facebook for its moderation failures, said the failure to oversee campaign content underscored the broader failure of the project. “If Oversight Board members want to enact meaningful change, rather than continuing to prop up Facebook’s Potemkin court, they should demand real authority or resign and speak out,” the group said in a statement.
https://www.theverge.com/2020/9/25/2...cism-activists





Senate’s Encryption Backdoor Bill is ‘Dangerous for Americans,’ Says Rep. Lofgren
Zack Whittaker

A Senate bill that would compel tech companies to build backdoors to allow law enforcement access to encrypted devices and data would be “very dangerous” for Americans, said a leading House Democrat.

Law enforcement frequently spars with tech companies over their use of strong encryption, which protects user data from hackers and theft but which, the government says, makes it harder to catch criminals accused of serious crimes. Tech companies like Apple and Google have in recent years doubled down on their security efforts by securing data with encryption that even they cannot unlock.

Senate Republicans in June introduced their latest “lawful access” bill, renewing previous efforts to force tech companies to allow law enforcement access to a user’s data when presented with a court order.

“It’s dangerous for Americans, because it will be hacked, it will be utilized, and there’s no way to make it secure,” Rep. Zoe Lofgren, whose congressional seat covers much of Silicon Valley, told TechCrunch at Disrupt 2020. “If we eliminate encryption, we’re just opening ourselves up to massive hacking and disruption,” she said.

Lofgren’s comments echo those of critics and security experts, who have long criticized efforts to undermine encryption, arguing that there is no way to build a backdoor for law enforcement that could not also be exploited by hackers.

Several previous efforts by lawmakers to weaken and undermine encryption have failed. Currently, law enforcement has to use existing tools and techniques to find weaknesses in phones and computers. The FBI claimed for years that it had thousands of devices that it couldn’t get into, but admitted in 2018 that it repeatedly overstated the number of encrypted devices it had and the number of investigations that were negatively impacted as a result.

Lofgren has served in Congress since 1995, through the first so-called “Crypto Wars,” in which the security community fought the federal government over limits on access to strong encryption. In 2016, Lofgren was part of an encryption working group on the House Judiciary Committee. The group’s final report, bipartisan but not binding, found that any measure to undermine encryption “works against the national interest.”

Still, it’s a talking point the government continues to push. As recently as this year, U.S. Attorney General William Barr said that Americans should accept the security risks that encryption backdoors pose.

“You cannot eliminate encryption safely,” Lofgren told TechCrunch. “And if you do, you will create chaos in the country and for Americans, not to mention others around the world,” she said. “It’s just an unsafe thing to do, and we can’t permit it.”
https://techcrunch.com/2020/09/20/en...erous-lofgren/





Wave-Share

A proof-of-concept for WebRTC signaling using sound. Works with all devices that have a microphone and speakers.

Runs in the browser

Nearby devices negotiate the WebRTC connection by exchanging the necessary Session Description Protocol (SDP) data via a sequence of audio tones. Upon successful negotiation, a local WebRTC connection is established between the browsers, allowing data to be exchanged over the LAN.

How it works

WebRTC allows two browsers running on different devices to connect to each other and exchange data, with no need to install plugins or download applications. To initiate the connection, the peers exchange contact information (IP address, network ports, session ID, etc.). This process is called "signaling". The WebRTC specification does not define any standard for signaling; the contact exchange can be achieved by any protocol or technology.

In this project the signaling is performed via sound. The signaling sequence looks like this:

• Peer A broadcasts an offer for a WebRTC connection by encoding the session data into audio tones (a minimal encoding sketch follows this list)
• Nearby peer(s) capture the sound emitted by peer A and decode the WebRTC session data
• Peer B, who wants to establish a connection with peer A, responds with an audio answer that has peer B's contact information encoded in it; peer B also starts trying to connect to peer A
• Peer A receives the answer from peer B, decodes the transmitted contact data and allows peer B to connect
• The connection is established
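
To make the tone-encoding step concrete, here is a minimal sketch of one way to emit bytes as audio tones with the browser's Web Audio API. This is an illustration only, not wave-share's actual modem: it maps each 4-bit nibble to one of 16 frequencies and omits the receiving side entirely.

    // A minimal, hypothetical tone encoder: one oscillator frequency per nibble.
    // Note: browsers only allow audio output after a user gesture (e.g. a click).
    const ctx = new AudioContext();

    function playTones(data: Uint8Array, baseHz = 2000, stepHz = 100, symbolSec = 0.08): void {
      let t = ctx.currentTime;
      for (const byte of data) {
        for (const nibble of [byte >> 4, byte & 0x0f]) {
          const osc = ctx.createOscillator();
          osc.frequency.value = baseHz + nibble * stepHz; // 16 distinct tones
          osc.connect(ctx.destination);
          osc.start(t);            // schedule symbols back to back
          osc.stop(t + symbolSec);
          t += symbolSec;
        }
      }
    }

    // Example: transmit a short string as sound.
    playTones(new TextEncoder().encode("hello"));

A real modem additionally needs framing, error detection and a microphone-side decoder (e.g. capturing audio via getUserMedia and analyzing it with an FFT), all of which this sketch omits.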

The described signaling sequence does not involve a signaling server, so an application that signals through sound can be served from a static web page. The only requirement is control over the audio output and capture devices.
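
For the signaling sequence itself, the following sketch shows roughly what serverless offer/answer exchange looks like in browser code. The sendViaAudio and receiveViaAudio helpers are hypothetical stand-ins for the tone modem above, not part of wave-share's actual API, and both peers are shown in one listing for brevity.

    // Hypothetical stand-ins for the data-over-sound transport:
    declare function sendViaAudio(text: string): void;
    declare function receiveViaAudio(): Promise<string>;

    // Peer A: no ICE servers are configured, so only LAN candidates are gathered.
    const pcA = new RTCPeerConnection();
    const channel = pcA.createDataChannel("wave-share");
    channel.onopen = () => channel.send("hello over the LAN");

    await pcA.setLocalDescription(await pcA.createOffer());
    // Wait for ICE gathering to finish so the SDP contains all candidates and
    // the complete session description fits in a single audio broadcast.
    await new Promise<void>((resolve) => {
      pcA.onicegatheringstatechange = () => {
        if (pcA.iceGatheringState === "complete") resolve();
      };
      if (pcA.iceGatheringState === "complete") resolve();
    });
    sendViaAudio(JSON.stringify(pcA.localDescription)); // the audible "offer"

    // Peer B (on another device): decode the offer and answer the same way.
    const pcB = new RTCPeerConnection();
    pcB.ondatachannel = (e) => { e.channel.onmessage = (m) => console.log(m.data); };
    await pcB.setRemoteDescription(JSON.parse(await receiveViaAudio()));
    await pcB.setLocalDescription(await pcB.createAnswer());
    // (a real peer B would also wait for ICE gathering here)
    sendViaAudio(JSON.stringify(pcB.localDescription)); // the audible "answer"

    // Peer A: apply B's answer; the browsers then connect directly over the LAN.
    await pcA.setRemoteDescription(JSON.parse(await receiveViaAudio()));

Because no STUN or TURN servers are involved, the candidates in each SDP are plain local addresses, which is why the peers must share a network.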

An obvious limitation (or feature) of the current approach is that only nearby devices (e.g. within the same room) can establish a connection with each other. Moreover, the devices have to be on the same local network, because NAT traversal is not available.
https://github.com/ggerganov/wave-share





Internet: Old TV Caused Village Broadband Outages for 18 Months
BBC

The mystery of why an entire village lost its broadband every morning at 7am was solved when engineers discovered an old television was to blame.

An unnamed householder in Aberhosan, Powys, was unaware that the old set emitted a signal that interfered with the entire village's broadband.

After 18 months, engineers launched an investigation when a cable replacement programme failed to fix the issue.

The embarrassed householder promised not to use the television again.

The village now has a stable broadband signal.

Openreach engineers were baffled by the recurring problem, and it wasn't until they used a monitoring device that they found the fault.

The householder would switch their TV set on at 7am every morning, and the electrical interference emitted by the second-hand television was disrupting the broadband signal.

The owner, who does not want to be identified, was "mortified" to find out their old TV was causing the problem, according to Openreach.

"They immediately agreed to switch it off and not use it again," said engineer Michael Jones.

Engineers walked around the village with a monitor called a spectrum analyser to try to find any "electrical noise" to help pinpoint the problem.

"At 7am, like clockwork, it happened," said Mr Jones.

"Our device picked up a large burst of electrical interference in the village.

"It turned out that at 7am every morning the occupant would switch on their old TV which would, in turn, knock out broadband for the entire village."

The TV was found to be emitting a single high-level impulse noise (SHINE), which causes electrical interference in other devices.

Mr Jones said the problem has not returned since the fault was identified.

What else can cause broadband problems?

Suzanne Rutherford, Openreach chief engineer's lead for Wales, said anything with electric components, from outdoor lights to microwaves, can potentially have an impact on broadband connections.

"We'd just advise the public to make sure that their electric appliances are properly certified and meet current British standards," she said.

"And if you have a fault, report it to your service provider in the first instance so that we can investigate."
https://www.bbc.com/news/uk-wales-54239180

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

September 19th, September 12th, September 5th, August 29th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black