P2P-Zone  

Old 04-12-19, 08:11 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Default Peer-To-Peer News - The Week In Review - December 7th, ’19

Since 2002

December 7th, 2019




Archivists Are Trying to Make Sure a ‘Pirate Bay of Science’ Never Goes Down

A new project aims to make LibGen, which hosts 33 terabytes of scientific papers and books, much more stable.
Matthew Gault

It’s hard to find free and open access to scientific material online. The latest studies and current research huddle behind paywalls unread by those who could benefit. But over the last few years, two sites—Library Genesis and Sci-Hub—have become high-profile, widely used resources for pirating scientific papers.

The problem is that these sites have had a lot of difficulty actually staying online. They have faced both legal challenges and logistical hosting problems that have knocked them offline for long periods of time. But a new project by data hoarders and freedom-of-information activists hopes to bring some stability to one of the two “Pirate Bays of Science.”

Library Genesis (LibGen) contains 33 terabytes of books, scientific papers, comics, and more in its scientific library. That’s a lot of data to host when countries and science publishers are constantly trying to get you shut down.

Last week, redditors launched a project to better seed, or host, LibGen's files.

“It's the largest free library in the world, servicing tens of thousands of scientists and medical professionals around the world who live in developing countries that can't afford to buy books and scientific journals. There's almost nothing else like this on Earth. They're using torrents to fulfill World Health Organization and U.N. charters. And it's not just one site index—it's a network of mirrored sites, where a new one pops up every time another gets taken down,” user shrine said on Reddit. Shrine is helping to start the project.

Two seedbox companies (services that provide high-bandwidth remote servers for uploading and downloading data), Seedbox.io and UltraSeedbox, stepped in to support the project. A week later, with their help, LibGen is seeding 10 terabytes and 900,000 scientific books.

LibGen also teamed up with another massive online archiving project, The-Eye, to facilitate the tracking, storage, and seeding of LibGen’s scientific archive. The-Eye is run by a user named -Archivist, who has previously tried to archive a petabyte of porn and the entirety of Instagram, and who archived 80 gigabytes of Apple videos deleted by YouTube in addition to the terabytes of data already stored on The-Eye, which include conspiracy theory documents, old software, video game ROMs, books, and a lot more.

“We're not only trying to get the Library Genesis main collection torrents healthier, but also trying to get the complete collection so that The-Eye can properly back it up AND distribute it out in all its glory,” shrine said on Reddit. “There is currently no one doing that, so I think it's a big step towards keeping the collection safe as well as making it available to more developers who want to do something with the collection.”

Library Genesis is powered by Sci-Hub, an embattled website that provides users free access to scientific papers. Created in 2011 by hacker and scientific researcher Alexandra Elbakyan, Sci-Hub scrapes data from behind the paywalls of the world’s scientific journals and posts them for free online. Governments and private companies have attempted repeatedly to shut down Sci-Hub and sue Elbakyan, but the site remains.
https://www.vice.com/en_us/article/p...ever-goes-down





How BISS-CA is Allowing Broadcasters to Fight Back Against Piracy
Julien Mandel

According to research, there were 190 billion visits to pirate sites globally in 2018, with almost half relating to TV piracy and 60% of all visits going to unlicensed streaming sites. For broadcasters, these figures are unwelcome, particularly as many are ill-prepared to deal with piracy. In 2016, the global cost of online piracy of movies and TV shows was put at $31.8 billion; by 2022 it is expected to rise dramatically to $52 billion. Piracy is big business, particularly around sports, and often involves large organisations illegally capturing streams and redistributing them over the Internet on a global scale. Because piracy poses such a threat to their businesses, it is vital that broadcasters find a solution, and BISS-CA could be just what they need to fight back.

What is BISS-CA?

Developed by the European Broadcasting Union in collaboration with network equipment vendors ATEME and Nevion, BISS-CA is a protocol that enables real-time entitlement management for content streams over any network. The BISS-CA mode is based on symmetric AES and asymmetric RSA cryptography and carries all entitlement credentials in-stream. The protocol allows media rights holders to grant and revoke points of reception dynamically in real-time to safeguard their content, and can be used in conjunction with additional safety measures such as watermarking. BISS-CA has drastically increased the level of content protection available to broadcasters. The standard has three key advantages over a private solution: it is interoperable, secure and simple to operate.

How can broadcasters use BISS-CA to combat piracy?

As an open, royalty-free standard, BISS-CA can be used on any production equipment, from decoders and encoders to multiplexers and transcoders. As the encryption and entitlement keys are transmitted in-band, it can also be used anywhere and doesn’t require an Internet connection, meaning that producers can use BISS-CA for live broadcasts from OB vans and manage rights from the source.

BISS-CA combats piracy in two ways:

Firstly, BISS-CA makes the stream harder to pirate in the first place, as the protocol protects content with a 128-bit encryption key. The BISS-CA scrambler rotates that key as often as every 10 seconds, which is not enough time for a pirate to recover the new key and enter it into their system. With modern computers and complex algorithms it is possible to brute-force a fixed key, but that takes hours; because the key changes dynamically every 10 seconds, fully decrypting the video this way would take years.

Secondly, as the protocol enables the cryptographic entitlements to be transported together with the live video content, the rights holder or broadcaster is able to grant and revoke usage rights in real-time. Broadcasters can publish a list of public keys before or during the event and can even seamlessly change them and who can use them at any point during events. This means they can revoke access to pirated streams and ensure only those who should have access to the live stream do.
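The two mechanisms above (a fast-rotating session key, delivered in-stream wrapped for each entitled receiver, so that revoking a receiver simply means no longer wrapping the key for them) can be sketched as follows. This is a toy model, not the EBU protocol: a hash-derived keystream stands in for AES key wrapping, and a shared per-receiver secret stands in for each receiver's RSA key pair.

```python
import hashlib
import secrets


class Scrambler:
    """Toy model of BISS-CA-style entitlement management (illustrative only)."""

    def __init__(self):
        self.entitled = {}  # receiver_id -> per-receiver wrap secret
        self.session_key = secrets.token_bytes(16)  # 128-bit session key

    def grant(self, receiver_id, wrap_secret):
        self.entitled[receiver_id] = wrap_secret

    def revoke(self, receiver_id):
        # A revoked receiver gets no entitlement at the next rotation.
        self.entitled.pop(receiver_id, None)

    def rotate(self):
        """New session key (every 10 s in BISS-CA); return the in-band
        entitlement messages, i.e. the key wrapped per entitled receiver."""
        self.session_key = secrets.token_bytes(16)
        return {rid: self._wrap(self.session_key, ws)
                for rid, ws in self.entitled.items()}

    @staticmethod
    def _wrap(key, wrap_secret):
        pad = hashlib.sha256(wrap_secret).digest()[:len(key)]
        return bytes(a ^ b for a, b in zip(key, pad))

    @staticmethod
    def unwrap(wrapped, wrap_secret):
        # XOR wrapping is symmetric: unwrapping reuses the same pad.
        pad = hashlib.sha256(wrap_secret).digest()[:len(wrapped)]
        return bytes(a ^ b for a, b in zip(wrapped, pad))


tx = Scrambler()
alice_secret = secrets.token_bytes(32)
bob_secret = secrets.token_bytes(32)
tx.grant("alice", alice_secret)
tx.grant("bob", bob_secret)
epoch1 = tx.rotate()   # both receivers get the wrapped session key
tx.revoke("bob")
epoch2 = tx.rotate()   # bob no longer receives an entitlement
```

The revoked decoder still sees the scrambled stream, but once the key rotates it can no longer recover the new session key, which is the real-time revocation the protocol promises.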

Additionally, if BISS-CA is used alongside forensic watermarking solutions, broadcasters are able to trace where a pirated stream is coming from. A mark detailing, for instance, the serial number of the decoder is embedded in the stream where pirates can’t see it, so the provider can identify the person or organisation responsible for the leak and revoke that decoder’s rights. While it is possible for providers to cut the stream, more often than not they choose to negotiate with the organisation that caused the leak in order to ensure it doesn’t happen again.

BISS-CA and accompanying watermarking tools are particularly useful for broadcasters fighting back against the piracy of live sports streams, as more than a third of football fans in the UK admit to regularly watching matches live via unofficial streams. By allowing broadcasters to protect content streams in real-time and enabling them to watermark their content, BISS-CA is instrumental for any broadcasters or rights holders trying to find a way to safeguard their content and ensure it is not pirated.

What could the future hold?

As the value of BISS-CA becomes more apparent and big-name broadcasters and sporting bodies adopt it as standard, we will see its popularity increase across the board. While BISS-CA is currently predominantly used for the broadcast of live sports or events, thanks to its security and simplicity it could also become a valuable tool for news outlets and help to stem the tide of fake news, for example. Ultimately, BISS-CA is finally giving broadcasters back control of their streams in a bid to protect their crucial content.
https://www.v-net.tv/2019/12/05/how-...gainst-piracy/





Revolutionary Streaming Distribution Technology is Pitched as a CDN Replacement
John Moulding

Eluvio has unveiled what it claims is a game-changer for streaming video: an entirely new way of distributing content over the Internet that effectively decomposes video streams or files during ingest and then reconstitutes them at an edge node when client devices request live/linear channels or on-demand programming. Between these two points, the content is distributed and stored in its most basic form, broken down into base elements like binary data (media), metadata and the code needed for composing the streams or files at the edge.

For every consumer that requests a video stream, the base elements needed to create it are drawn from the nodes that contain them. The stream is assembled from scratch, just-in-time, to provide chunked and packetized standard ABR video in the format and bit rate needed, with the appropriate DRM.

No changes are made to content (whether live or file-based) before it is ingested into what is called the ‘Eluvio Content Fabric’. And no changes are needed on client devices, since the outputs from the edge nodes are normal streaming video (e.g. DASH and HLS).

Berkeley, California based Eluvio contrasts this to the way CDNs work, where complete video files and streams are held in origin servers and moved across the Internet and stored/cached at the edge. The company claims that its new architecture eliminates the file duplication that is typical of the CDN model and so saves on storage and bandwidth, resulting in lower costs.
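The deduplication claim is easiest to picture with content addressing: if every base element is stored under the hash of its bytes, parts shared by many renditions exist exactly once, and a "stream" is just an ordered manifest of hashes assembled on demand. A minimal sketch of that idea (illustrative only; not Eluvio's actual implementation):

```python
import hashlib


class ContentStore:
    """Content-addressed store: each unique part is kept once, however
    many streams or files reference it."""

    def __init__(self):
        self.parts = {}  # sha256 hex digest -> bytes (the 'base elements')

    def put(self, data: bytes) -> str:
        h = hashlib.sha256(data).hexdigest()
        self.parts.setdefault(h, data)  # duplicate parts are not re-stored
        return h

    def publish(self, segments):
        # A published stream/file is just an ordered manifest of part hashes.
        return [self.put(s) for s in segments]

    def assemble(self, manifest):
        # Just-in-time reconstruction at the "edge" from the manifest.
        return b"".join(self.parts[h] for h in manifest)


store = ContentStore()
# Two renditions that share their media segments but differ in packaging:
hls = store.publish([b"video-seg-1", b"video-seg-2", b"hls-manifest"])
dash = store.publish([b"video-seg-1", b"video-seg-2", b"dash-manifest"])
# Six segments were published, but only four unique parts are stored.
```

The saving in a CDN-style setup would come from never shipping or caching the shared segments twice, while each output is still reconstructed byte-for-byte from its manifest.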

The Eluvio Content Fabric can be used to publish any content type and can be harnessed by the production community as well as those distributing to consumers. Some publishers are interested in serving full-length masters, for example. This solution is for much more than adaptive bit rate streaming to multiscreen devices.

But for media owners that need to serve content directly to consumers, this is, on paper, a potential successor to CDNs. Eluvio provides an end-to-end ‘as-a-service’ solution for content owners and aggregators. Thus, Eluvio is responsible for the media processing needed to decompose video streams into their base elements, and the processing needed to assemble streams for client devices – the latter being a more familiar process that includes transcoding, packaging and the addition of DRM.

Eluvio is responsible for moving the base media elements across the Internet, using its global software overlay network. The storage and compute capacity that is needed, spread across multiple nodes in a decentralised architecture, is all provided by the company, although it is possible for third-party resources to be integrated into the fabric.

Thus, an Eluvio customer could add a node, or even a third-party like a telco could add nodes and get compensated for this, with the work performed by that node measured and remunerated. This is not something that happens today, but it is possible going forwards, Michelle Munson, CEO and Co-Founder of Eluvio, revealed recently.

Eluvio literature confirms that “distributors can offset distribution costs by contributing bandwidth or compute resources.” The company already has enough infrastructure to pitch to the biggest media companies, with nodes on multiple continents.

The Eluvio Content Fabric, which is built on standard IP, is infinitely scalable, the company claims. It includes a data layer and an application layer, with the possibility for third-party vendors to provide media applications via APIs. Media applications could include transcoding, watermarking or ad insertion, as examples.

Munson says her company is radically streamlining the traditional media distribution workflow. “There is a profound simplification, and that leads to the efficiencies,” she says.

Core technology building blocks include:

• Content routing that is led by machine learning, resulting in the use of the highest bandwidth and lowest latency paths
• Programmable, just-in-time media delivery
• Trustless content protection
• Scalable smart contracts for multi-party transactions.

Any kind of live or file-based video can be ingested into the Eluvio Content Fabric. If the content is destined to be an ABR output from the edge (e.g. serving consumer multiscreen devices or connected TV devices) the fabric creates a high bit rate version of the source media during ingest – a mezzanine format. Sometimes the mezzanine could be the same bit rate as the original source content; sometimes it will be less, in which case it is transcoded downwards.

The publisher controls a configurable profile that will determine the mezzanine bit rate used plus other characteristics like the bit rate ladder served (from the edge), the aspect ratio, DRM, etc.
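As a sketch of what such a configurable profile might contain (all field names here are hypothetical illustrations, not Eluvio's actual schema), together with the rule from the previous paragraph that the mezzanine never exceeds the source bit rate:

```python
# Hypothetical publisher profile: controls mezzanine creation at ingest
# and the ABR outputs composed at the edge. Field names are illustrative.
profile = {
    "mezzanine": {
        "video_bitrate_kbps": 20000,  # target; capped at the source bit rate
        "codec": "h264",
    },
    "edge_outputs": {
        "bitrate_ladder_kbps": [6000, 3000, 1500, 800],
        "aspect_ratio": "16:9",
        "formats": ["hls", "dash"],
        "drm": ["widevine", "fairplay"],
    },
}


def mezzanine_bitrate(source_kbps: int, profile: dict) -> int:
    """The mezzanine matches the source when the source is at or below the
    profile target; otherwise the source is transcoded down to the target."""
    return min(source_kbps, profile["mezzanine"]["video_bitrate_kbps"])
```

So a 15 Mbps source would be kept at 15 Mbps, while a 50 Mbps source would be transcoded down to the 20 Mbps target before decomposition.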

The mezzanine is then decomposed into what are, in effect, a collection of base elements. These fundamental ‘elements’ (the binary media data, metadata and code as noted above, plus blockchain ledger smart contracts) are all protected using zero trust encryption.

The key point about the base elements is that they are flexible and reusable, in the sense that the same elements can be used to create multiple different stream or file outputs at the edge. This is how the bandwidth and storage savings are achieved. “A fundamental point about how the fabric works is that you never see the duplication of data that is inherent in file representations,” Munson declares.

The fabric removes the need to create additional copies of files. Eluvio confirms in its literature: “Live, linear, on-demand or hybrid channel combinations are served from the same source without pre-generating or distributing any files or versions across the network or storage facilities.”

When a client device requests content, consumable media is composed just-in-time from the fundamental elements, using a programmable software engine and the media applications inside the fabric. This means there is no need to archive multiple versions of the same programme in order to offer different language options, for example, where each of those language versions is ready for all popular platforms with the right DRM and available in multiple bit rate options.

“The fabric is radically different to a CDN because what we are sending over the network is not the final output but only the source parts, and we only send them once,” Munson explains.

Content is distributed through the Eluvio fabric in real-time with what is claimed to be broadcast-standard ultra-low latency. What little delay there is comes from standard Internet requirements; Munson says the process of decomposing and recomposing streams adds nothing to latency.

Within the Eluvio architecture, every node is an edge node capable of serving video directly to a client device. This is where the base elements are gathered and built into the typical video stream that plays out from edges. Transcoding, packaging and encryption are therefore performed on the (edge) node for the unique new streams/files that are being created.

When a consumer hits ‘play’ on their device, the client routes the request to a URL which then directs the client to the node that is best equipped to serve the video. The ‘best node’ is chosen purely on the basis of performance, not whether it has any of the base elements stored there already. Thus, the client request goes to the node offering the best bandwidth and lowest latency.

The first/best node then performs a content look-up. If it has all the base elements needed, it can start assembling the video stream. If not, it fetches the elements it needs, most likely from one other node (as the fabric uses a decentralised caching architecture), although they could be gathered up from multiple nodes. In theory, any element could be found on any node in the network. The system finds and retrieves the elements it needs instantly. All of this is a just-in-time operation.
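The routing-then-lookup flow just described can be sketched in two steps: pick the edge purely on performance, then pull any base elements it lacks from peer nodes. The node attributes and the selection criterion below are illustrative assumptions, not Eluvio's published algorithm.

```python
class Node:
    """A fabric node: performance characteristics plus a local element cache."""

    def __init__(self, name, bandwidth_mbps, latency_ms, elements):
        self.name = name
        self.bandwidth_mbps = bandwidth_mbps
        self.latency_ms = latency_ms
        self.elements = set(elements)  # base elements cached locally


def pick_edge(nodes):
    # Route purely on performance (best bandwidth, lowest latency),
    # not on which elements the node already caches.
    return max(nodes, key=lambda n: (n.bandwidth_mbps, -n.latency_ms))


def serve(request_elements, nodes):
    """Content look-up at the chosen edge: fetch missing elements from
    peers just-in-time, then the edge can assemble the stream."""
    edge = pick_edge(nodes)
    missing = set(request_elements) - edge.elements
    for peer in nodes:
        if peer is edge or not missing:
            continue
        found = missing & peer.elements
        edge.elements |= found  # fetched elements are now cached at the edge
        missing -= found
    return edge, missing  # missing is empty if the fabric holds everything


nodes = [
    Node("east", 100, 20, {"e1"}),
    Node("west", 1000, 5, {"e2"}),
    Node("south", 500, 10, {"e1", "e3"}),
]
edge, missing = serve(["e1", "e2", "e3"], nodes)
```

Here "west" wins on performance despite holding only one of the three requested elements, and gathers the other two from its peers before assembly, matching the decentralised-caching behaviour described above.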

Once all the base elements are available, the media applications can be performed, like transcoding, packaging and DRM wrapping. The video and manifest files are then played out.

The core principle behind the efficiencies of the Eluvio approach is that much of what you find in different versions of streamed video, aimed at different platforms, for example, is common.

Metadata is stored with the base elements – which Eluvio also refers to as content objects. This metadata includes time-coded tags and it is fully reusable by any tool or process that interacts with the Eluvio Content Fabric. Eluvio explains: “Metadata and runnable code stored with the video/asset are read and bound to the output content on demand, at the time of the request.

“This creates the ultimate flexibility for programming and availability windows, which can be updated without remaking or redistributing any new versions.”

Eluvio has made it clear that its managed service solution is not just about increasing the efficiency of distribution. The company believes this new approach enables more personalisation of media and will also help to monetise it, and this is where the use of blockchain comes to the fore. For example, every media asset (even when in base element form) has a built-in blockchain contract that controls access to the content, subject to user rights.

This forms the basis for content transactions. A blockchain ledger records the life of the content, from version history to usage rights, authorisations and even audience reporting. This ledger is provable and tamper-free, Eluvio says.

Eluvio has revealed that it is collaborating with various content providers to refine the features of the platform, including proof-of-concepts involving live content. An IBC demonstration showed a live feed from a tier-one broadcaster via the Eluvio Content Fabric.

MGM Studios is one of the users. The content owner is using the Eluvio Content Fabric for streaming to multiscreen devices, including via TV Everywhere services, where Eluvio has replaced typical CDN services. MGM is making use of transcoding, multi-format encryption and DRM plus access control and audience reporting. Jim Crosby, SVP Digital at MGM Studios, said in September: “The deployment of the Eluvio Content Fabric started as an experiment to test this promising new technology, and it has exceeded expectations.

“It has delivered ultra-fast video loading, high quality playback, and a cost-efficient solution eliminating separate aggregation, transcoding and CDN services. With its blockchain, it gives us the ability to transact business directly on the content. We look forward to more to come with Eluvio.”

The Eluvio Content Fabric has taken two years to develop. Michelle Munson says the bandwidth and storage savings, with associated cost reductions, are not the main benefit. The biggest win for customers is the simplification of operations. She compares her company’s technology to a smartphone that has taken on multiple functions that were once performed by different devices, including feature phones, cameras and music players.

Eluvio views its approach to content distribution as transformational. Now we must wait to see if this is, indeed, the CDN replacement the company believes it is.
https://www.v-net.tv/2019/11/19/revo...n-replacement/





Cord-Cutting Pushed to ‘Tipping Point’ as Video Streaming Grows
Ryan Vlastelica

• Analyst says 40% of cable subscribers ‘at risk’ of cutting
• Streaming stocks like Roku were main beneficiaries in 2019

The media ecosystem is undergoing a massive change as streaming video looks to extend its recent dominance over traditional distribution, according to research firm MoffettNathanson, which wrote that a large minority of cable consumers could cut their subscriptions in coming years.

“The video market is in full disruption and this year could be the cord cutting tipping point,” analyst Michael Nathanson wrote to clients. “Media companies will need to master a whole new suite of skill sets to win going forward,” with content creation, user interfaces and “churn mitigation strategies” among the factors that could determine the next generation of winners in the market.

Consumers have been abandoning traditional media bundles for years, instead looking to services like Netflix Inc. or Walt Disney Co.’s recently launched Disney+ service, which has signed up more than 10 million subscribers since launching in November. Streaming services have made inroads into a number of major categories of video entertainment, including TV shows and movies.

In a measure of how big streaming has become, Wells Fargo Securities wrote that between November 17-23, “The Mandalorian,” a series from Disney+ set in the “Star Wars” universe, was the “most in-demand show in OTT and overall on a linear+OTT basis.” OTT stands for “over the top” content, which bypasses cable boxes. Linear TV airs at set times, as opposed to being on-demand, as with streaming. The firm cited data from Parrot Analytics in its report, which was dated Nov. 29.

Geetha Ranganathan, an analyst at Bloomberg Intelligence, on Monday said that streaming was “absolutely” contributing to a weak overall U.S. box office in 2019. “This becomes a bigger problem next year especially with a weaker slate (absence of big franchises from Disney) and the streaming wars going into high gear,” she said in an interview.

Live entertainment, especially sports, has proved to be something of an exception to this trend.

MoffettNathanson called sports viewers “the most entrenched” among those who continue to pay for traditional TV subscriptions. Citing work with analytics firm Altman Vilandrie & Co., he estimated that regular sports viewers made up 60% of current TV subscribers. They make up “the potential floor for the pay TV ecosystem, as long as the major sports leagues’ rights remain exclusive to the pay TV bundle.” He added, “that leaves 40% of today’s Pay TV universe at risk” over the next five years.

As part of its call, MoffettNathanson reiterated its sell ratings on both AT&T Inc. and Dish Network, while recommending investors buy Disney shares.

Among the biggest beneficiaries of the trend in 2019 has been Roku Inc., which acts as a service-agnostic platform for streaming content. The stock has more than quadrupled thus far this year, rising amid the “exuberance over all things streaming,” according to Morgan Stanley. The firm downgraded Roku on Monday, writing that while it was bullish on the company’s growth prospects, the risk profile looked “skewed to the downside” after the 2019 surge.
https://www.bloomberg.com/news/artic...treaming-grows





The One-Traffic-Light Town with Some of the Fastest Internet in the U.S.

Connecting rural America to broadband is a popular talking point on the campaign trail. In one Kentucky community, it’s already a way of life.
Sue Halpern

Before Shani Hays began providing tech support for Apple from her home, in McKee, Kentucky, she worked at a prison as a corrections officer assigned to male sex offenders, making nine dollars an hour. After less than a year, she switched to working nights on an assembly line at a car-parts factory, where she felt safer. More recently, Hays, who is fifty-four, was an aide at a nursing home, putting in a full workweek in a single weekend and driving eighty-five miles to get there. Then her son-in-law, who was married to Hays’s oldest daughter, got addicted to crystal meth and became physically abusive. Hays’s daughter started using, too. The son-in-law went to jail. Their kids were placed in foster care. Then Hays’s stepmother got cancer. “There was a lot going on,” Hays told me. “I was just trying to keep it all together.” She began working from home last summer, which has allowed her to gain custody of her three grandchildren. (Her daughter has since completed treatment for her addiction.) During Hays’s half-hour lunch break, she makes supper. “I wouldn’t be able to do this without the Internet we have here,” she said.

McKee, an Appalachian town of about twelve hundred tucked into the Pigeon Roost Creek valley, is the seat of Jackson County, one of the poorest counties in the country. There’s a sit-down restaurant, Opal’s, that serves the weekday breakfast-and-lunch crowd, one traffic light, a library, a few health clinics, eight churches, a Dairy Queen, a pair of dollar stores, and some of the fastest Internet in the United States. Subscribers to Peoples Rural Telephone Cooperative (P.R.T.C.), which covers all of Jackson County and the adjacent Owsley County, can get speeds of up to one gigabit per second, and the coöperative is planning to upgrade the system to ten gigabits. (By contrast, where I live, in the mountains above Lake Champlain, we are lucky to get three megabytes.) For nearly fifteen million Americans living in sparsely populated communities, there is no broadband Internet service at all. “The cost of infrastructure simply doesn’t change,” Shirley Bloomfield, the C.E.O. of the Rural Broadband Association, told me. “It’s no different in a rural area than in Washington, D.C. But we’ve got thousands of people in a square mile to spread the cost among. You just don’t in rural areas.”

Keith Gabbard, the C.E.O. of P.R.T.C., had the audacious idea of wiring every home and business in Jackson and Owsley Counties with high-speed fibre-optic cable. Gabbard, who is in his sixties, is deceptively easygoing, with a honeyed drawl and a geographically misplaced affection for the Pittsburgh Pirates. He grew up in McKee and attended Eastern Kentucky University, thirty-five miles down Route 421; he lives with his wife, a retired social worker, in a house next door to the one in which he grew up. “I’ve spent my whole life here,” he said. “I’m used to people leaving for college and never coming back. The ones who didn’t go to college stayed. But the best and the brightest have often left because they felt like they didn’t have a choice.”

When Gabbard returned to his home town after college, in 1976, he took an entry-level job at the telephone coöperative. “I had this degree in business management that I thought was really cool, but I got a job answering the phones,” he said. “At the time, we were all on party lines, and everybody was calling and complaining about somebody on their line and they couldn’t get the phone. I was taking those complaints. And I remember thinking that, once we got everyone their own lines, we won’t have any more problems. I didn’t have a clue what was coming.”

At the time, telephone service itself was relatively new in Gabbard’s corner of eastern Kentucky. The area was served by an electric co-op, created in the nineteen-thirties to take advantage of the Rural Electrification Act, New Deal legislation that brought electricity to the most isolated parts of the country. But no commercial telephone company wanted to spend the money to plant the poles and string the wires to connect Jackson and Owsley Counties to the rest of the world. When the R.E.A. was amended, in 1949, to enable co-ops to take advantage of low-interest loans to build and operate telephone services, a group of local businesspeople went door to door gauging interest and asking residents to demonstrate their commitment by paying a modest membership fee. With a loan from the federal government, they built a telephone company, as Gabbard describes it, “from scratch.” In 1953, Peoples Rural Telephone Cooperative began providing party lines to five hundred and seventy-five subscribers. There are now around seven thousand active members.

After a few years fielding customer complaints at P.R.T.C., Gabbard became a dispatcher, sending out repair crews and scheduling installations. He dabbled a bit in engineering, spent a few years assisting the C.E.O., and, in 1996, replaced him. As chief executive, Gabbard moved the company into the cable-television business, added dial-up Internet, and partnered with four regional telecommunications companies to create Appalachian Wireless, a cell service that now covers twenty-seven Kentucky counties. These upgrades, however, did little to improve the local economy. In 2005, a fire at a manufacturing plant in McKee put seven hundred people out of work overnight. “Our economy fell off a cliff that day,” Jackson County’s chief elected officer, Shane Gabbard, who is no relation to Keith Gabbard, told me when we met in his office in the county courthouse, a redoubt of taxidermy and crucifixes. “The car lot next door to the factory went out of business. The gas station went out. Every business in town was affected.”

By 2009, unemployment in Jackson County was more than sixteen per cent. (In Owsley County, which sits at the edge of coal country, it was about twelve per cent.) Few places in the country were as down-and-out—and even fewer had fibre-optic service to the home. But, as Gabbard and his crew saw it, when it came time to upgrade infrastructure in parts of both counties, it made no sense to replace old copper wiring with new copper wires, which don’t have the capacity for broadband. “It’s no more difficult to build fibre than it is copper,” he said. “It was just a matter of money and time.” With twenty million dollars borrowed from the U.S. Department of Agriculture, and twenty-five million dollars in Obama-era stimulus—some of it a grant and some of it a loan—P.R.T.C. pulled a thousand miles of cable, to all seven thousand structures in the county. In the most rugged terrain around McKee, the crews relied on a mule named Old Bub to haul the cable two or three miles a day. “We’ve got mountains and rocks and not the greatest roads, and there were places we couldn’t get a vehicle to,” Gabbard told me. “Farmers here have been using mules for centuries. It just made sense that, if a place was hard to get to, you went with the mules.” Old Bub, he said, was able to do the work of eight to ten men.

The effort took six years, at a cost of fifty thousand dollars per mile. “Someone has to build to the last mile,” he said. “The big telecom companies aren’t going to do it, because it’s not economical and they have shareholders to answer to. We’re a co-op. We’re owned by our members. We answer to each other.” The grants they got, he said, were a matter of good timing and good luck. P.R.T.C. failed the first time it applied for stimulus money but got it on the second round, and with better terms than it had asked for originally. “One of the things we pitched was how impoverished our region was, how high our unemployment was, and how much this would help us,” Gabbard said. Even so, P.R.T.C. was initially five million dollars short of what it cost to wire the last, most remote residences with fibre-optic broadband; profits from Appalachian Wireless supplied the remaining capital that it needed to finish the job. “Our board and staff, we really wanted to do it all,” Gabbard said. “We wanted everyone to have the same thing.”

Once Jackson and Owsley Counties were wired, Gabbard was approached by the Eastern Kentucky Concentrated Employment Program (EKCEP), to see if they could use P.R.T.C.’s broadband to bring Internet-based jobs to the region. In 2015, Teleworks U.S.A., a job-training nonprofit, opened a branch in Jackson County. It is a collaboration between EKCEP, the phone coöperative, and a number of other civic groups. P.R.T.C. supplies the hub with Internet connectivity and gives three months of free service to anyone who completes a workshop there. In nearly five years, it has created more than six hundred work-at-home jobs in the county. Participants learn enough basic computer skills to get placed at companies such as Hilton Hotels, Cabela’s, U-Haul, Harry & David, and Apple.

Shani Hays, who knew nothing about computers six months ago, is now fielding calls about iPads, AirPods, iPhones, and Apple Watches. “The training was really extensive and really, really hard,” she said. “There was all this technical stuff I knew nothing about, but I just kinda nickel-and-dimed my way through.” Hays has received two raises so far, and now earns more than fourteen dollars an hour. She will soon be eligible for health insurance, paid vacation time, and other benefits. Working at home saves her money, too. When we talked, she had a hard time remembering the last time she had to put gas in her car. “And there’s none of that stopping to get gas and driving away with a coffee and a candy bar and there goes another ten dollars,” she said.

The Teleworks office is in a small industrial park about ten miles south of McKee, in a one-story brick building that sits on a rise looking out on the Daniel Boone National Forest. Inside is a warren of cubicles where people who can’t work from home sit with headsets on, talking and typing, and a conference room where job fairs and workshops are held. On the morning I visited, I spoke with Betty Hays, the operations manager, who has been with the program from the beginning. “The first workshop we had, five years ago, was supposed to be straight-up customer service, like, how to deal with people on the phone,” she recalled. “But I tossed in a little computer tech, because I realized people didn’t know how to do simple things like open tabs or copy and paste.”

There were fifteen people in the class, all of them women whom Betty Hays had worked with at BAE Systems, a defense contractor, sewing military backpacks. In 2014, the company shut its factory in McKee, taking two hundred jobs with it. By the time the workshop ended, all fifteen had been offered jobs paying more than ten dollars an hour, plus benefits. (The minimum wage in Kentucky is $7.25.) Once the placement agencies understood how reliable and fast the Internet was in Jackson, and that there was an untapped workforce, they started offering more jobs. A call center moved in. A factory where helicopter rotors are fabricated was expanded. Hays began taking advantage of the county’s fast, lag-free Internet herself. Between five and eight every morning, before she heads to Teleworks, she talks with schoolchildren in China who are trying to improve their English. The conversations each last twenty minutes and Hays is paid twenty-five dollars an hour. “We joke that there are going to be all these kids in China with Southern accents,” she said.

P.R.T.C. has also partnered with the Department of Veterans Affairs to create a telemedicine office and private lounge inside the county library, where veterans can talk discreetly to mental-health providers and hang out with one another. (The space doubles as a G.E.D. testing center on Mondays, when the V.A. does not schedule appointments. The librarian proctors the exam.) P.R.T.C. not only paid to outfit the room with comfortable furniture; it provides Internet to the entire library. Because so many people sit in their cars after hours and log onto the library’s Wi-Fi, the library now beams it out to the parking lot, too. Shane Gabbard, the Jackson County executive, told me that more people were moving into the county than away from it. “Land is cheap here, taxes are low, and we have more jobs than we can fill,” he said. Unemployment in Jackson County is now under five and a half per cent.

“Rural broadband seemed wonkish to people for a long time, but they’re starting to see it in kitchen-table terms,” the F.C.C. commissioner Jessica Rosenworcel told me. “It doesn’t matter if you’re from red-state America or blue-state America—you’re going to want your kids to be able to do their homework and to succeed in the digital economy.” What this has meant, in real terms, is that the F.C.C. and a number of other federal agencies, most notably the U.S.D.A., now consider broadband to be infrastructure, just as roads and bridges were in the twentieth century. “We used to have to beat our way through policy doors to talk to people about our issues,” Bloomfield, of the Rural Broadband Association, told me. “Suddenly people are focussing on this in a bipartisan fashion.”

Candidates, too, have latched onto rural broadband, seeing it, perhaps, as a way to woo voters in the hinterlands. But it goes beyond the transactional business of electoral politics. The widening rural-urban digital divide is leaving behind whole swaths of the country, exacerbating educational and economic inequalities and thwarting innovations in agriculture. Elizabeth Warren, Joe Biden, Pete Buttigieg, and Tom Steyer have each offered plans to bridge the gap. Amy Klobuchar has been writing legislation to expand rural Internet services for years.

Meanwhile, in April, the Trump Administration, led by the F.C.C.’s chair, Ajit Pai, announced its own broadband initiative, the Rural Digital Opportunity Fund, which, as critics have pointed out, is essentially a renaming and repurposing of an Obama-era program called the Connect America Fund. That program uses a portion of the Universal Service Fund, a pool of money collected from customers by their service providers and passed along to the F.C.C., to subsidize, among other things, phone and broadband service in places where it is not otherwise economical. Some companies receive more money back from the U.S.F. than they contribute. Others pay in more than they receive. P.R.T.C., for example, gets a U.S.F. subsidy every month that enables the coöperative to avoid passing along the real—and prohibitive—cost of service to its members, which Gabbard estimates to be two or three times what P.R.T.C. actually charges.

The big telecom companies also receive U.S.F. money, often taking advantage of a loophole in the law that lets them claim to be operating in an underserved area as long as they are providing service to a single customer in a rural census block. These “false positives,” Bloomfield told members of the House of Representatives in September, too often result in areas without service appearing on maps as if they were covered. (As a case in point, many of the residents of Lee County, Kentucky, which is adjacent to Jackson and Owsley Counties, while “served” by A.T. & T., are still only offered dial-up Internet.) The solution, Bloomfield told me, is better mapping. “It’s the No. 1 thing,” she said. “We really need to get carriers to really be honest about what areas they’re serving, what they’re not serving, and what the speeds are.” Better maps will enable U.S.F. money to be distributed more equitably, freeing up funds for coöperatives, municipalities, and smaller, regional companies to build the necessary infrastructure to deliver broadband to otherwise overlooked communities.

Fibre-optic wiring looped beneath a street light in Jackson County.

Owsley County, even more than Jackson County, might seem the least likely community in the country to be wired with fibre-optic cable. In 2016, Al Jazeera found it to be the “poorest white county” in the United States. Even now the median household income is about twenty-three thousand dollars a year, and a third of Owsley residents live at or below the poverty line. The county has been hit hard by the Appalachian trifecta of opioid addiction, the collapse of the coal industry, and the decline of tobacco farming. Tim Bobrowski, the county’s school superintendent, estimated that thirty-five per cent of his students were being raised by their grandparents or someone else because their parents were in jail, addicted, or dead. It was hard, he said, to get adults to care much about education. “It’s not different here than in urban areas: where there’s poverty, there’s apathy. Where there’s apathy, there’s poverty.”

Bobrowski, the son of a Methodist minister, grew up in Booneville, a town of about a hundred and the county seat. He returned after college to teach science and social studies before becoming principal and then superintendent. A few years ago, the school district gave every student, starting in the third grade, a Chromebook computer in lieu of textbooks. “Sometimes, kids will open their computers in class and roaches will crawl out,” Bobrowski said, putting a fine point on the hardships faced by his students. But he’s clear that Internet access has helped close the homework gap and exposed young people to resources outside of their community. Last year, for the fifth year in a row, a student was able to earn an associate’s degree while enrolled at the local high school. (Only about a fifth of Owsley adults have an associate’s degree or more.)

The Owsley County school district has been able to take advantage of the Internet in other ways, too. It has established a telemedicine connection with an area clinic that gives students and staff access to on-call pediatricians and mental-health practitioners. And when the weather is bad, or there is a flu outbreak, teachers are able to stream their classes to their students at home. It’s called a nontraditional-instruction day, and it has allowed the school district to collect needed state funds even when the schools themselves are closed. Bobrowski is now looking into the possibility of capitalizing on broadband to create remote internships for his students. “I want this technology to give them a sense of hope,” he said.

A few years ago, a Teleworks hub opened in a former strip mall in Booneville. So far, it has created three hundred jobs. “Three hundred people employed in a small county like this makes a big difference,” Carla Gabbard (no relation to either Keith or Shane) told me. (Owsley County has around forty-five hundred residents.) She mentioned a woman who had been making five dollars an hour at a gas station but who is now making eighteen dollars an hour plus health insurance, and another who had a drug-related felony conviction and couldn’t get a job until Teleworks found a company that didn’t require background checks. “Three years later, she’s still working and still off drugs,” Gabbard said. And, although three hundred jobs and high-speed Internet can’t undo decades of poverty, they have lowered the unemployment rate by four percentage points since the Teleworks hub opened, three years ago.

“I don’t think having broadband is necessarily going to make a five-hundred-job factory move in to Owsley, but it certainly can make people’s lives better and keep them from having to drive a hundred miles a day, back and forth, to work,” Keith Gabbard said. “You can’t make everybody magically go from making twenty-five thousand dollars a year to seventy-five thousand. Broadband is not going to create higher-paying jobs for everyone in the county. But it can help education. It can help entertainment. It can help the economy. It can help health care. And I even think that people’s mind-set—how they feel about themselves—can be improved just by not always saying ‘We don’t have nothing here.’ In this case, we have something to be proud of. We have something everyone else wants.”
https://www.newyorker.com/tech/annal...rnet-in-the-us





Building a More Honest Internet

What would social media look like if it served the public interest?
Ethan Zuckerman

Over the course of a few short years, a technological revolution shook the world. New businesses rose and fell, fortunes were made and lost, the practice of reporting the news was reinvented, and the relationship between leaders and the public was thoroughly transformed, for better and for worse. The years were 1912 to 1927 and the technological revolution was radio.

Radio began as bursts of static used to convey dots and dashes of information over long distances. As early as 1900, sound was experimentally broadcast over the airwaves, but radio as we know it—through which anyone with an AM receiver can tune in to hear music and voices—wasn’t practical until 1912, when teams around the world independently figured out how to use the triode vacuum tube as an amplifier. In the years that followed, three countries—the United States, the United Kingdom, and the Soviet Union—developed three distinct models for using the technology.

In the US, radio began as a free-market free-for-all. More than five hundred radio stations sprang up in less than a decade to explore the possibilities of the new medium. Some were owned by radio manufacturers, who created broadcasts so they could sell radio receivers; others were owned by newspapers, hotels, or other businesses, which saw radio as a way to promote their core product. But 40 percent were noncommercial: owned by churches, local governments, universities, and radio clubs. These stations explored the technical, civic, and proselytizing possibilities of radio. Then came 1926, and the launch of the National Broadcasting Company by the Radio Corporation of America, followed in 1927 by the Columbia Broadcasting System. These entities, each of which comprised a network of interlinked stations playing local and national content supported by local and national advertising, became dominant players. Noncommercial broadcasters were effectively squeezed out.

In the Soviet Union, meanwhile, ideology prevented the development of commercial broadcasting, and state-controlled radio quickly became widespread. Leaders of the new socialist republics recognized the power of broadcasting as a way to align the political thinking of a vast land populated primarily by illiterate farmers. Radio paralleled industrialization: in the 1920s, as workers moved to factories and collective farms, they were met with broadcasts from loudspeakers mounted to factory walls and tall poles in town squares. And once private radios became available, “wired radio”—a hardwired speaker offering a single channel of audio—connected virtually every building in the country.

The United Kingdom went a different route, eschewing the extremes of unfettered commercialism and central government control. In the UK’s model, a single public entity, the British Broadcasting Company, was licensed to broadcast content for the nation. In addition to its monopoly, the BBC had a built-in revenue stream. Each radio set sold in the UK required the purchase of an annual license, a share of which went to fund the BBC. The BBC’s first director-general, John Reith, was the son of a Calvinist minister and saw in his leadership a near-religious calling. The BBC’s mission, he thought, was to be the British citizen’s “guide, philosopher, and friend,” as Charlotte Higgins writes in This New Noise (2015), her book on the BBC. Under Reith, the BBC was the mouthpiece of an empire that claimed dominion over vast swaths of the world, and it was socially conservative and high-minded in ways that could be moralistic and boring. But it also invented public service media. In 1926, when a national strike shut down the UK’s newspapers, the BBC, anxious to be seen as independent, earned credibility by giving airtime to both government and opposition leaders. Over the subsequent decades, the BBC—rechristened the British Broadcasting Corporation in 1927—has built a massive international news-gathering and distribution operation, becoming one of the most reliable sources of information in the world.

Those models, and the ways they shaped the societies from which they emerged, offer a helpful road map as we consider another technological revolution: the rise of the commercial internet. Thirty years after the invention of the World Wide Web, it’s increasingly clear that there are significant flaws in the global model. Shoshana Zuboff, a scholar and activist, calls this model “surveillance capitalism”; it’s a system in which users’ online movements and actions are tracked and that information is sold to advertisers. The more time people spend online, the more money companies can make, so our attention is incessantly pulled to digital screens to be monitored and monetized. Facebook and other companies have pioneered sophisticated methods of data collection that allow ads to be precisely targeted to individual people’s consumer habits and preferences. And this model has had an unintended side effect: it has turned social-media networks into incredibly popular—some say addictive—sources of unregulated information that are easily weaponized. Bad-faith actors, from politically motivated individuals to for-profit propaganda mills to the Russian government, can easily harness social-media platforms to spread information that is dangerous and false. Disinformation is now widespread across every major social-media platform.

In response to the vulnerabilities and ill effects associated with large-scale social media, movements like Time Well Spent seek to realign tech industry executives and investors in support of what they call “humane tech.” Yes, technology should act in the service of humanity, not as an existential threat to it. But in the face of such a large problem, don’t we need something more creative, more ambitious? That is, something like radio? Radio was the first public service media, one that still thrives today. A new movement toward public service digital media may be what we need to counter the excesses and failures of today’s internet.

The dominant narrative for the growth of the World Wide Web, the graphical, user-friendly version of the internet created by Tim Berners-Lee in 1989, is that its success has been propelled by Silicon Valley venture capitalism at its most rapacious. The idea that currently prevails is that the internet is best built by venture-backed startups competing to offer services globally through category monopolies: Amazon for shopping, Google for search, Facebook for social media. These companies have generated enormous profits for their creators and early investors, but their “surveillance capitalism” business model has brought unanticipated harms. Our national discussions about whether YouTube is radicalizing viewers, whether Facebook is spreading disinformation, and whether Twitter is trivializing political dialogue need to also consider whether we’re using the right business model to build the contemporary internet.

As in radio, the current model of the internet is not the inevitable one. Globally, we’ve seen at least two other possibilities emerge. One is in China, where the unfettered capitalism of the US internet is blended with tight state oversight and control. The result is utterly unlike sterile Soviet radio—conversations on WeChat or Weibo are political, lively, and passionate—but those platforms have state-backed censorship and surveillance baked in. (Russia’s internet is a state-controlled capitalist system as well; platforms like LiveJournal and VKontakte are now owned by Putin-aligned oligarchs.)

The second alternative model is public service media. Wikipedia, the remarkable participatory encyclopedia, is one of the ten most-visited websites in the world. Wikipedia’s parent organization, the Wikimedia Foundation, had an annual budget of about $80 million in 2018, but it spent just a quarter of 1 percent of what Facebook spent that year. Virtually all of Wikimedia’s money comes from donations, the bulk of it in millions of small contributions rather than large grants. Additionally, Wikimedia’s model is made possible by millions of hours of donated labor provided by contributors, editors, and administrators.

Wikipedia’s success has been difficult to extend beyond encyclopedias, though. Wikinews, an editable, contributor-driven daily newspaper, often finds itself competing with its far larger sibling; breaking news is often reported in Wikipedia articles even before it enters Wikinews’s newsroom. Wikibooks, which creates open-source textbooks, and Wikidata, which hosts open databases, have had more success, but they don’t dominate a category the way Wikipedia does. Of the world’s top hundred websites, Wikipedia is the sole noncommercial site. If the contemporary internet is a city, Wikipedia is the lone public park; all the rest of our public spaces are shopping malls—open to the general public, but subject to the rules and logic of commerce.

For many years, teachers warned their students not to cite Wikipedia—the information found there didn’t come from institutional authorities, but could be written by anyone. In other words, it might be misinformation. But something odd has happened in the past decade: Wikipedia’s method of debating its way to consensus, allowing those with different perspectives to add and delete each other’s text until a “neutral point of view” is achieved, has proved surprisingly durable. In 2018, when YouTube sought unbiased information about conspiracy theories to provide context for controversial videos, it added text sourced from Wikipedia articles. In the past decade, we’ve moved from Wikipedia being the butt of online jokes about unreliability to Wikipedia being one of the best definitions we currently have of consensus reality.

While it’s true that public service media like Wikipedia have had to share the landscape with increasingly sophisticated commercial companies, it’s also true that they fill a void in the marketplace. In 1961, Newton Minow, who had just been appointed chairman of the Federal Communications Commission, challenged the National Association of Broadcasters to watch a full day’s worth of their insipid programming. “I can assure you that what you will observe is a vast wasteland,” he declared. Rather than simply limiting entertainment, Minow and his successors focused on filling the holes in educational, news, and civic programming—those areas left underserved by the market—and by the early 1970s, public service television and radio broadcasters like PBS and NPR were bringing Sesame Street and All Things Considered to the American public.

A public service Web invites us to imagine services that don’t exist now, because they are not commercially viable, but perhaps should exist for our benefit, for the benefit of citizens in a democracy. We’ve seen a wave of innovation around tools that entertain us and capture our attention for resale to advertisers, but much less innovation around tools that educate us and challenge us to broaden our sphere of exposure, or that amplify marginalized voices. Digital public service media would fill a black hole of misinformation with educational material and legitimate news.

Recently, President Trump referenced a widely discredited study to make the absurd claim that Google manipulated search results in order to swing the 2016 presidential election toward Hillary Clinton. Though Trump’s claim is incorrect (and was widely shared with his massive following on Twitter, demonstrating the untrustworthiness of social media), it rests atop some uncomfortable facts. Research conducted by Facebook in 2013 demonstrated that it may indeed be possible for the platform to affect election turnout. When Facebook users were shown that up to six of their friends had voted, they were 0.39 percent more likely to vote than users who had seen no one vote. While the effect is small, Harvard Law professor Jonathan Zittrain observed that even this slight push could influence an election—Facebook could selectively mobilize some voters and not others. Election results could also be influenced by both Facebook and Google if they suppressed information that was damaging to one candidate or disproportionately promoted positive news about another.
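Zittrain’s point about a “slight push” is easiest to see with some back-of-the-envelope arithmetic. Only the 0.39 percent figure comes from the study described above; the audience size below is hypothetical, chosen simply to show the scale involved:

```python
# How a 0.39% turnout lift scales across a large audience.
# The audience size is hypothetical; only the 0.39% lift is from the study above.
users_shown_prompt = 60_000_000   # hypothetical users shown that friends had voted
turnout_lift = 0.0039             # 0.39% more likely to vote

extra_votes = users_shown_prompt * turnout_lift
print(f"{extra_votes:,.0f} additional votes")   # 234,000 additional votes
```

A nudge of that size, applied selectively to one candidate’s likely supporters, would exceed the margin of victory in many statewide races.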

This sort of manipulation would be even harder to detect than Russia’s disinformation campaign during the 2016 US election, because evidence would consist not of inaccurate posts, but subtle differences in the ranking of posts across millions of users. Furthermore, it may be illegal to audit systems to determine if manipulation is taking place. Computer science professor Christian Sandvig and a team of academics are currently suing the Department of Justice for the right to investigate racial discrimination on online platforms. That work could fall afoul of the Computer Fraud and Abuse Act, which imposes severe penalties on anyone found guilty of accessing a system like Facebook or Google in a way that “exceeds authorized access.”

One way to avoid a world in which Google throws our presidential election would be to allow academics or government bureaucrats to regularly audit the search engine. Another way would be to create a public-interest search engine with audits built in. The idea is not quite as crazy as it sounds. From 2005 to 2013, the French government spearheaded a collaborative project called Quaero, a multimedia search engine designed to index European cultural heritage. The project faltered before it could become a challenger to platforms like Google, but had it continued, EU law would have mandated that it have a high degree of transparency. In 2015, Wikimedia began planning a new search engine, the Wikimedia Knowledge Engine, to compete with systems like Wolfram Alpha and Siri, both of which provide data-driven, factual analysis in response to queries. A key part of the project’s design goals was auditability. (The project was abandoned when it created dissension within the Wikimedia community.)

We can imagine a search engine with more transparency about, for example, why it ranks certain sites above others in search results, with a process in place to challenge disputed rankings. But it might be more exciting to imagine services and tools that don’t yet exist, ones that will never be created by for-profit companies.

Consider social media. Research suggests that social platforms may be increasing political polarization, straining social ties, and causing us anxiety and depression. Facebook is criticized for creating echo chambers and “filter bubbles” in which people only encounter content—sometimes inaccurate content—that reinforces their prejudices. The resulting disinformation is, in part, a fault of its financial model. It happens because the platform optimizes for “engagement,” measured in time spent on the site and interactions with content, so the company has a disincentive to challenge users with difficult or uncomfortable information. The key reason misinformation spreads so fast and far is that people like sharing it. The stories that offer the biggest opportunities for engagement—and thus the stories that Facebook is built to direct our attention to—are stories that reinforce existing prejudices and inspire emotional reactions, whether or not they are accurate.

Can we imagine a social network designed in a different way: to encourage the sharing of mutual understanding rather than misinformation? A social network that encourages you to interact with people with whom you might have a productive disagreement, or with people in your community whose lived experience is sharply different from your own? Imagine a social network designed to allow constituents in a city to discuss local bills and plans before voting on them, or to permit recent immigrants to connect with potential allies. Instead of optimizing for raw engagement, networks like these would measure success in terms of new connections, sustained discussions, or changed opinions. These networks would likely be more resilient in the face of disinformation, because the behaviors necessary for disinformation to spread—the uncritical sharing of low-quality information—aren’t rewarded on these networks the way they are on existing platforms.


What’s preventing us from building such networks? The obvious criticisms are, one, that these networks wouldn’t be commercially viable, and, two, that they won’t be widely used. The first is almost certainly true, but this is precisely why public service models exist: to counter market failures.

The second is more complicated. The two biggest obstacles to launching new social networks in 2019 are Facebook and… Facebook. It’s hard to tear users away from a platform they are already accustomed to; then, if you do gain momentum with a new social network, Facebook will likely purchase it. A mandate of interoperability could help. Right now, social networks compete for your attention, asking you to install specific software on your phone to interact with them. But just as Web browsers allow us to interact with any website through the same architecture, interoperability would mean we could build social media browsers that put existing social networks, and new ones, in the same place.

The question isn’t whether a public social media is viable. It is if we want it to be. The question is what we’d want to do with it. To start, we need to imagine digital social interactions that are good for society, rather than corrosive. We’ve grown so used to the idea that social media is damaging our democracies that we’ve thought very little about how we might build new networks to strengthen societies. We need a wave of innovation around imagining and building tools whose goal is not to capture our attention as consumers, but to connect and inform us as citizens.
https://www.cjr.org/special_report/b...c-interest.php





Bernie Sanders Unveils $150 Billion Plan to Expand High-Speed Internet Access

And dismantle internet and cable monopolies
Makena Kelly

On Friday, Sen. Bernie Sanders (I-VT) announced a new plan aimed at expanding broadband internet access across the country and dismantling what he referred to as “internet and cable monopolies.”

In his sweeping “High-Speed Internet for All” proposal, Sanders calls for broadband to be considered a public utility, much like electricity, and calls access “a basic human right.” The plan would provide $150 billion in grants and technical assistance to states and communities for the purpose of building out their own “democratically controlled, co-operative, or open access broadband networks.”

As part of the new plan, Sanders defines “broadband” as 100 Mbps down and 10 Mbps up, which is significantly higher than the Federal Communications Commission standard of 25 Mbps down and 3 Mbps up. If elected president, Sanders said he would also work to restore net neutrality and ban internet and cable companies from instituting data caps and throttling consumer access to the internet.
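The practical difference between the two definitions is easy to quantify. As an illustration only (the speeds are the figures quoted above), here is the time each standard would take to download a one-gigabyte file:

```python
# Time to download a 1 GB file under each definition of "broadband".
# Illustrative arithmetic; the Mbps figures are those quoted in the article.
GB_IN_BITS = 8 * 10**9   # 1 decimal gigabyte = 8 billion bits

def download_seconds(mbps: float, size_bits: int = GB_IN_BITS) -> float:
    """Seconds to transfer size_bits at the given megabits-per-second rate."""
    return size_bits / (mbps * 10**6)

print(f"FCC standard  (25 Mbps down): {download_seconds(25):.0f} s")    # 320 s
print(f"Sanders plan (100 Mbps down): {download_seconds(100):.0f} s")   # 80 s
```

In other words, the proposed definition is four times faster downstream, cutting that download from over five minutes to under a minute and a half.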

Sanders’ plan also outlines a broader antitrust effort against internet and cable companies. If elected, he would use existing antitrust law to “bar service providers from also providing content and unwind anticompetitive vertical conglomerates.” This policy could potentially impact every major US carrier, particularly Comcast’s ownership of NBCUniversal, AT&T’s ownership of WarnerMedia, and Verizon’s ownership of AOL.

“We will break these monopolies up and closely regulate them to ensure they are providing consumers with acceptable service, and eliminate hidden fees, surprise bills, and other consumer-gouging practices,” Sanders said.

Sanders is just the latest 2020 Democratic primary contender to put out a plan to expand broadband access. In August, Sen. Elizabeth Warren (D-MA) proposed an $85 billion grant program to increase internet access for rural communities, and South Bend, Indiana, Mayor Pete Buttigieg put out his own $80 billion plan. Sen. Amy Klobuchar’s (D-MN) trillion-dollar infrastructure proposal also includes a pledge to make internet service universal.

“Access to the internet is a necessity in today’s economy, and it should be available for all,” Sanders said.
https://www.theverge.com/2019/12/6/2...-election-2020





Spying Tools Website Taken Down after UK Raids
BBC

A website selling hacking tools that let attackers take over victims' computers has been closed down after an international investigation.

The UK's National Crime Agency (NCA) said 14,500 people had bought spying tools from the Imminent Methods site.

Police searched more than 80 properties across the world to find those selling the tools.

They were also able to trace people who had bought the software and charge them with computer misuse offences.

'Serious criminality'

Imminent Methods sold a tool known as the Imminent Monitor Remote Access Trojan (Imrat) for about $25 (£19).

It gave the attacker full access to an infected device, letting them steal data, monitor what the victim was doing and even access their webcam.

The NCA said properties in Hull, Leeds, London, Manchester, Merseyside, Milton Keynes, Nottingham, Somerset and Surrey were among those searched.

The international operation was led by the Australian Federal Police.

The authorities were able to take down the website selling the software, which subsequently stopped the cyber-stalking tools from working.

The NCA's Phil Larratt said the tools had been used "to commit serious criminality" including "fraud, theft and voyeurism".

Police said 14 people had been arrested worldwide in connection with the sale and use of the software.

By seizing control of the website, police will have been able to "take a good look at what the site has been up to, including who has bought the illegal items", said Prof Alan Woodward, a cyber-security expert from the University of Surrey.

"The authorities now know how many users bought the malware on offer. They will now be working to unmask the 14,500 who were daft enough to buy this malware."

Crime as a service

"Organised crime gangs, as well as more petty criminals, are switching their attention to cyber-crime rather than, say, drugs, because it is perceived there will be a significant return on their investment and much lower risk," said Prof Woodward.

He said in addition to selling hacking tools, criminals also provide access to the infrastructure to power their malware, including so-called bulletproof hosting.

"They set themselves up in jurisdictions and in such a technical manner that they think they are untouchable by law enforcement agencies in the countries where their clients conduct their crimes," he told the BBC.

"All of the above is called crime as a service, and has been a significant trend in recent years."
https://www.bbc.com/news/technology-50601905





All New Cell Phone Users in China Must Now Have their Face Scanned
MIT Technology Review

The news: Customers in China who buy SIM cards or register new mobile-phone services must have their faces scanned under a new law that came into effect yesterday. China’s government says the new rule, which was passed into law back in September, will “protect the legitimate rights and interests of citizens in cyberspace.”

A controversial step: It can be seen as part of an ongoing push by China’s government to make sure that people use services on the internet under their real names, thus helping to reduce fraud and boost cybersecurity. On the other hand, it also looks like part of a drive to make sure every member of the population can be surveilled.

How do Chinese people feel about it? It’s hard to say for sure, given how strictly the press and social media are regulated, but there are hints of growing unease over the use of facial recognition technology within the country. From the outside, there has been a lot of concern over the role the technology will play in the controversial social credit system, and how it’s been used to suppress Uighur Muslims in the western region of Xinjiang.

Knock-on effect: How facial recognition plays out in China might have an impact on its use in other countries, too. Chinese tech firms are helping to create influential United Nations standards for the technology, The Financial Times reported yesterday. These standards will help shape rules on how facial recognition is used around the world, particularly in developing countries.
https://www.technologyreview.com/f/6...-face-scanned/





Cops Are Running Ring Camera Footage Through Their Own Facial Recognition Software Because Who's Going To Stop Them
Tim Cushing

Ring may be holding off on adding facial recognition tech to its already-problematic security cameras, but that's not stopping any of its not-exactly-end-users from doing it for themselves.

Ring is swallowing up the doorbell camera market with aggressive marketing that includes the free use of taxpayer-funded services. It calls over 600 law enforcement agencies "partners." In exchange for agency autonomy and free cameras, police departments all over the nation are pushing cameras on citizens and asking them to upload anything interesting to Ring's "I saw someone brown in my neighborhood" app, Neighbors.

The company that employs a head of a facial recognition division it claims it isn't using to implement facial recognition tech is handing out cameras like laced candy. Law enforcement agencies are snatching the cameras up. And they're snatching the footage up, using subpoenas to work around recalcitrant homeowners. Once they have the footage, they can keep it forever and share it with whoever they want.

They can also run the footage through whatever hardware or software they have laying around, as Caroline Haskins reports for BuzzFeed.

Amazon does not offer the ability to recognize faces in footage on its Ring doorbell cameras. But just one month after police in Chandler, Arizona, received 25 surveillance cameras for free from the company, the department's then–assistant chief discussed using its own facial recognition technology on Ring footage at a meeting of the International Association of Chiefs of Police, according to his slideshow obtained in a public records request.

In an April presentation titled “Leveraging Consumer Surveillance Systems,” Jason Zdilla discussed various consumer surveillance devices and platforms. Examples cited in the presentation included Ring cameras and the Neighbors app.

It's the perfect storm of unaccountability. Footage can be obtained from Ring with a subpoena. Ring hands it over with zero strings attached. Cop shop runs it through the Zoom Enhancer and any databases it has or has access to. Bingo: facial recognition in cameras supplied by a company that says it's not all that into facial recognition at the moment.


Now, you may be wondering why this is a big deal. Why does any of this matter when other surveillance systems with cloud storage are likely similarly responsive to subpoenas and place no restrictions on footage they hand over to law enforcement?

Well, two things: first, Ring claims all footage belongs to camera owners, but treats camera owners as if they're not a stakeholder when it comes to sharing their recordings with the government.

Second -- and far more importantly -- Ring aggressively courts police departments as "partners," turning consumer products into unofficial extensions of existing government camera networks. Ring hands out free cameras to cops and hands out even more freebies if cops convince homeowners to download the Neighbors app and share as much footage as possible. Ring also takes control of all PR efforts and official statements involving Ring doorbells that cops have given to citizens. And Ring coaches cops how to obtain footage without having to trouble the courts with a warrant.

This is unlike any other company in the home security business. Ring's assimilation of hundreds of law enforcement agencies blurs the line between public and private in the name of commerce. Taxpayers are contributing to their own co-opting into a surveillance mesh network propelled by one of the largest companies in the world. This isn't acceptable. But the longer Ring's expansion remains unchecked, the sooner its behavior will become normalized. And once it's normalized, it's over.
https://www.techdirt.com/articles/20...top-them.shtml





France Proposes Upload Filter Law, “Forgets” User Rights
Julia Reda

When the European Union adopted the new copyright directive, including its infamous Article 17, the upload filtering provision, it gave Member States time until June 2021 to introduce the new rules into their national copyright laws. France, the most fervent supporter of Article 17, apparently has no time to lose and just presented the new draft law designed to transpose Article 17 and some other parts of the copyright directive.

France’s implementation proposal is important to follow wherever you are in the EU, because it likely marks the worst-case scenario of how Article 17 could unfold if rightsholders get their way. Given that the French government has been the mouthpiece of the entertainment industry throughout the negotiations, perhaps one should not be surprised that it tries to interpret the new rules in the way most favorable to rightsholders. After all, president Emmanuel Macron personally intervened with Angela Merkel to secure Germany’s support for Article 17 in clear breach of the German coalition government agreement.

Yet the audacity with which the proposed French law ignores the safeguards included in the EU copyright directive to protect user rights should be baffling even to the most cynical commentator. The proposal needs to be adopted by the French legislator, so there is still a chance to improve it, but given that the vast majority of French Members of the European Parliament from different parties across the political spectrum voted for the EU directive, there is likely to be broad support for the national proposal as well.

Cultural Sovereignty!?

The new draft law on “audiovisual communication and cultural sovereignty in the digital age” covers a number of different subjects aside from copyright law, including the protection of minors and the regulation of video streaming platforms like Netflix. The title of the proposed law gives a glimpse into the mindset of French legislators, presenting the enforcement of copyright laws in the interest of private entertainment companies as a matter of asserting France’s “cultural sovereignty”. It frames Article 17 as a means to support the European entertainment industry in its conflicts with American tech companies. Users’ interests are at best an afterthought in this struggle for “cultural sovereignty”.

This blog post examines the part of the proposal which implements Article 17 of the Directive on copyright in the digital single market (found on pages 28 to 34 of the draft law). That part is divided into four sections, dealing with platform definition, platform obligations, transparency and user rights. The last section is a bit of a misnomer, because it ignores the vast majority of user rights included in Article 17 of the EU copyright directive, which were introduced in response to the massive protests against the potentially devastating effects of Article 17 on fundamental rights such as freedom of expression.

Platform Definition

Despite assertions by supporters of Article 17 that the law is aimed at huge social media companies like YouTube and Facebook, the French proposal still tries to extend the new obligations to as many platforms as possible. The definition included in section 1 of the proposal is mostly identical to the definition included in the EU copyright directive, which has been criticized for being exceedingly vague. No effort is made to narrow down what is meant by unclear terms from the directive such as “large amounts” of copyrighted content uploaded to a platform. Instead, the French law provides that a decree should define what is considered a large amount.

There is, however, one important change: The definition does not just include platforms that profit directly from user uploads of copyrighted content, but also those that do so indirectly. That could include platforms whose business model is not based on giving access to user uploads of copyrighted content (for example by placing advertisements next to that content), but who nevertheless allow such uploads. One example could be the dating app Tinder, which is based on a freemium business model, where users can pay for extra functionality which gives their dating profiles greater visibility. These profits are clearly not directly derived from giving users access to copyright-protected content, yet without the possibility to upload copyrighted content (pictures), the app clearly would not function, so it could be argued that it derives its profits indirectly from organizing the uploaded pictures.

The EU directive does mention indirect profit in the recitals, which are not legally binding, but not in the legal definition. It seems that the French government cherry-picks from the recitals, ignoring the guidance that is supposed to narrow down the definition and only including the parts that widen it. For example, the clarification from the recitals that Article 17 should only apply to platforms that compete with licensed content streaming services for the same audiences (which would clearly exclude platforms like Tinder) is completely missing from the French law.

Platform obligations

The core of the proposal, section 2, is mostly identical to the provisions of the directive. Platforms that fall under the definition established in section 1 are directly liable for copyright infringements by their users, unless the platform can show that it did everything in its power to obtain a license from the rightsholder and to block unauthorized user uploads of copyrighted content identified to the platform by rightsholders. Lighter obligations exist for startups that are less than three years old, as described in the EU directive.

The French draft law clarifies that rightsholders should be completely free in deciding whether to give a license to a platform, shutting down any efforts such as those discussed in Germany to avoid upload filters by introducing some kind of mandatory licensing solution. Whenever a rightsholder decides not to offer a platform a license, it will therefore have to use upload filters. This is particularly interesting given that the German government announced that it would try to cooperate with other European countries to try to find a solution that doesn’t rely on upload filters. France, one of the largest EU Member States, is clearly not interested in such a solution.

Transparency

Section 3 includes some transparency obligations that platforms have towards rightsholders (not towards users, of course!) about the types of measures used to block unauthorized content. The main difference from the EU directive is that the French proposal makes it clear that platforms do not have to reveal any trade secrets in order to comply with the transparency obligations. This addition could severely limit the chances of the public to inspect upload filters used by private companies for potential fundamental rights issues, as companies will declare the detailed functioning of their upload filters a trade secret. Apparently, the only thing that France loves more than giving authors the right to stop the flow of information is giving companies the right to stop the flow of information.

User rights

The only part of this section that’s faithful to the directive is the title. Remember when the European Commission claimed that your memes will be safe? Memes would not be deleted, the Commission argued, because Article 17 makes the exceptions for parody, caricature, pastiche and quotation mandatory and clarifies that Member States have to make sure that users can benefit from these exceptions in practice. It also states that platforms cannot be forced to generally monitor all user uploads (which is necessary for any upload filter) and that legal uploads must not be deleted as a consequence of implementing Article 17.

Well, France “forgot” to mention all of that in its national proposal. The copyright exceptions under French law stay completely unchanged, although they are notoriously patchy and do not cover all situations that may arise on online platforms, such as quoting from a video. France also completely fails to ensure that users can benefit from these exceptions in practice when they upload something to a platform. Instead of ensuring that platforms do not override existing copyright exceptions in their terms and conditions, as the directive requires, the French proposal simply asks platforms to inform users about the existence of copyright exceptions under national law. The decisive parts of Article 17, which state that platforms must allow users to actually benefit from these exceptions, and that such legal content must not be blocked in the first place, are completely missing.

It’s clear from the creatively named “user rights” section of the draft law that copyrighted content gets blocked by default and users can only benefit from copyright exceptions if they complain after their content has already been blocked. Of course, getting your reaction gif or live stream unblocked a couple of days after the fact is completely useless, which explains why very few users ever make use of such complaint mechanisms where they exist. Under the French proposal, platforms have to offer a mechanism to deal with user complaints about blocked content (so the procedure is clearly “block first, ask questions later”).

Contrary to what the directive says, rightsholders do not have to justify their initial requests to block content; they only have to respond once a user challenges the blocking of one of their uploads. During this dispute resolution, the content stays blocked. This opens the door to copyfraud, where companies falsely claim to hold rights in other people’s creations and the original author has to complain to have their own work unblocked. Although the directive says that all decisions by a platform to block content must be subject to human review, the French proposal only requires this in cases where a user complains after their content has already been blocked. Outrageous mistakes by fully automated upload filters are likely to become a lot more common under this proposal.

To add insult to injury, when users or rightsholders want to complain about the result of the redress mechanism offered by the platform, they are supposed to turn to a new regulator called ARCOM, which is the direct successor of HADOPI, the organization best known for administering the infamous “three strikes” rule, which could block users from accessing the Internet if they repeatedly violated copyright law. This is hardly a regulator that is known for impartially weighing the competing interests of users and rightsholders.

Copyright Fight: Round 2

The French draft law confirms the worst fears of the EU copyright directive’s critics. The strictest version of the upload filter provision is proposed, while any safeguards that have been introduced to respond to the huge public protests are simply ignored. It’s hard to imagine that such a selective implementation of an EU directive would be accepted by the courts, but before it could come to a lawsuit, a lot of damage would already be done. Smaller countries often tend to copy the national implementations of EU law proposed by the larger countries, so there is a significant danger that France could set the standard for copyright enforcement in the entire EU. The European Commission should remind the French government of its obligation to implement the entire directive, not just the parts that benefit large entertainment companies. With a French Commissioner in charge of copyright issues in the new European Commission, however, that is unlikely to happen.

It is therefore once again up to users to sound the alarm about this most dangerous version of Article 17 yet! The French parliament can still stop this law from advancing as proposed. We must also pressure other European governments not to follow this terrible example and to take user rights seriously.
https://juliareda.eu/2019/12/french_uploadfilter_law/

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

November 30th, November 23rd, November 16th, November 9th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black