P2P-Zone  

11-11-20, 07:27 AM
JackSpratts
 
 
Peer-To-Peer News - The Week In Review - November 14th, ’20

Since 2002



Volume XIX, Issue Number I

November 14th, 2020




RIAA Abuses DMCA to Take Down Popular Tool for Downloading Online Videos
Elliot Harmon

"youtube-dl" is a popular free software tool for downloading videos from YouTube and other user-uploaded video platforms. GitHub recently took down youtube-dl’s code repository at the behest of the Recording Industry Association of America, potentially stopping many thousands of users, and other programs and services, that rely on it.

On its face, this might seem like an ordinary copyright takedown of the type that happens every day. Under the Digital Millennium Copyright Act (DMCA), a copyright holder can ask a platform to take down an allegedly infringing post and the platform must comply. (The platform must also allow the alleged infringer to file a counter-notice, requiring the copyright holder to file a lawsuit if she wants the allegedly infringing work kept offline.) But there’s a huge difference here with some frightening ramifications: youtube-dl doesn’t infringe on any RIAA copyrights.

youtube-dl doesn’t use RIAA-member labels’ music in any way. The makers of youtube-dl simply shared information with the public about how to perform a certain task—one with many completely lawful applications.
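
For readers unfamiliar with the tool, here is a minimal sketch of one such lawful use, driving youtube-dl through its Python API (the youtube_dl package) to archive a video the user has the right to copy. The URL, format string, and output template below are illustrative placeholders, not anything taken from the takedown itself.

```python
# Minimal sketch: archiving a video you are entitled to copy, using the
# youtube_dl Python package. URL and options here are placeholders.
from youtube_dl import YoutubeDL

options = {
    "format": "best",                 # pick the best single audio+video stream
    "outtmpl": "%(title)s.%(ext)s",   # name the file after the video's title
}

with YoutubeDL(options) as ydl:
    ydl.download(["https://example.com/a-video-you-may-lawfully-copy"])
```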

RIAA’s argument relies on a different section of the DMCA, Section 1201. DMCA 1201 says that it’s illegal to bypass a digital lock in order to access or modify a copyrighted work. Copyright holders have argued that it’s a violation of DMCA 1201 to bypass DRM even if you’re doing it for completely lawful purposes; for example, if you’re downloading a video on YouTube for the purpose of using it in a way that’s protected by fair use. (And thanks to the way that copyright law has been globalized via trade agreements, similar laws exist in many other jurisdictions too.) RIAA argues that since youtube-dl could be used to download music owned by RIAA-member labels, no one should be able to use the tool, even for completely lawful purposes.

This is an egregious abuse of the notice-and-takedown system, which is intended to resolve disputes over allegedly infringing material online. Again, youtube-dl doesn’t use RIAA-member labels’ music in any way. The makers of youtube-dl simply shared information with the public about how to perform a certain task—one with many completely lawful applications.
https://www.eff.org/deeplinks/2020/1...g-online-video





Biden Win Could Curb Deals, Revive Net Neutrality in FCC Pivot

Todd Shields

A victory by Joe Biden in the Nov. 3 election could usher in an abrupt change in the nation’s telecommunications policy, restoring so-called net neutrality regulation and shifting the Republican drive to rein in social media outlets, among other things.

“Democrats are more comfortable with an activist role,” Cowen & Co. analyst Paul Gallant said in an interview.

Biden hasn’t talked much about the FCC during the campaign, but his party’s platform is specific. It calls for restoring net neutrality rules put in place under then-President Barack Obama when Biden served as vice president and taking a harder line on telecommunications mergers.

And the Democratic commissioners on the FCC have already objected to the agency’s steps to strip the social media companies of liability protections they have for what users post in response to alleged favoritism by the platforms for liberal points of view.

That last issue will be in the spotlight on Wednesday when the CEOs of Facebook Inc., Twitter Inc. and Alphabet Inc.’s Google are to appear before the Senate Commerce Committee that is considering changes to Section 230 of the Communications Decency Act that shields their platforms from liability.

Current FCC Chairman Ajit Pai, who was appointed to the post by President Donald Trump, has taken up a Trump administration demand for a tougher social media policy. Earlier in his term he reversed Democratic policies on net neutrality and waved through T-Mobile US Inc.’s bid to buy Sprint Corp.

If Biden wins, the FCC, which currently is at full five-member strength, could begin the new presidential term with a 2-to-1 Democratic majority, allowing it to move quickly. A Republican commissioner is leaving at the end of the current Congress and chairmen traditionally depart as a new administration arrives.

Pai hasn’t indicated what he’ll do. He can stay on as a commissioner but a new president could strip him of the chairmanship and its power to control what policies advance to a vote. An FCC spokeswoman declined to comment when asked about Pai’s plans.

If Pai stays after a Biden win, “he’s denuded of power to do much of anything except to block things,” said Andrew Jay Schwartzman, a Washington telecommunications lawyer.

The dynamic means initiatives left incomplete by Pai -- for instance the social-media rulemaking -- could be targeted for elimination by Democrats.

Companies “must brace for impacts from the 2020 election,” Bloomberg Intelligence analyst Matthew Schettenhelm said in a Sept. 30 note.

Policy areas likely to receive attention include:

Social Media

Both the Democratic commissioners -- and an outgoing Republican -- have criticized efforts by Pai and the Trump administration to weaken Section 230 legal protections.

Ultimately, changes to the law may require an act of Congress. But the FCC’s general counsel recently said he believes Pai has authority to alter the agency’s interpretation of the measure and Pai -- at the urging of Trump -- says he will.

“There is a very broad political constituency on both the right and left that want to change it,” Blair Levin, a Washington-based analyst for New Street Research, said in an interview. “It’s a clear and easy target.”

There is disagreement about the changes that are needed, however. Democrats want the social media companies to do more to control disinformation, while Republicans say, with little evidence, that the sites unfairly suppress conservative views.

Net Neutrality

Restoring net neutrality rules that were eliminated by Pai in 2017 has become a popular cause on the left. The rules written by Democrats in 2015 barred broadband providers such as AT&T Inc. and Comcast Corp. from unfairly using their management of data traffic to favor their own content over rivals’ fare. But a Democratic-majority FCC would likely stop short of dictating prices charged by broadband companies.

“Are they going to reimpose net neutrality? Yep. Are they going to impose price regulation? I don’t think so,” said Gallant, the Cowen analyst.

Arguments over the demise of net neutrality rules continue. Pai recently said more and faster broadband connections serve to refute critics. Commissioner Jessica Rosenworcel, a Democrat, said the agency’s policy makes it easier for broadband companies to block websites.

Digital Divide

Another top Democratic priority is closing the digital divide that, according to the party platform, leaves more than 20 million Americans without high-speed internet access.

The problem is a centerpiece for two possible contenders to be named by a President Biden to lead the agency. Mignon Clyburn, a former FCC commissioner, has called for larger subsidies to expand broadband access. And Rosenworcel, the FCC commissioner, advocates steps to ensure internet access for all. Lobbyists and analysts cited both as possible choices to lead the agency.

Clyburn is considered to have an edge if she wants to return to the agency where she spent nine years including time as interim chairman: she is the daughter of House Majority Whip James Clyburn, whose support helped Biden win the Democratic primary in Clyburn’s South Carolina and set the former vice president on his path to the nomination.

Broadband Subsidies

On Capitol Hill, Representative Clyburn has called for investing $80 billion over five years to expand broadband access. Mignon Clyburn has backed a call to boost the current federal broadband subsidy to $50 a month, from less than $10. Since leaving the agency in 2018, she has formed a consultancy and joined the boards of TV and film company Lions Gate Entertainment Corp. and waste manager Charah Solutions Inc., according to data compiled by Bloomberg. When reached by telephone Oct. 16 she declined to comment.

Mergers

Deals to combine wireless companies would face tougher scrutiny than they have under Republican leadership. Pai’s FCC cleared T-Mobile US Inc. to buy Sprint Corp. over Democratic objections.

“It’s going to be a new day,” said Gigi Sohn, a former Democratic FCC aide. “I wouldn’t take any merger for granted.”

The T-Mobile deal, which was finalized in April, reduced the number of U.S. national mobile providers from four to three. Democrats may seek to bolster competition by finding ways to strengthen wireless offerings by Dish Network Corp. and cable providers, Gallant said.

Data Caps

The FCC may move to bar broadband providers from exempting their own entertainment or media offerings from data caps. Pai squelched an agency move toward doing so soon after taking office in 2017. Such rules would give Netflix Inc. and Dish’s streaming service a clearer path to consumers, since viewers wouldn’t risk extra fees by loading up on those companies’ videos.

Media Ownership, Spectrum

The Supreme Court is to consider letting the FCC ease some rules that limit radio and TV station ownership. A ruling may be handed down by June, and could let the rule changes go ahead. A Democratic FCC wouldn’t be likely to pursue further changes aggressively.

Democrats, like Republicans, will work to allocate more frequencies for use by fast 5G signals, said Gallant and Schettenhelm.
https://www.msn.com/en-us/money/comp...ot/ar-BB1aowPl





Trudeau Promises to Connect 98% of Canadians to High-Speed Internet by 2026

Announcement comes as Canadians spend more time online due to the pandemic
CBC

After some pandemic-related delays, the Liberal government says it's now on track to connect 98 per cent of Canadians to high-speed internet by 2026.

The announcement comes as more Canadians find themselves living online while stuck at home due to COVID-19 restrictions.

Prime Minister Justin Trudeau and a handful of cabinet ministers held a news conference in Ottawa to launch the $1.75 billion universal broadband fund — a program unveiled in the federal government's 2019 budget and highlighted on the campaign trail and in September's throne speech. Most of the money was announced in last year's budget.

"We were ready to go in March with the new Universal Broadband Fund and then the pandemic hit," Rural Economic Development Minister Maryam Monsef told reporters.

The prime minister said the government is now on track to connect 98 per cent of Canadians to high-speed internet by 2026 — an increase over the previously promised 95 per cent benchmark — and to link up the rest by 2030.

"These are ambitious targets and we're ready to meet them," Trudeau said.

About $150 million from the fund will be freed up to fund projects aimed at getting communities connected by next fall.

Senior officials with the department of Innovation, Science and Economic Development said applications will be reviewed on an ongoing basis until Jan. 15, 2021, with a goal of having projects completed by mid-November, 2021.

Deciding who gets upgraded connectivity first will depend on the service providers applying, they said.

Josh Tabish is corporate communications manager at the Canadian Internet Registration Authority, the not-for-profit agency that manages the .ca internet domain. He said he's hoping that a rapid build will bring relief to many Canadians over the next year.

"In terms of action, I think this is great news for Canadians who are stuck at home suffering from slow, crappy internet," he said.

But Tabish also said he hopes the government will look at need when deciding which projects should get approval first. His group has been working to identify the communities that have the slowest rates in Canada.

"What we really want to see happen is communities who are suffering with slow, sluggish connectivity get those upgrades first," he said.

The prime minister said the government also has reached a $600 million agreement with Telesat for satellite capacity to improve broadband service in remote areas and in the North.

"Good reliable internet isn't a luxury. It's a basic service," he said.

"Now more than ever, a video chat cutting out during a meeting or a connection that's too slow to upload a school assignment — that's not just a hassle, that's a barrier."

Tories call out timelines

The Opposition Conservatives criticized the government's timelines, arguing Canadians need better access now more than ever.

"This is absolutely unacceptable and a slap in the face to the nearly one million Canadians who don't have internet access at home, much less a reliable cell phone signal," said MP John Nater, Conservative critic for rural economic development.

"For months, Canada's Conservatives have been demanding concrete action to connect Canadians. We will continue to advocate for lower cell phone prices and for real improvements to broadband internet services, so that Canadians living in rural and remote areas have consistent access to these essential services."

The CRTC declared broadband internet a basic telecommunications service in 2016. But its data suggest just 40.8 per cent of rural Canadian households have access to download speeds of at least 50 megabits per second (Mbps) and upload speeds of 10 Mbps.

The government said those speeds will allow Canadians to work and learn online and access telehealth services.
https://www.cbc.ca/news/politics/bro...rnet-1.5794901





Voters Overwhelmingly Back Community Broadband in Chicago and Denver

Voters in both cities made it clear they’re fed up with monopolies like Comcast.
Karl Bode

Voters in both Denver and Chicago have overwhelmingly thrown their support behind local community broadband projects, joining the hundreds of U.S. communities that have embraced home-grown alternatives to entrenched telecom monopolies.

In Chicago, roughly 90 percent of voters approved a non-binding referendum question that asked: “should the city of Chicago act to ensure that all the city's community areas have access to broadband Internet?" The vote opens the door to the city treating broadband more like an essential utility, potentially in the form of community-run fiber networks.

In Denver, 83.5 percent of the city’s electorate cast ballots in favor of question 2H, which asked if the city should be exempt from a 2005 law, backed by local telecom monopolies, restricting Colorado towns and cities from being able to build their own local broadband alternatives.

Colorado is one of nearly two-dozen states that have passed laws, usually directly written by regional telecom monopolies, that hamstring or prevent the creation of such networks.

But in Colorado’s case, the state’s 2005 law included language that allows local towns and cities to opt-out of the restriction if voters agree to do so.

Christopher Mitchell is director of community broadband networks for the Institute for Local Self-Reliance, a nonprofit that advocates for local solutions for sustainable development. He told Motherboard that in the 15 years since Colorado’s law was passed, 140 communities have opted out, opening the door to citizen-built ISPs like Nextlight in Longmont.

“I think the margin in Chicago and Denver is remarkable,” Mitchell said. “When we work with communities where half the residents have a cable monopoly and the other half don't have any broadband, the demand for something better is strong among both populations.”

Studies suggest that 42 million Americans lack access to any broadband whatsoever, double official FCC estimates. Mitchell’s organization estimates that roughly 83 million more Americans live under a broadband monopoly, usually Comcast. Tens of millions more live under a duopoly, usually consisting of Comcast and a largely apathetic regional telco selling aging DSL lines.

With muted competition and regulators and lawmakers largely loyal to entrenched monopolies, the end result is a broken market in which U.S. consumers pay some of the highest prices in the developed world for slow broadband and abysmal customer service.

“For years, we have said this is a major concern for voters but local leaders remain too intimidated by the big monopoly cable and telephone companies to act on it,” Mitchell said. “Maybe now we will see it taken more seriously.”

A Harvard study found that community-based broadband alternatives routinely offer faster speeds, lower prices, and better customer service than most regional monopolies. Such networks can also drive incumbent ISPs, all too comfortable with the lack of competition, to deploy upgrades and price reductions that wouldn’t have materialized otherwise.

Mitchell’s group argues that while community-backed broadband isn’t a silver bullet, such networks also tend to be more responsive to local complaints because they’re owned and operated by local residents with a vested interest in the success of their neighborhoods.

“I will be surprised if we don't see more campaigns for non-binding referenda in communities around the country that build on what organizers in Denver and Chicago have done,” Mitchell said.
https://www.vice.com/en/article/xgzx...ago-and-denver





Broadband Power Users Explode, Making Data Caps More Profitable for ISPs

Usage increase “confirms the value” of data caps for ISP revenue, vendor says.
Jon Brodkin

The number of broadband "power users"—people who use 1TB or more per month—has doubled over the past year, ensuring that ISPs will be able to make more money from data caps.

In Q3 2020, 8.8 percent of broadband subscribers used at least 1TB per month, up from 4.2 percent in Q3 2019, according to a study released yesterday by OpenVault. OpenVault is a vendor that sells a data-usage tracking platform to cable, fiber, and wireless ISPs and has 150 operators as customers worldwide. The 8.8- and 4.2-percent figures refer to US customers only, an OpenVault spokesperson told Ars.

More customers exceeding their data caps will result in more overage charges paid to ISPs that impose monthly data caps. Higher usage can also boost ISP revenue because people using more data tend to subscribe to higher-speed packages.

"As traffic has exploded during the pandemic, data aggregated from our network management tools confirms the value of usage-based billing in prompting subscribers to self-align their speed plans with their consumption," OpenVault CEO Mark Trudeau said in a press release. This helps ISPs boost their average revenue per user, he said.

For example, ISPs that impose data caps had 25-percent more gigabit-speed subscribers than ISPs that don't impose data caps, possibly because ISPs that impose caps "often provide higher usage quotas for the gigabit tier than the slower bandwidth tiers," OpenVault said. "This provides incentive to subscribers of UBB [usage-based billing] operators to upgrade to the faster speeds." Overall, 5.6 percent of subscribers in OpenVault's dataset paid for gigabit speeds, up from 2.5 percent a year ago.

Temporary break from data caps

Customers of Comcast and other ISPs got a break from data caps for a few months this year when operators pledged to suspend the limits during the pandemic. But Comcast reinstated its data cap for cable customers on July 1, and AT&T reinstated data caps on DSL and fixed-wireless customers. Currently, AT&T is scheduled to reimpose data caps on fiber-to-the-home and fiber-to-the-node customers on January 1.

Comcast did raise its monthly cap from 1TB to 1.2TB on July 1, so not all terabyte users have to pay overage charges. Comcast also lowered the price of unlimited data from $50 to $30 a month, or $25 for customers who lease an xFi Gateway. Without the unlimited-data upgrade, Comcast overage charges are $10 for each additional block of 50GB.
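
As a rough illustration of how those overage terms add up, the sketch below applies the figures quoted above (a 1.2TB cap and $10 per additional 50GB block). It is back-of-the-envelope arithmetic only; actual billing details vary by plan.

```python
import math

# Back-of-the-envelope overage estimate using the terms quoted above:
# a 1.2 TB (1,200 GB) monthly cap and $10 per additional 50 GB block.
def estimate_overage(usage_gb, cap_gb=1200, block_gb=50, block_price_usd=10):
    excess_gb = max(0, usage_gb - cap_gb)
    blocks = math.ceil(excess_gb / block_gb)
    return blocks * block_price_usd

print(estimate_overage(1500))  # 300 GB over the cap -> 6 blocks -> $60
print(estimate_overage(1000))  # under the cap -> $0
```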

AT&T imposes monthly data caps of 150GB on DSL, 250GB on fixed wireless, and 1TB on its faster wireline services.

US broadband networks have performed pretty well during the pandemic, at least outside of areas where modern broadband simply isn't available, demonstrating again that data caps are a business decision rather than a necessity for network management.

2TB users also on the rise

The number of "extreme power users," those who use at least 2TB per month, was up to about 1 percent of broadband customers in OpenVault's Q3 2020 data. That's nearly a three-fold increase since Q3 2019 when it was 0.36 percent.

OpenVault said the average US broadband household uses 384GB a month, up from 275GB a year ago. The median figures were 229GB, up from 174GB a year ago. Usage increases happen every year, but OpenVault said this year's boost was fueled partly by the pandemic.

"While bandwidth usage is remaining relatively flat quarter over quarter, it is not retreating to pre-pandemic levels, indicating that COVID-19-driven usage growth has established a new normal pattern for bandwidth usage," OpenVault said. European usage also went up during the pandemic but remained below US levels, with an average of 225GB and median of 156GB in Q3 2020.

The number of customers who have to pay overage charges may be limited somewhat by people intentionally restricting data usage to avoid the cap. Among US customers with unlimited data plans, 9.4 percent exceeded 1TB and 1.2 percent exceeded 2TB, OpenVault said in yesterday's report. For customers with data caps, 8.3 percent exceeded 1TB and 0.9 percent exceeded 2TB.

In potentially bad news for customers, OpenVault seems to be urging ISPs that haven't imposed data caps to adopt them. "The goal for network operators is to ensure that subscribers who consume the most bandwidth are in faster, higher ARPU [average revenue per user] speed tiers," OpenVault said. "Usage-based billing operators are achieving this goal more, on average, than network operators who utilize flat-rate unlimited billing."
https://arstechnica.com/tech-policy/...able-for-isps/





Ink-Stained Wretches: The Battle for the Soul of Digital Freedom Taking Place Inside Your Printer
Cory Doctorow

Since its founding in the 1930s, Hewlett-Packard has been synonymous with innovation, and many's the engineer who had cause to praise its workhorse oscillators, minicomputers, servers, and PCs. But since the turn of this century, the company's changed its name to HP and its focus to sleazy ways to part unhappy printer owners from their money. Printer companies have long excelled at this dishonorable practice, but HP is truly an innovator, the industry-leading Darth Vader of sleaze, always ready to strong-arm you into a "deal" and then alter it later to tilt things even further to its advantage.

The company's just beaten its own record, converting its "Free ink for life" plan into a "Pay us $0.99 every month for the rest of your life or your printer stops working" plan.

Plenty of businesses offer some of their products on the cheap in the hopes of stimulating sales of their higher-margin items: you've probably heard of the "razors and blades" model (falsely) attributed to Gillette, but the same goes for cheap Vegas hotel rooms and buffets that you can only reach by running a gauntlet of casino "games," and cheap cell phones that come locked into a punishing, eternally recurring monthly plan.

Printers are grifter magnets, and the whole industry has been fighting a cold war with its customers since the first clever entrepreneur got the idea of refilling a cartridge and settling for mere astronomical profits, thus undercutting the manufacturers' truly galactic margins. This prompted an arms race in which the printer manufacturers devote ever more ingenuity to locking third-party refills, chips, and cartridges out of printers, despite the fact that no customer has ever asked for this.

Lexmark: First-Mover Advantage

But for all the dishonorable achievements of the printer industry's anti-user engineers, we mustn't forget the innovations their legal departments have pioneered in the field of ink- and toner-based bullying. First-mover advantage here goes to Lexmark, whose lawyers ginned up an (unsuccessful) bid to use copyright law to prevent a competitor, Static Controls, from modifying used Lexmark toner cartridges so they'd work after they were refilled.

A little more than a decade after its failure to get the courts to snuff out Static Controls, Lexmark was actually sold off to Static Controls' parent company. Sadly, Lexmark's aggressive legal culture came along with its other assets, and within a year of the acquisition, Lexmark's lawyers were advancing a radical theory of patent law to fight companies that refilled its toner cartridges.

HP: A Challenger Appears

Lexmark's fights were over laser-printer cartridges, filled with fine carbon powder that retailed at prices that rivaled diamonds and other exotic forms of that element. But laser printers are a relatively niche part of the printer market: the real volume action is in inkjet printers: dirt-cheap, semi-disposable, and sporting cartridges (half-) full of ink priced to rival vintage Veuve-Clicquot.

For the inkjet industry, ink was liquid gold, and they innovated endlessly in finding ways to wring every drop of profit from it. Companies manufactured special cartridges that were only half-full for inclusion with new printers, so you'd have to quickly replace them. They designed calibration tests that used vast quantities of ink, and, despite all this calibration, never could quite seem to get a printer to register that there was still lots of ink left in the cartridge that it was inexplicably calling "empty" and refusing to draw from.

But all this ingenuity was at the mercy of printer owners, who simply did not respect the printer companies' shareholders enough to voluntarily empty their bank accounts to refill their printers. Every time the printer companies found a way to charge more for less ink, their faithless customers stubbornly sought out rival companies who'd refill or remanufacture their cartridges, or offer compatible cartridges of their own.

Security Is Job One

Shutting out these rivals became job one. When your customers reject your products, you can always win their business back by depriving them of the choice to patronize a competitor. Printer cartridges soon bristled with "security chips" that use cryptographic protocols to identify and lock out refilled, third-party, and remanufactured cartridges. These chips were usually swiftly reverse-engineered or sourced out of discarded cartridges, but then the printer companies used dubious patent claims to have them confiscated by customs authorities as they entered the USA. (We’ve endorsed legislation that would end this practice.)

Here again, we see the beautiful synergy of anti-user engineering and anti-competition lawyering. It's really heartwarming to see these two traditional rival camps in large companies cease hostilities and join forces.

Alas, the effort that went into securing HP from its customers left precious few resources to protect HP customers from the rest of the world. In 2011, the security researcher Ang Cui presented his research on HP printer vulnerabilities, "Print Me If You Dare."

Cui found that simply by hiding code inside a malicious document, he could silently update the operating system of HP printers when the document was printed. His proof-of-concept code was able to seek out and harvest Social Security and credit-card numbers; probe the local area network; and penetrate the network's firewall and allow him to freely roam it using the compromised printer as a gateway. He didn't even have to trick people into printing his gimmicked documents to take over their printers: thanks to bad defaults, he was able to find millions of HP printers exposed on the public Internet, any one of which he could have hijacked with unremovable malware merely by sending it a print-job.

The security risks posed by defects in HP's engineering are serious. Criminals who hack embedded systems like printers and routers and CCTV cameras aren't content with attacking the devices' owners—they also use these devices as botnets for devastating denial of service and ransomware attacks.

For HP, though, the "security update" mechanism built into its printers was a means for securing HP against its customers, not securing those customers against joining botnets or having the credit card numbers they printed stolen and sent off to criminals.

In March 2016, HP inkjet owners received a "security update available" message on their printers' screens. When they tapped the button to install this update, their printers exhibited the normal security update behavior: a progress bar, a reboot, and then nothing. But this "security update" was actually a ticking bomb: a countdown timer that waited for five months before it went off in September 2016, activating a hidden feature that could detect and reject all third-party ink cartridges.

HP had designed this malicious update so that infected printers would be asymptomatic for months, until after parents had bought their back-to-school supplies. The delay ensured that warnings about the "security update" came too late for HP printer owners, who had by then installed the update themselves.

HP printer owners were outraged and told the company so. The company tried to weather the storm, first by telling customers that they'd never been promised their printers would work with third-party ink, then by insisting that the lockouts were to ensure printer owners didn't get "tricked" with "counterfeit" cartridges, and finally by promising that future fake security updates would be clearly labeled.

HP never did disclose which printer models it attacked with its update, and a year later it did it again, once more waiting until after the back-to-school season to stage its sneak attack, stranding cash-strapped parents with a year's worth of useless ink cartridges for their kids' school assignments.

You Don't Own Anything

Other printer companies have imitated HP's tactics but HP never lost its edge, finding new ways to transfer money from printer owners to its tax-free offshore accounts.

HP's latest gambit challenges the basis of private property itself: a bold scheme! With the HP Instant Ink program, printer owners no longer own their ink cartridges or the ink in them. Instead, HP's customers have to pay a recurring monthly fee based on the number of pages they anticipate printing from month to month; HP mails subscribers cartridges with enough ink to cover their anticipated needs. If you exceed your estimated page count, HP bills you for every additional page (if you choose not to pay, your printer refuses to print, even if there's ink in the cartridges).

If you don't print all your pages, you can "roll over" a few of those pages to the next month, but you can't bank a year's worth of pages to, say, print out your novel or tax paperwork. Once you hit your maximum number of "banked" pages, HP annihilates any other pages you've paid for (but continues to bill you every month).
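
To make the mechanics concrete, here is a toy model of the subscription logic as described: a monthly page allowance, a capped rollover "bank," and per-page overage billing. The numbers used are placeholders for illustration, not HP's actual pricing.

```python
# Toy model of a metered-pages plan: allowance + capped rollover bank,
# with per-page overage billing. All figures are illustrative placeholders.
def settle_month(pages_printed, allowance, banked, bank_cap, overage_per_page):
    available = allowance + banked
    if pages_printed > available:
        overage_fee = (pages_printed - available) * overage_per_page
        new_bank = 0
    else:
        overage_fee = 0.0
        # Unused pages roll over, but only up to the bank cap;
        # anything above the cap is simply lost.
        new_bank = min(available - pages_printed, bank_cap)
    return overage_fee, new_bank

fee, bank = settle_month(pages_printed=40, allowance=50, banked=0,
                         bank_cap=100, overage_per_page=0.10)
print(fee, bank)  # 0.0 10 -> ten unused pages roll over to next month
```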

Now, you may be thinking, "All right, but at least HP's customers know what they're getting into when they take out one of these subscriptions," but you've underestimated HP's ingenuity.

HP takes the position that its offers can be retracted at any time. For example, HP's “Free Ink for Life” plan offered printer owners 15 free pages per month, as a means of tempting users to try out its ink subscription and of picking up some extra revenue in those months when these customers exceeded their 15-page limit.

But Free Ink for Life customers got a nasty shock at the end of last month: HP had unilaterally canceled their "free ink for life" plan and replaced it with "a $0.99/month for all eternity or your printer stops working" plan.

Ink in the Time of Pandemic

During the pandemic, home printers have become far more important to our lives. Our kids' teachers want them to print out assignments, fill them in, and upload pictures of the completed work to Google Classroom. Government forms and contracts have to be printed, signed, and photographed. With schools and offices mostly closed, these documents are being printed from our homes.

The lockdown has also thrown millions out of work and subjected millions more to financial hardship. It's hard to imagine a worse time for HP to shove its hands deeper into its customers' pockets.

Industry Leaders

The printer industry leads the world when it comes to using technology to confiscate value from the public, and HP leads the printer industry.

But these are infectious grifts. For would-be robber-barons, "smart" gadgets are a moral hazard, an irresistible temptation to use those smarts to reconfigure the very nature of private property, such that only companies can truly own things, and the rest of us are mere licensors, whose use of the devices we purchase is bound by the ever-shifting terms and conditions set in distant boardrooms.

From Apple to John Deere to GM to Tesla to Medtronic, the legal fiction that you don't own anything is used to force you to arrange your affairs to benefit corporate shareholders at your own expense.

And when it comes to the "razors and blades" business model, embedded systems offer techno-dystopian possibilities that no shaving company ever dreamed of: the ability to use law and technology to prevent competitors from offering their own consumables. From coffee pods to juice packets, from kitty litter to light-bulbs, the printer-ink cartridge business model has inspired many imitators.

HP has come a long way since the 1930s, reinventing itself several times, pioneering personal computers and servers. But the company's latest reinvention as a wallet-siphoning ink grifter is a sad turn indeed, and the only thing worse than HP’s decline is the many imitators it has inspired.
https://www.eff.org/deeplinks/2020/1...e-your-printer





Slingbox Discontinued, Services Sunsetting
Chris Burns

Slingbox will no longer function as of November 9, 2022. Sling Media, a subsidiary of DISH, announced on November 9, 2020 that it is closing up shop and that all Slingbox servers will be permanently taken offline 24 months after that announcement date. When November 2022 arrives, all Slingbox devices and services “will become inoperable.”

Per the announcement, Sling said “most Slingbox models will continue to work normally, but the number of supported devices for viewing will steadily decrease as versions of the SlingPlayer apps become outdated and/or lose compatibility.” That goes for every single Sling device you’ve ever owned or (barring madness) will ever own in the future.

In a general Q&A session posted by Sling Media, the sunsetting of services was explained. “We’ve had to make room for new innovative products so that we can continue to serve our customers in the best way possible.” This might seem like an absolutely goofy way of saying there’ll be different sorts of Sling products in the future, but the company went on to say that they will not be releasing any new products, point blank.

Sling Media’s Slingbox products will be discontinued. Sling Media is owned by DISH. It’s entirely possible that some of the functionality included in Sling products in the past will be moved to DISH products in the future. But don’t hold your breath.

If you somehow managed to purchase a Slingbox product any time recently, Sling suggested that “The Slingbox warranty is for 1 year,” and that “if you purchased your Slingbox from an authorized dealer in the United States or Canada and have a copy of the receipt, your warranty will be covered under the original terms and conditions.”

Slingbox will not be shipping any product from this point forward. Per the release this week, “most authorized resellers have been out of stock for a couple years.”

If you’re using any sort of Sling product right now, be it hardware or software, we’d recommend you stop as soon as possible. Once development ends on a product (especially software), malicious parties can potentially gain a foothold in the security of its services. As Sling winds down its services, it will be far less attentive to users’ concerns than it was in the past. Time to look for Slingbox alternatives – starting with NVIDIA SHIELD Android TV.
https://www.slashgear.com/slingbox-d...ting-09646412/





HDD Components Maker Hoya Describes 22TB & 24TB Hard Drives

Hoya: 22TB & 24TB HDDs will need more platters or HAMR
Anton Shilov

Hoya, a maker of glass substrates for hard drive platters, has revealed some insights regarding the plans of HDD makers. The company is confident that demand for glass-media hard disk drives will increase no matter which magnetic recording technology HDD producers adopt. Meanwhile, Hoya says that microwave assisted magnetic recording (MAMR) technology will provide only limited improvements in recording density.

Competing Technologies

Perpendicular magnetic recording (PMR) technology, which has served the industry for well over a decade, is approaching its physical areal density limits. Technologies like two-dimensional magnetic recording (TDMR) can slightly increase capabilities of PMR without requiring any changes to the software stack of the datacenter. Usage of shingled magnetic recording (SMR) can increase areal density even further, but owners of datacenters must tailor their software for this technology.

To continue using conventional magnetic recording (CMR) methods HDD makers need to adopt various types of energy-assisted magnetic recording technologies (EAMR) that use different kinds of energy to alter coercivity of magnetic disks before recording. Hard drive manufacturers have different opinions about the most viable EAMR technology for today and tomorrow. Seagate is confident that heating the media using laser (HAMR) is the best solution possible, while Toshiba and Western Digital believe that using microwaves to change coercivity of magnetic disks (MAMR) is more viable for the next several years. Furthermore, Western Digital even uses 'halfway-to-MAMR' energy-assisted perpendicular magnetic recording (ePMR) for its latest HDDs. Meanwhile, everyone agrees that HAMR is the best option for the long term.

HAMR requires new heads and an immediate transition to glass platters with an all-new coating, whereas MAMR only needs new heads and can continue using aluminum media with a known coating. Even if HAMR offers a higher areal density than MAMR, it is possible to increase the platter count to expand the capacity of a MAMR drive to match that of a HAMR-based HDD. There is a catch though: thin MAMR platters will have to rely on a glass substrate.

Hoya: MAMR Is Progressing Slower Than Expected

As a maker of glass substrates, Hoya does not care which magnetic recording technology is used by a particular HDD as long as it uses glass platters. The company says that to build the 22 TB and 24 TB drives already on drive makers' roadmaps, manufacturers need to either use more than nine platters or adopt HAMR technology.

"For the following 20 TB models, HDD manufacturers are trying to maintain the current layer count without increases," said Eiichiro Ikeda, CTO of Hoya. "However, roadmaps going forward for 22 TB and 24 TB and such show layer increases or adopting HAMR. Development is unchanged here with regard to all customers."

As it turns out, adopters of MAMR are shifting their efforts towards HAMR as microwave assisted recording technology provides limited benefits as far as areal recording density is concerned, according to Hoya.

"Despite ongoing delays in our customers' HAMR installation timeframes, the MAMR camp is shifting toward HAMR development due to limited improvements expected in recording density, with backup development moving toward more layers," said Ikeda. "Therefore, there is no change in our scenario that glass substrates will be necessary regardless, since to increase HDD capacity, HAMR would be used to improve the recording density, and/or layer count would increase to increase area."

Western Digital started to ship its 20 TB SMR HDDs for revenue just last month, whereas Seagate says it is on track to ship its 20 TB HAMR drives by the end of the year. So far, neither company has disclosed when it plans to ship 22 TB or 24 TB HDDs, but some market observers believe that 24 TB drives will be available in 2022.
https://www.tomshardware.com/news/hoya-hdd-22tb-24tb





Micron Wants to Kill Hard Disk Drives with New Super Cheap Flash Memory

176-layer NAND delivers a 37% improvement on current technology
Desire Athow

Micron has just unveiled its next generation 176-layer 3D NAND. The new chip offers a 37% improvement on its nearest competitor, Kioxia/Western Digital’s 112-layer BiCS5.

The company says the new NAND will improve both read and write latency by more than a third compared to last generation’s 96-layer floating-gate NAND, which could mean faster and cheaper solid state drives.

The manufacturer also claims to offer a die that’s about 30% smaller than competitive offerings and hits a whopping 1.6 gigatransfers per second (1600 MT/s) on the Open NAND Flash interface bus - a double digit improvement on past generations.

3D NAND

And here’s the kicker: the 3D NAND chips are already in volume production and shipping to customers, including in Micron's own Crucial SSD product lines.

Ultimately, what the announcement means for the end user is more power efficient, faster, smaller and cheaper SSD designs. For the datacenter audience, meanwhile, the new chips will deliver endurance improvements, which are particularly beneficial in write-intensive use cases.

The 100TB ExaDrive DC SSD, the largest solid state drive currently on the market, is likely to use 64-layer SLC NAND, which explains its eye-watering price of $40,000. For comparison, the cheaper ExaDrive NL 64TB from the same company is likely to use 96-layer TLC NAND chips, which slashes its price to a mere $10,900 - less than half the cost per TB.
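
Working out the cost per terabyte from the list prices quoted above makes the gap explicit; the sketch below simply divides price by capacity.

```python
# Cost-per-terabyte comparison using the list prices quoted above.
drives = {
    "ExaDrive DC 100TB (SLC-class NAND)": (100, 40_000),
    "ExaDrive NL 64TB (TLC-class NAND)":  (64, 10_900),
}
for name, (capacity_tb, price_usd) in drives.items():
    print(f"{name}: ${price_usd / capacity_tb:,.0f} per TB")
# ~$400/TB for the 100TB drive vs ~$170/TB for the 64TB drive --
# well under half the cost per terabyte, as noted above.
```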

Micron’s new technology could either mean more SSD for your money (e.g. 100TB for $10,000) or far lower price points. Ultimately, the firm wants to drive aggressive, industry-leading cost reductions that will hopefully trickle down to the end user.

That means hard disk drive vendors like Seagate, Toshiba or Western Digital may want to watch their backs in the datacenter/nearline market. Micron's new chips could usher in a new generation of extremely high capacity 3.5-inch SSDs at relatively low entry points to replace existing spinning drives.
https://www.techradar.com/news/apple...books-incoming





Western Digital's Ultrastar DC ZN540 Is the World's First ZNS SSD

Western Digital's Ultrastar DC ZN540 can replace four conventional SSDs
Anton Shilov

Western Digital is one of the most vocal proponents of the Zoned Namespaces (ZNS) storage initiative, so it is not surprising that the company this week became the first SSD maker to start sampling a ZNS SSD. When used properly, the Ultrastar DC ZN540 drive can replace up to four conventional SSDs, provide higher performance, and improve quality of service (QoS).

ZNS SSDs have a number of advantages over traditional block-based SSDs. For one, they place data sequentially into zones and have better control over write amplification, since the software 'knows' what it is dealing with. This means that ZNS SSDs don't need as much overprovisioning as traditional enterprise drives. Many enterprise drives rated for 3 DWPD (drive writes per day) reserve up to 28% of their raw capacity for overprovisioning; because ZNS needs as little as a tenth of that, usable SSD capacity increases significantly.
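
To see what that means for usable space, the sketch below compares a drive that reserves 28% of its raw capacity with one reserving roughly a tenth of that, per the figures above; the raw capacity chosen is an arbitrary example.

```python
# Usable-capacity comparison implied above: 28% overprovisioning for a
# conventional enterprise SSD vs. roughly a tenth of that for ZNS.
# The 16 TB raw figure is an arbitrary example.
raw_tb = 16
conventional_usable = raw_tb * (1 - 0.28)   # ~11.5 TB usable
zns_usable = raw_tb * (1 - 0.028)           # ~15.6 TB usable
gain = zns_usable / conventional_usable - 1
print(f"ZNS exposes about {gain:.0%} more usable capacity from the same NAND")
# -> roughly 35% more usable space
```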

Second, since ZNS manages large zones rather than a bunch of 4KB blocks and doesn't need to perform garbage collection as often as traditional SSDs, it also improves real-world read and write performance.

Finally, ZNS substantially reduces DRAM requirements.

Western Digital's Ultrastar DC ZN540 SSD is based on the company's own dual-port NVMe 1.3c-compliant controller, as well as 96-layer 3D TLC NAND memory. The controller fully supports the ZNS Command Set 1.0 specification, and the drive is ready to be deployed by companies running software that supports Zoned Namespaces. The drives come in an industry-standard U.2 form factor, so they're drop-in compatible with existing servers.

ZNS promises to be particularly useful for hard drives based on shingled magnetic recording (SMR) technology, as well as SSDs powered by 3D QLC NAND. Note that since 3D QLC NAND has yet to gain traction in the datacenter, Western Digital decided to use proven 3D TLC memory.

Western Digital claims that because of all the advantages that ZNS brings to SSDs, the Ultrastar DC ZN540 and its successors increase drive utilization and reduce total cost of ownership (TCO), which is something every operator of a datacenter cares about.

Right now, the Ultrastar DC ZN540 is sampling with select customers only, so to a large degree this is a test vehicle. It remains to be seen whether these drives will ever be deployed more broadly.
https://www.tomshardware.com/news/we...-first-zns-ssd





Rights Activists Slam EU Plan for Access to Encrypted Chats
Frank Jordans

Digital rights campaigners on Monday criticized a proposal by European Union governments that calls for communications companies to provide authorities with access to encrypted messages.

The plan, first reported by Austrian public broadcaster FM4, reflects concern among European countries that police and intelligence services can’t easily monitor online chats that use end-to-end encryption, such as Signal or WhatsApp.

A draft proposal dated Nov. 6 and circulated by the German government, which holds the EU’s rotating presidency, proposes creating a “better balance” between privacy and crime fighting online.

The confidential draft, obtained independently by The Associated Press, states that “competent authorities must be able to access data in a lawful and targeted manner, in full respect of fundamental rights and the data protection regime, while upholding cybersecurity.”

It adds that “technical solutions for gaining access to encrypted data must comply with the principles of legality, transparency, necessity and proportionality.”

German Left party lawmaker Anke Domscheit-Berg accused European governments of using anxiety caused by recent extremist attacks, such as those in France and Austria, as an excuse for greater surveillance measures, and argued that providing authorities with a key to unlock all forms of encrypted communications would pose a grave security risk to all users.

“Anyone who finds an open back door into my house can enter it, the same is true for back doors in software,” Domscheit-Berg said. “The proposed EU regulation is an attack on the integrity of digital infrastructure and therefore very dangerous.”

Patrick Breyer, a member of the European Parliament with Germany’s Pirate Party, said enabling governments to intercept encrypted communications “would be the end of secure encryption altogether and would open back doors also for hackers, foreign intelligence, etc.”

The proposal, which would still need to be adopted by EU governments later this month, is not legally binding. But it sets out the political position that EU member states want the bloc’s executive commission to pursue in its dealings with technology companies and the European Parliament.
https://apnews.com/article/technolog...9f48e38d379a94





Computer Scientists Achieve ‘Crown Jewel’ of Cryptography

A cryptographic master tool called indistinguishability obfuscation has for years seemed too good to be true. Three researchers have figured out that it can work.
Kiel Mutschelknaus

In 2018, Aayush Jain, a graduate student at the University of California, Los Angeles, traveled to Japan to give a talk about a powerful cryptographic tool he and his colleagues were developing. As he detailed the team’s approach to indistinguishability obfuscation (iO for short), one audience member raised his hand in bewilderment.

“But I thought iO doesn’t exist?” he said.

At the time, such skepticism was widespread. Indistinguishability obfuscation, if it could be built, would be able to hide not just collections of data but the inner workings of a computer program itself, creating a sort of cryptographic master tool from which nearly every other cryptographic protocol could be built. It is “one cryptographic primitive to rule them all,” said Boaz Barak of Harvard University. But to many computer scientists, this very power made iO seem too good to be true.

Computer scientists set forth candidate versions of iO starting in 2013. But the intense excitement these constructions generated gradually fizzled out, as other researchers figured out how to break their security. As the attacks piled up, “you could see a lot of negative vibes,” said Yuval Ishai of the Technion in Haifa, Israel. Researchers wondered, he said, “Who will win: the makers or the breakers?”

“There were the people who were the zealots, and they believed in [iO] and kept working on it,” said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley. But as the years went by, she said, “there was less and less of those people.”

Now, Jain — together with Huijia Lin of the University of Washington and Amit Sahai, Jain’s adviser at UCLA — has planted a flag for the makers. In a paper posted online on August 18, the three researchers show for the first time how to build indistinguishability obfuscation using only “standard” security assumptions.

All cryptographic protocols rest on assumptions — some, such as the famous RSA algorithm, depend on the widely held belief that standard computers will never be able to quickly factor the product of two large prime numbers. A cryptographic protocol is only as secure as its assumptions, and previous attempts at iO were built on untested and ultimately shaky foundations. The new protocol, by contrast, depends on security assumptions that have been widely used and studied in the past.

“Barring a really surprising development, these assumptions will stand,” Ishai said.

While the protocol is far from ready to be deployed in real-world applications, from a theoretical standpoint it provides an instant way to build an array of cryptographic tools that were previously out of reach. For instance, it enables the creation of “deniable” encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent, and “functional” encryption, in which you can give chosen users different levels of access to perform computations using your data.

The new result should definitively silence the iO skeptics, Ishai said. “Now there will no longer be any doubts about the existence of indistinguishability obfuscation,” he said. “It seems like a happy end.”

The Crown Jewel

For decades, computer scientists wondered if there is any secure, all-encompassing way to obfuscate computer programs, allowing people to use them without figuring out their internal secrets. Program obfuscation would enable a host of useful applications: For instance, you could use an obfuscated program to delegate particular tasks within your bank or email accounts to other individuals, without worrying that someone could use the program in a way it wasn’t intended for or read off your account passwords (unless the program was designed to output them).

But so far, all attempts to build practical obfuscators have failed. “The ones that have come out in real life are ludicrously broken, … typically within hours of release into the wild,” Sahai said. At best, they offer attackers a speed bump, he said.

In 2001, bad news came on the theoretical front too: The strongest form of obfuscation is impossible. Called black box obfuscation, it demands that attackers should be able to learn absolutely nothing about the program except what they can observe by using the program and seeing what it outputs. Some programs, Barak, Sahai and five other researchers showed, reveal their secrets so determinedly that they are impossible to obfuscate fully.

These programs, however, were specially concocted to defy obfuscation and bear little resemblance to real-world programs. So computer scientists hoped there might be some other kind of obfuscation that was weak enough to be feasible but strong enough to hide the kinds of secrets people actually care about. The same researchers who showed that black box obfuscation is impossible proposed one possible alternative in their paper: indistinguishability obfuscation.

On the face of it, iO doesn’t seem like an especially useful concept. Instead of requiring that a program’s secrets be hidden, it simply requires that the program be obfuscated enough that if you have two different programs that perform the same task, you can’t distinguish which obfuscated version came from which original version.

But iO is stronger than it sounds. For example, suppose you have a program that carries out some task related to your bank account, but the program contains your unencrypted password, making you vulnerable to anyone who gets hold of the program. Then — as long as there is some program out there that could perform the same task while keeping your password hidden — an indistinguishability obfuscator will be strong enough to successfully mask the password. After all, if it didn’t, then if you put both programs through the obfuscator, you’d be able to tell which obfuscated version came from your original program.

Over the years, computer scientists have shown that you can use iO as the basis for almost every cryptographic protocol you could imagine (except for black box obfuscation). That includes both classic cryptographic tasks like public key encryption (which is used in online transactions) and dazzling newcomers like fully homomorphic encryption, in which a cloud computer can compute on encrypted data without learning anything about it. And it includes cryptographic protocols no one knew how to build, like deniable or functional encryption.

“It really is kind of the crown jewel” of cryptographic protocols, said Rafael Pass of Cornell University. “Once you achieve this, we can get essentially everything.”

In 2013, Sahai and five co-authors proposed an iO protocol that splits up a program into something like jigsaw puzzle pieces, then uses cryptographic objects called multilinear maps to garble the individual pieces. If the pieces are put together correctly, the garbling cancels out and the program functions as intended, but each individual piece looks meaningless. The result was hailed as a breakthrough and prompted dozens of follow-up papers. But within a few years, other researchers showed that the multilinear maps used in the garbling process were not secure. Other iO candidates came along and were broken in their turn.

“There was some worry that maybe this is just a mirage, maybe iO is simply impossible to get,” Barak said. People started to feel, he said, that “maybe this whole enterprise is doomed.”

Hiding Less to Hide More

In 2016, Lin started exploring whether it might be possible to get around the weaknesses of multilinear maps by simply demanding less of them. Multilinear maps are essentially just secretive ways of computing with polynomials — mathematical expressions made up of sums and products of numbers and variables, like 3xy + 2yz². These maps, Jain said, entail something akin to a polynomial calculating machine connected to a system of secret lockers containing the values of the variables. A user who drops in a polynomial that the machine accepts gets to look inside one final locker to find out whether the hidden values make the polynomial evaluate to 0.

For the scheme to be secure, the user shouldn’t be able to figure out anything about the contents of the other lockers or the numbers that were generated along the way. “We would like that to be true,” Sahai said. But in all the candidate multilinear maps people could come up with, the process of opening the final locker revealed information about the calculation that was supposed to stay hidden.
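
As a purely illustrative, non-cryptographic picture of that interface, the toy class below holds some hidden values and answers only whether a submitted polynomial evaluates to zero on them. Real multilinear maps enforce this with heavy cryptography, which this sketch does not attempt.

```python
# Toy picture of the "locker machine" interface described above: the user
# learns only whether a polynomial evaluates to zero on the hidden values.
# This is not a cryptographic construction, just the interface.
class LockerMachine:
    def __init__(self, hidden_values):
        self._hidden = dict(hidden_values)    # e.g. {"x": 3, "y": -1, "z": 2}

    def evaluates_to_zero(self, polynomial):
        # `polynomial` is a callable over the hidden variables, by name.
        return polynomial(**self._hidden) == 0

machine = LockerMachine({"x": 3, "y": -1, "z": 2})
print(machine.evaluates_to_zero(lambda x, y, z: 3*x*y + 2*y*z**2))  # -17 != 0 -> False
print(machine.evaluates_to_zero(lambda x, y, z: x + y - 2))         # 3 - 1 - 2 = 0 -> True
```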

Since the proposed multilinear map machines all had security weaknesses, Lin wondered if there was a way to build iO using machines that don’t have to compute as many different kinds of polynomials (and therefore might be easier to build securely). Four years ago, she figured out how to build iO using only multilinear maps that compute polynomials whose “degree” is 30 or less (meaning that every term is a product of at most 30 variables, counting repeats). Over the next couple of years, she, Sahai and other researchers gradually figured out how to bring the degree down even lower, until they were able to show how to build iO using just degree-3 multilinear maps.

On paper, it looked like a vast improvement. There was just one problem: From a security standpoint, “degree 3 was actually as broken” as the machines that can handle polynomials of every degree, Jain said.

The only multilinear maps researchers knew how to build securely were those that computed polynomials of degree 2 or less. Lin joined forces with Jain and Sahai to try to figure out how to construct iO from degree-2 multilinear maps. But “we were stuck for a very, very long time,” Lin said.

“It was kind of a gloomy time,” Sahai recalled. “There’s a graveyard filled with all the ideas that didn’t work.”

Eventually, though — together with Prabhanjan Ananth of the University of California, Santa Barbara and Christian Matt of the blockchain project Concordium — they came up with an idea for a sort of compromise: Since iO seemed to need degree-3 maps, but computer scientists only had secure constructions for degree-2 maps, what if there was something in between — a sort of degree-2.5 map?

The researchers envisioned a system in which some of the lockers have clear windows, so the user can see the values contained within. This frees the machine from having to protect too much hidden information. To strike a balance between the power of higher-degree multilinear maps and the security of degree-2 maps, the machine is allowed to compute with polynomials of degree higher than 2, but there’s a restriction: The polynomial must be degree 2 on the hidden variables. “We’re trying to not hide as much” as in general multilinear maps, Lin said. The researchers were able to show that these hybrid locker systems can be constructed securely.
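
As a rough illustration of the "degree 2 on the hidden variables" restriction, here is a small sketch using the sympy library; the variable names are hypothetical and this only checks degrees, it is not the construction itself.

    from sympy import symbols, Poly

    x, y = symbols("x y")        # hidden variables (opaque lockers)
    a, b = symbols("a b")        # public variables (lockers with clear windows)

    expr = a*b*x*y + 3*a*x + b   # degree 4 counting every variable, but...
    print(Poly(expr, x, y).total_degree())   # degree in the hidden variables only: 2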

But to get from these less powerful multilinear maps to iO, the team needed one last ingredient: a new kind of pseudo-randomness generator, something that expands a string of random bits into a longer string that still looks random enough to fool computers. That’s what Jain, Lin and Sahai have figured out how to do in their new paper. “There was a wonderful last month or so where everything came together in a flurry of phone calls,” Sahai said.
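
To give a flavour of what a pseudo-randomness generator does in general, here is a minimal hash-based sketch; it only illustrates the stretch-a-seed idea and is not the special structured generator constructed in the new paper.

    import hashlib

    def toy_prg(seed: bytes, out_len: int) -> bytes:
        # Stretch a short random seed into a longer, random-looking byte string.
        out = b""
        counter = 0
        while len(out) < out_len:
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:out_len]

    print(toy_prg(b"short random seed", 64).hex())   # 17 bytes in, 64 bytes out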

The result is an iO protocol that finally avoids the security weaknesses of multilinear maps. “Their work looks absolutely beautiful,” Pass said.

The scheme’s security rests on four mathematical assumptions that have been widely used in other cryptographic contexts. And even the assumption that has been studied the least, called the “learning parity with noise” assumption, is related to a problem that has been studied since the 1950s.

There is likely only one thing that could break the new scheme: a quantum computer, if a full-power one is ever built. One of the four assumptions is vulnerable to quantum attacks, but over the past few months a separate line of work has emerged, in three papers by Pass and other researchers, offering a different potential route to iO that might be secure even against quantum attacks. These versions of iO rest on less established security assumptions than the ones Jain, Lin and Sahai used, several researchers said. But it is possible, Barak said, that the two approaches could be combined in the coming years to create a version of iO that rests on standard security assumptions and also resists quantum attacks.

Jain, Lin and Sahai’s construction will likely entice new researchers into the field to work on making the scheme more practical and to develop new approaches, Ishai predicted. “Once you know that something is possible in principle, it makes it psychologically much easier to work in the area,” he said.

Computer scientists still have much work to do before the protocol (or some variation on it) can be used in real-world applications. But that is par for the course, researchers said. “There’s a lot of notions in cryptography that, when they first came out, people were saying, ‘This is just pure theory, [it] has no relevance to practice,’” Pass said. “Then 10 or 20 years later, Google is implementing these things.”

The road from a theoretical breakthrough to a practical protocol can be a long one, Barak said. “But you could imagine,” he said, “that maybe 50 years from now the crypto textbooks will basically say, ‘OK, here is a very simple construction of iO, and from that we’ll now derive all of the rest of crypto.’”
https://www.quantamagazine.org/compu...aphy-20201110/





'It's the Screams of the Damned!' The Eerie AI World of Deepfake Music

Artificial intelligence is being used to create new songs seemingly performed by Frank Sinatra and other dead stars. ‘Deepfakes’ are cute tricks – but they could change pop for ever
Derek Robertson

“It’s Christmas time! It’s hot tub time!” sings Frank Sinatra. At least, it sounds like him. With an easy swing, cheery bonhomie, and understated brass and string flourishes, this could just about pass as some long-lost Sinatra demo. Even the voice – that rich tone once described as “all legato and regrets” – is eerily familiar, even if it does lurch between keys and, at times, sounds as if it was recorded at the bottom of a swimming pool.

The song in question is not a genuine track, but a convincing fake created by “research and deployment company” OpenAI, whose Jukebox project uses artificial intelligence to generate music, complete with lyrics, in a variety of genres and artist styles. Along with Sinatra, they’ve done what are known as “deepfakes” of Katy Perry, Elvis, Simon and Garfunkel, 2Pac, Céline Dion and more. Having trained the model on 1.2m songs scraped from the web, complete with the corresponding lyrics and metadata, it can output raw audio several minutes long based on whatever you feed it. Input, say, Queen or Dolly Parton or Mozart, and you’ll get an approximation out the other end.

“As a piece of engineering, it’s really impressive,” says Dr Matthew Yee-King, an electronic musician, researcher and academic at Goldsmiths. (OpenAI declined to be interviewed.) “They break down an audio signal into a set of lexemes of music – a dictionary if you like – at three different layers of time, giving you a set of core fragments that is sufficient to reconstruct the music that was fed in. The algorithm can then rearrange these fragments, based on the stimulus you input. So, give it some Ella Fitzgerald for example, and it will find and piece together the relevant bits of the ‘dictionary’ to create something in her musical space.”

Admirable as the technical achievement is, there’s something horrifying about some of the samples, particularly those of artists who have long since died – sad ghosts lost in the machine, mumbling banal cliches. “The screams of the damned” reads one comment below that Sinatra sample; “SOUNDS FUCKING DEMONIC” reads another. We’re down in the Uncanny Valley.

Deepfake music is set to have wide-ranging ramifications for the music industry as more companies apply algorithms to music. Google’s Magenta Project – billed as “exploring machine learning as a tool in the creative process” – has developed several open source APIs that allow composition using entirely new, machine-generated sounds, or human-AI co-creations. Numerous startups, such as Amper Music, produce custom, AI-generated music for media content, complete with global copyright. Even Spotify is dabbling; its AI research group is led by François Pachet, former head of Sony Music’s computer science lab.

It’s not hard to foresee, though, how such deepfakes could lead to ethical and intellectual property issues. If you didn’t want to pay the market rate for using an established artist’s music in a film, TV show or commercial, you could create your own imitation. Streaming services could, meanwhile, pad out genre playlists with similar sounding AI artists who don’t earn royalties, thereby increasing profits. Ultimately, will streaming services, radio stations and others increasingly avoid paying humans for music?

Legal departments in the music industry are following developments closely. Earlier this year, Roc Nation filed DMCA takedown requests against an anonymous YouTube user for using AI to mimic Jay-Z’s voice and cadence to rap Shakespeare and Billy Joel. (Both are incredibly realistic.) “This content unlawfully uses an AI to impersonate our client’s voice,” said the filing. And while the videos were eventually reinstated “pending more information from the claimant”, the case – the first of its kind – rumbles on.

Roc Nation declined to comment on the legal implications of AI impersonation, as did several other major labels contacted by the Guardian: “As a public company, we have to exercise caution when discussing future facing topics,” said one anonymously. Even UK industry body the BPI refused to go on the record with regard to how the industry will deal with this brave new world and what steps might be taken to protect artists and the integrity of their work. The IFPI, an international music trade body, did not respond to emails.

Perhaps the reason is, in the UK at least, there’s a worry that there’s not actually a basis for legal protection. “With music there are two separate copyrights,” says Rupert Skellett, head of legal for Beggars Group, which encompasses indie labels 4AD, XL, Rough Trade and more. “One in the music notation and the lyrics – ie the song – and a separate one in the sound recording, which is what labels are concerned with. And if someone hasn’t used the actual recording” – if they’ve created a simulacrum using AI – “you’d have no legal action against them in terms of copyright with regards to the sound recording.”

There’d be a potential cause of action with regards to “passing off” the recording, but, says Skellett, the burden of proof is onerous, and such action would be more likely to succeed in the US, where legal protections exist against impersonating famous people for commercial purposes, and where plagiarism cases like Marvin Gaye’s estate taking on Blurred Lines have succeeded. UK law has no such provisions or precedents, so even the commercial exploitation of deepfakes, if the creator was explicit about their nature, might not be actionable. “It would depend on the facts of each case,” Skellett says.

Some, however, are excited by the creative possibilities. “If you’ve got a statistical model of millions of songs, you can ask the algorithm: what haven’t you seen?” says Yee-King. “You can find that blank space, and then create something new.” Mat Dryhurst, an artist and podcaster who has spent years researching and working with AI and associated technology, says: “The closest analogy we see is to sampling. These models allow a new dimension of that, and represent the difference between sampling a fixed recording of Bowie’s voice and having Bowie sing whatever you like – an extraordinary power and responsibility.”

Deepfakes also pose deeper questions: what makes a particular artist special? Why do we respond to certain styles or types of music, and what happens when that can be created on demand? Yee-King imagines machines able to generate the perfect piece of music for you at any time, based on settings that you select – something already being pioneered by the startup Endel – as well as pop stars using an AI listening model to predict which songs will be popular or what different demographics respond to. “Just feeding people an optimised stream of sound,” he says, “with artists taken out of the loop completely.”

But if we lose all sense of emotional investment in what artists do – and in the human side of creation – we will lose something fundamental to music. “These systems are trained on human expression and will augment it,” says Dryhurst. “But the missing piece of the puzzle is finding ways to compensate people, not replace them.”
https://www.theguardian.com/music/20...-frank-sinatra





The Uneasy Afterlife of Our Dazzling Trash

Where do CDs go to die?
Sandra E. Garcia

Every day, for the past 14 years, Bruce Bennett has received packages filled with CDs. Sometimes a few at a time and sometimes in packs of hundreds, shiny old discs arrive at his CD Recycling Center of America in Salem, N.H., a 300-foot blue trailer tucked behind a commercial strip, to ascend to the CD afterlife.

The CD recycling process requires Mr. Bennett, 55, to store a truckload, or approximately 44,000 pounds, of CDs in a warehouse before the discs can be granulated into raw polycarbonate plastic, resulting in a white and clear powdery material that glints and resembles large snowflake crystals stuck together.

The material, which takes one million years to decompose in a landfill, can eventually be used to mold durable items for cars, home building materials and eyeglasses.

But that’s assuming anybody buys the raw material.

The polycarbonate granules used to be sold mostly to China, where the United States sent the bulk of its recycling until 2018 before China restricted imports of mixed paper and most plastic. The price that China was willing to pay per pound of granulated polycarbonate began to dip in 2008, Mr. Bennett said, and by 2011 it had plummeted.

Mr. Bennett did find polycarbonate buyers in India, but now, because of lockdowns caused by the pandemic, he doesn’t break even. Still, as a self-professed lifelong environmentalist — he began to recycle CDs in 1988 because, as a CD manufacturer, he had to learn how to properly dispose of damaged batches — Mr. Bennett is hopeful that CD recycling will catch on.

“I realized that I know how to recycle this,” Mr. Bennett said in an interview. “But I don’t think the world knows.”

A CD’s Journey

CDs may seem like a relic, but when they entered consumer homes in the 1980s, they were a revelation in information sharing.

“In the early ’80s, information storage was mainly in magnetic tape and magnetic devices,” said Kees Immink, who was one of eight engineers to create the CD in 1979. “The CD was groundbreaking.”

His team had started with the goal of making a disc capable of storing music longer than Beethoven’s “Symphony No. 9,” which is close to 70 minutes long. What resulted was something that could save “other digital media and essentially all software,” he recalled.

“Mechanical engineers who produced excellent gramophones became instantly obsolete,” Mr. Immink said.

CDs were less than half the size of 12-inch vinyls, and could rewind or skip forward at the press of a button, unlike tapes, which required winding. Consumers could also travel with their CDs, thanks to Sony’s invention of the portable CD player in 1984. The sound quality was better, and discs could hold a lot more information than cassettes could.

CDs became ubiquitous: In the 1990s, AOL sent them to potential internet subscribers. In the mid-’90s, makers of video games began to shift away from cartridges and toward discs. By 2000, more than 900 million music CDs were sold, a record number that was never surpassed again, according to the Recording Industry Association of America. (Eminem, Destiny’s Child and Britney Spears were all top sellers.)

And then, just a year later, Apple released its first iPod, which allowed users to carry 1,000 CD-quality songs in a six-ounce device in their pocket. Compact discs began their shift from innovative and covetable to clunky. This month brings another small blow to CDs, as Sony and Microsoft release the latest editions of their game consoles, the PlayStation 5 and the Xbox Series X, in versions that lack disc drives.

Mr. Immink — who now researches ways to store information in DNA — said that he has no feelings about the fact that the CD is slowly phasing out of production and use. It’s a cycle he understands. Just as he made the engineers of the gramophone obsolete, it is now his turn.

“It was a long time ago,” Mr. Immink said. “All those people that worked so hard on the radio are now obsolete. My colleagues and I had so much fun and we laughed a lot while we created the CD. We knew we were making history.”

Who Will Buy Broken-Up CDs?

Many organizations, like GreenDisk, provide drop boxes for castoff CDs and other outdated tech. David Beschen founded GreenDisk in 1992, after a stint marketing Microsoft products, and he still runs it.

“I saw an opportunity to basically clean up some of the stuff I had been responsible for marketing,” Mr. Beschen said. “All of this stuff was just being incinerated or buried.”

He ships the CDs accumulated from the tech drop boxes to the National Industries for the Blind, where they are sorted and ground into polycarbonate flakes. That raw plastic is then shipped to manufacturers to make plastic materials to sell, including spools for producing 3-D printing filament. The filament is then sent back to the N.I.B., where it is packaged to be sold to the federal government, Mr. Beschen said. (He said the government has used the 3-D filament for many things, including repairing broken parts on Humvees and nuclear missiles.)

GreenDisk also works with companies and institutions including Warner Brothers, Disney and the Library of Congress to dispose of CDs, because GreenDisk will delete the information from them first.

“Once a CD is in a trash dump, it can be published to the public domain and people can legally take that, sell it and re-market it as well,” Mr. Beschen said. Industries were burning millions of units of CDs to avoid that, he added.

In a global sense, recycling CDs is not a big environmental priority right now, according to Judith Enck, a former E.P.A. regional administrator, who founded Beyond Plastics, an anti-plastic project based at Bennington College in Vermont.

“Plastic recycling has been an abysmal failure,” she said, adding that the rate of plastics recycling in the United States has been consistently low. “That is an issue that definitely needs attention.”

“You look at other materials, like cardboard and glass and aluminum, and that’s all included in curbside recycling programs because there are businesses that will buy all of that for a reliable market,” Ms. Enck said. “There just aren’t markets for this type of plastic.”

So, for now, old CDs languish in basements or attics, or just end up with other plastics — in the trash.

Glory Days

In a recent interview, Janice Brandt, a former senior consultant at AOL and the marketing guru behind the company’s 1990s campaign that produced millions of CDs for potential customers, reflected on how much has changed, technologically, in just a few decades.

The AOL campaign, which at one point in the late 1990s had a budget of $750 million, was a huge moneymaker for AOL that brought millions of new users to the internet. Ms. Brandt said she thought that probably every other CD in existence is an AOL CD. (Mr. Bennett still receives AOL CDs to be recycled daily at his plant.)

“I thought that the best way was for people to actually see it,” Ms. Brandt said, of what AOL had to offer. She orchestrated the placement of CDs in magazines, college campuses, offices, bookstores and banks. At one point AOL was flash freezing CDs and packaging them with Omaha Steaks.

She knows how crazy that sounds, and was thoughtful about the possible environmental impact of her marketing. But Ms. Brandt has no regrets. “It really is remarkable, and those things don’t sound so remarkable now because it is all at our fingertips,” she said, of the internet.

At one point, an AOL chat room or instant message was a cutting-edge virtual gathering. The fact that virtual society is so much more advanced today makes it easy to forget how far we have come.

“We drive around, but we don’t have a sense of what it took for us to get to the first car,” Ms. Brandt said.
https://www.nytimes.com/2020/11/07/s...ing-trash.html





Origins of the youtube-dl Project
Ricardo García

As you may know, at the time of writing youtube-dl’s repository on GitHub is blocked due to a DMCA takedown letter that GitHub received on behalf of the RIAA. While I cannot comment on the current maintainers' plans or ongoing discussions, in light of the claims made in that letter I thought it would be valuable, as the project creator and initial maintainer, to put the first years of youtube-dl in writing.

Copper thieves

All good stories need at least one villain, so I have arbitrarily chosen copper thieves as the villains of the story that set in motion what youtube-dl is today. Back in 2006 I was living in a town 5 to 10 kilometers away from Avilés, itself a small city in northern Spain. While people in Avilés enjoyed good infrastructure and services, including cable and ADSL Internet access, the area I lived in lacked those advantages. I was too far away from the telephone exchange to get ADSL, and copper thieves had been stealing the wires along the way to it for years, causing telephone service outages from time to time and making the telephone company replace the stolen wires with weaker, thinner ones, knowing they would likely be stolen again. This had been going on for several years at that point.

This meant that my only option for home Internet access had been a dial-up connection over a 56k V.90 modem. In fact, connection quality was so poor that I had to limit the modem to 33.6 kbps mode just so the connection would be stable. Actual download speeds rarely surpassed 4 KB/sec. YouTube was gaining popularity at the time, to the point that it was purchased by Google at the end of that year.

Up all night to get some bits

Watching any YouTube video on the kind of connection I described above was certainly painful, as you can imagine. Any video that was moderately big would take ages to download. For example, a short 10 MB video would take, if you do the math, roughly 40 minutes to download, making streaming impossible. A longer and higher-quality video would take several hours and render the connection unusable for other purposes while you waited for it to become available, not to mention the possibility of the connection being interrupted and having to start the download process again. Now imagine liking a specific video a lot after watching it and wanting to watch it a second or third time. Going through that process again was almost an act of masochism.
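
For the curious, the arithmetic behind that estimate, with the sizes and speeds mentioned above:

    size_kb = 10 * 1024      # a "short" 10 MB video
    speed_kb_s = 4           # real-world dial-up throughput, as above
    print(size_kb / speed_kb_s / 60)   # about 43 minutes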

This situation made me interested in the possibility of downloading the videos I was trying to watch: if the video was interesting, having a copy meant I could watch it several times easily. Also, if the downloader was any good, maybe the download process could be resumed if the connection was interrupted, as it frequently was.

At the time, there were other solutions for downloading videos from YouTube, including a quite popular Greasemonkey script. By pure chance, none of the few I tested were working when I tried them, so I decided to explore the possibility of creating my own tool. And that is, more or less, how youtube-dl was born. I made it a command-line program so it would be easy for me to use, and I wrote it in Python because the extensive standard library made the job easy, with the nice side effect that the tool would be platform independent.

An Ethereal start

The initial version of the program only worked for YouTube videos. It had almost no internal design whatsoever because none was needed: it was a simple script that went straight to the point. The line count was a mere 223, of which only 143 were actual lines of code, 44 were comments and 36 were blank. The name was chosen out of pure convenience: youtube-dl was an obvious name, hard to forget, and it could be intuitively typed as “Y-O-U-TAB” in my terminal.

Having been using Linux for several years at that point, I decided to publish the program under a free software license (MIT for those first versions) just in case someone found it useful. Back then, GitHub did not exist and we had to “make do” with SourceForge, which had a rather tedious form you needed to fill in to create a new project. So, instead of going to SourceForge, I quickly published it on the web space my Internet provider gave me. While unusual today, it was common back then for ISPs to give you an email address and some web space you could upload files to over FTP, so you could have your own personal website. The first version ever made public was 2006.08.08, although I had probably been using the program for a few weeks by that point.

To create the program, I studied what the web browser was doing when watching a YouTube video with Firefox. If I recall correctly, Firefox didn’t yet have the development tools it has today for analyzing network activity. Connections were mostly plain HTTP, and Wireshark, known as “Ethereal” until that year, proved invaluable for inspecting the network traffic coming in and out of my box when loading a YouTube video. I wrote youtube-dl with the specific goal of doing the same things the web browser did to retrieve the video. It even sent a User-Agent string copied verbatim from Firefox for Linux, to make sure the site would serve the program the same version of the video pages I had studied in the browser.
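
A minimal sketch of that approach might look like the following; this is not the original youtube-dl code, and the page URL, the regular expression and the output file name are hypothetical placeholders.

    import re
    import urllib.request

    # Approximation of a Firefox-for-Linux User-Agent string of that era.
    USER_AGENT = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0"

    def fetch(url):
        # Request the page the same way the browser would, User-Agent included.
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    page = fetch("https://video.example.com/watch?v=some-id").decode("utf-8", "replace")
    match = re.search(r'"video_url"\s*:\s*"([^"]+)"', page)   # hypothetical pattern
    if match:
        with open("video.flv", "wb") as out:
            out.write(fetch(match.group(1)))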

In addition, YouTube used Adobe Flash for its player back then. Videos were served as Flash Video (FLV) files, which meant a proprietary plugin was required to watch them in the browser (many will remember the dreaded libflashplayer.so library), and that would have made any browser development tools useless anyway. The plugin was a constant source of security advisories and problems. I used a Firefox extension called Flashblock that prevented the plugin from loading by default: it replaced embedded Flash content in web pages with placeholder elements containing a clickable icon, so content was loaded only on demand and the plugin library was not used unless the user requested it.

Flashblock had two nice side effects apart from making browsing more secure. On the one hand, it removed a lot of noisy and obnoxious ads from many web pages, which could themselves be a source of security problems when served by third parties. On the other hand, it made it easier to analyze how the video player downloaded videos. I would wait until the video page had finished loading completely and then start logging traffic with Wireshark just before clicking on the embedded player's placeholder icon, allowing it to load. This way, the only traffic to analyze was the plugin downloading the video player application and the application itself downloading the video.

It’s also worth noting the Flash Player plugin back then was already downloading a copy of those videos to your hard drive (they were stored in /tmp under Linux) and many users relied on that functionality to keep a copy of them without using additional tools. youtube-dl was simply more convenient because it could retrieve the video title and name the file more appropriately in an automated way, for example.

Ahh, fresh meat!

The Flash Player plugin was eventually modified so videos wouldn't be so easy to grab. One of the first measures was to unlink the video file right after creating it, so the inode would still exist and remain available to the process using it (until it was closed) while the file stayed invisible from the file system's point of view. It was still possible to grab the file by using the /proc file system to examine the file descriptors held by the browser process, but with every one of those small steps youtube-dl became more and more convenient.
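
For the record, the /proc trick works roughly like this (a sketch assuming Linux and permission to inspect the browser process; the PID is a placeholder):

    import os
    import shutil

    def recover_open_deleted_files(pid, dest_dir="."):
        # Each entry under /proc/<pid>/fd is a symlink to whatever the process has
        # open; an unlinked-but-open file shows a target ending in " (deleted)" and
        # its contents can still be read through the descriptor.
        fd_dir = f"/proc/{pid}/fd"
        for fd in os.listdir(fd_dir):
            link = os.path.join(fd_dir, fd)
            try:
                target = os.readlink(link)
            except OSError:
                continue
            if target.endswith(" (deleted)"):
                shutil.copyfile(link, os.path.join(dest_dir, f"recovered_fd{fd}"))

    # recover_open_deleted_files(12345)   # 12345: PID of the browser process (placeholder)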

Like many free and open source enthusiasts back then, I used Freshmeat to subscribe to new releases of projects I was interested in. When I created youtube-dl, I also created a project entry for it on that website so users could easily get notifications of new releases along with a change log listing new features, fixes and improvements. Freshmeat could also be browsed to find new and interesting projects, and its front page listed the latest updates, which usually amounted to only a few dozen a day. It is only my guess, but that is probably how Joe Barr (rest in peace), an editor for Linux.com, found out about the program and decided to write an article about it back in 2006. Linux.com was a bit different then, and I think it was one of the frequently visited sites for Linux enthusiasts, together with other classics like Slashdot or Linux Weekly News. At least it was for me.

From that point on, youtube-dl’s popularity started to grow and I started getting some emails from time to time to thank me for creating and maintaining the program.

Measuring buckets of bits

Fast forward to 2008. youtube-dl's popularity had kept growing slowly, and users frequently asked me to create similar programs to download from other sites, a request I had given in to a few times. It was at that point that I decided to rewrite the program from scratch and make it support multiple video sites natively. I had some simple ideas for separating the program internals into several pieces. To simplify, the most important parts were these: one would be the file downloader, common to every website, and the other would be the information extractors, objects (classes) containing the code specific to a given video site. Given a URL or pseudo-URL, the information extractors would be queried to find out which one could handle that type of URL, and the chosen one would then be asked to extract information about the video or list of videos, with the primary goal of obtaining the video URL (or a list of video URLs with available formats) together with some other metadata such as the video title.
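
A minimal sketch of that separation, with made-up class and site names (the real youtube-dl internals differ in the details, but the shape is the one described above): a shared entry point asks each extractor whether it can handle a URL, and the matching extractor returns the metadata and direct media URL for the common downloader to fetch.

    import re

    class InfoExtractor:
        URL_PATTERN = None   # each subclass declares a regex for the URLs it handles

        @classmethod
        def suitable(cls, url):
            return bool(re.match(cls.URL_PATTERN, url))

        def extract(self, url):
            raise NotImplementedError

    class ExampleTubeIE(InfoExtractor):
        URL_PATTERN = r"https?://(?:www\.)?exampletube\.test/watch\?v=(\w+)"

        def extract(self, url):
            video_id = re.match(self.URL_PATTERN, url).group(1)
            # A real extractor would download and parse the video page here.
            return {"id": video_id,
                    "title": f"Video {video_id}",
                    "url": f"https://media.exampletube.test/{video_id}.flv"}

    EXTRACTORS = [ExampleTubeIE()]

    def extract_info(url):
        for ie in EXTRACTORS:
            if ie.suitable(url):
                return ie.extract(url)
        raise ValueError(f"no extractor can handle {url}")

    info = extract_info("https://exampletube.test/watch?v=abc123")
    print(info["title"], "->", info["url"])   # the shared downloader would fetch this URL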

I also took the chance to switch version control systems and change where the project was hosted. At that moment Git was winning the distributed version control war among open source projects, but Mercurial also had a lot of users and, having tested both, I liked it a bit more than Git. I started using it for youtube-dl and moved the project to Bitbucket, which was the natural choice: back then Bitbucket could only host Mercurial repositories, while GitHub only hosted Git repositories. Both launched in 2008 and were a breath of fresh air compared to SourceForge. The combination of per-user project namespaces (that is, your project's name had to be unique only among your own projects, not globally) with distributed version control meant you could publish a personal project on either site in a matter of minutes. In any case, migrating the project history to Git and moving to GitHub was still a couple of years in the future.

When rewriting the project I should have taken the chance to rename it, no doubt, but I didn’t want to confuse existing users and kept the name in an effort to preserve the little popularity the program had.

The technological context at home also changed a bit that year. Mobile data plans started to gain traction and, at the end of the year, I got myself a 3G modem and a data plan that, for the first time, let me browse the web at decent speeds. That didn't make me stop using youtube-dl, though. I was paying 45 euros a month for a plan capped at 5GB. Connection speed was finally great but, doing the math, I could only use an average of around 150MB a day, which meant I had to be selective about using the network and avoid big downloads when possible. youtube-dl helped a lot by saving me from downloading large video files multiple times.

Episode: a new home

Some time later, at the end of 2009, I moved and finally started living with my girlfriend (now my wife and the mother of my two children) in Avilés. For the first time, I started accessing the Internet using the type of connection and service that had been the standard for many of my friends and family for many years. I remember it was a 100/10 Mbps (down/up) cable connection with no monthly cap. That change definitely marked a turning point in how often I used youtube-dl and how much attention I paid to the project.

Not much later, I finally moved it to Git and GitHub, once the market had spoken and both tools were clearly the way to go. YouTube also started experimenting with HTML5 video, even if it wouldn't become the default option until around 2015. By 2011 I had been working a full-time job as a software engineer for several years and, in general, I was not eager to get home and code some more, tuning youtube-dl or implementing the most popular feature requests that I was probably never going to use personally.

In the second half of 2011 I was in the middle of another important personal software project and decided to step down as the youtube-dl maintainer, knowing I hadn't been up to the task for several months. Philipp Hagemeister had proved to be a great coder and had some pending pull requests on GitHub with several fixes many people were interested in. I gave him commit access to my youtube-dl repo, and that is mostly the end of the story on my side. The project's Git master branch log shows a continuous stream of commits from me until March 2011, then a jump to August 2011 to merge a fix by Philipp. Since then there has been just a single clerical commit of mine, in 2013, changing rg3.github.com to rg3.github.io in the source code; it was needed when GitHub moved user pages from USERNAME.github.com to USERNAME.github.io in order to, if I recall correctly, avoid security problems with malicious user pages being served from the official github.com domain.

While I was basically not involved as a developer of youtube-dl, for years the official project page kept sitting under my username at https://github.com/rg3/youtube-dl and https://rg3.github.io/youtube-dl/. I only had to show up when Philipp or other maintainers asked me to give commit access to additional developers, like Filippo Valsorda at the time or Sergey, one of the current maintainers. Unfortunately, in 2019 we had a small troll problem in the project issue tracker and only project owners were allowed to block users. This made us finally move the project to a GitHub organization where everyone with commit access was invited (although not everyone joined). The GitHub organization has allowed project maintainers to act more freely without me having to step in for clerical tasks every now and then.

I want to reiterate my most sincere thanks to the different project maintainers along these years, who greatly improved the code, were able to create an actual community of contributors around it and who made the project immensely more popular than it was when I stepped down almost 10 years ago, serving the needs of thousands of people along the way.

Offline and free

I’d like to remark one more time that the purpose of youtube-dl as a tool has barely changed over its 14 years of existence. Both before and after the RIAA’s DMCA letter was received, many people have explained how they use youtube-dl and with what goals in mind.

For me, it has always been about offline access to videos that are already available to the general public online. In a world of mobile networks and always-on Internet connections, you may wonder if that's really needed. It must be, I guess, if Netflix, Amazon, Disney and HBO have all implemented similar functionality in their extremely popular streaming applications. For long road trips, trips abroad (especially with kids), the underground, an airplane, or a place with poor connectivity or metered connections, having offline access to that review, report, podcast, lecture, piece of news or work of art is incredibly convenient.

An additional benefit of youtube-dl is online access when the default web interface is not up to the task. The old proprietary Flash plugin was not available for every platform and architecture. Nowadays web browsers can play video, but they sometimes fail to take advantage of efficient GPU decoding where it is available, wasting large amounts of battery power along the way. youtube-dl can be combined with a native video player to make playing some videos possible and/or efficient. For example, mpv includes native youtube-dl support: you only need to feed it a supported video site URL and it will use youtube-dl to access the video stream and play it without storing anything on your hard drive.

The default online interface may also lack accessibility features, may make content navigation hard for some people or lack color blind filters that, again, may be available from a native video player application.

Last, but not least, tools like youtube-dl allow people to access online videos using only free software. I know there are not many free, libre and open source software purists out there; I don't even consider myself one, by a long shot. Proprietary software is ever present in our modern lives, served to us every day in the form of vast amounts of JavaScript code for our web browsers to run, with many varied purposes and not always in the best interest of users. GDPR, with all its flaws and problems, is a testament to that. Accessing online videos with youtube-dl may give you a peace of mind that incognito mode, uBlock Origin or Privacy Badger can only barely approximate.
https://rg3.name/202011071352.html

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

November 7th, October 31st, October 24th, October 17th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black