P2P-Zone  

30-08-17, 07:37 AM
JackSpratts
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - September 2nd, ’17

Since 2002

September 2nd, 2017




File-Sharing Technology Goes Mainstream
John-Paul Rooney

File-sharing technologies, which were originally developed to let consumers access and download copyrighted material illegally, are now being used to build viable alternatives to cloud-based services.

Originating in the 1980s and 90s, peer-to-peer (P2P) file-sharing technologies were developed by organisations such as Napster and LimeWire to facilitate the illegal sharing of music, films and computer games. While such technologies were geared to individual users, they are now being adapted to construct private networks capable of sharing data across a number of different computers among a defined group of users.

This interest in traditional P2P file-sharing technology is being driven by growing demand for cheaper, easy-to-use data-storage and file-sharing services. To date, cloud-based services have had the edge over P2P file-sharing networks because their server farms can store vast amounts of data that can be accessed anytime, anywhere. By contrast, P2P file-sharing networks have relied on the storage and processing capacity of a number of individual computers and on users holding the appropriate access rights.

Despite some minor drawbacks in terms of functionality, the latest P2P file-sharing networks benefit from significantly lower storage costs than cloud-based services, which carry significant overheads in the form of hardware management and maintenance costs. In a P2P file-sharing network, storage costs are spread across each of the host computers. Eliminating the central dependency also makes such networks more reliable, as the failure of one peer cannot bring down the whole system.
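To make the cost-spreading and resilience point concrete, here is a minimal sketch (purely illustrative, not any vendor's actual implementation) of how a private P2P network might split a file into chunks and replicate each chunk across several peers, so that losing any single machine loses no data. The peer names, chunk size and replication factor are assumptions made up for the example.

import hashlib
from itertools import cycle

# Illustrative only: split a file into fixed-size chunks and place each chunk
# on several peers, so storage cost is spread and no single machine is critical.
CHUNK_SIZE = 1 << 20                      # 1 MiB chunks (assumption)
REPLICATION = 3                           # each chunk is stored on 3 peers (assumption)
PEERS = ["alice-laptop", "bob-desktop", "carol-nas", "dave-workstation"]  # hypothetical peers

def shard(data):
    """Yield (chunk_id, chunk_bytes, assigned_peers) for each chunk of the file."""
    ring = cycle(range(len(PEERS)))
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        chunk_id = hashlib.sha256(chunk).hexdigest()[:16]   # content-addressed chunk ID
        start = next(ring)
        assigned = [PEERS[(start + i) % len(PEERS)] for i in range(REPLICATION)]
        yield chunk_id, chunk, assigned

if __name__ == "__main__":
    data = b"example payload " * 200_000          # roughly 3 MiB of stand-in data
    for chunk_id, _, peers in shard(data):
        print(chunk_id, "->", peers)
    # With 3 copies of every chunk, any single peer can fail and the file survives.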

The main drawbacks of P2P file-sharing networks to date have stemmed from their decentralisation. For example, they can be difficult to manage because there is no single administrator responsible for switching every machine on or off, or for overseeing access rights and settings across the network. Backup and data recovery can also be more difficult, and the spread of malware can be harder to contain. Rather than putting off innovators, however, these drawbacks have presented a new engineering challenge.

Recognising the potential of P2P file-sharing technology, some of the best known service providers, such as Dropbox and Hightail, are now enriching their patent portfolios with new takes on technologies that were first developed for illegal file-sharing applications. In 2016, Dropbox filed a patent for a technology that would allow users to share and use documents that have not been saved onto a file-sharing network. A hybrid cloud and P2P file-sharing solution, this technology could be developed to allow encrypted data to be shared quickly and easily without having to go through a central resource.

Other companies are also moving in this direction. VMware, a subsidiary of Dell Technologies, has recently had a patent application published for a scheme which uses P2P technology when issuing software updates. In this case, when client devices on a local network need to update, they are first directed to peers for the software update, rather than to a central server. When applied to corporate networks, this technology could help to improve bandwidth utilisation and streamline systems management.
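VMware's filing itself is not public code, but the general peer-first pattern it describes (ask machines on the local network for the update artifact first, and fall back to the central server only if no peer has it) can be sketched roughly as below. The peer addresses, server URL and file path are hypothetical placeholders, and a real implementation would also verify a signature or hash published by the central server.

import urllib.request

PEERS = ["http://10.0.0.12:8080", "http://10.0.0.17:8080"]    # hypothetical LAN peers
CENTRAL = "https://updates.example.com"                        # hypothetical origin server

def fetch_update(package, version):
    """Try local peers first; fall back to the central server only on a miss."""
    path = f"/updates/{package}-{version}.pkg"
    for base in PEERS + [CENTRAL]:                 # peers first, central server last
        try:
            with urllib.request.urlopen(base + path, timeout=5) as resp:
                data = resp.read()
                print(f"fetched {package} {version} from {base}")
                return data                        # caller should still verify a signed hash
        except OSError:
            continue                               # peer offline or missing the file; try next
    raise RuntimeError(f"no source had {package} {version}")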

In a separate move, technology giant, IBM is attempting to patent technology that would allow enterprise network groups to share group files on a P2P basis without needing to access a central server. The company appears to be showing more interest in cloud-based services and is moving towards offering hybrid storage solutions.

AirWatch, a leading provider of mobile device management and enterprise mobility management solutions, has also been granted patent protection in the US for technology that allows enterprise users to access restricted files from peers, rather than from a central repository, but only once appropriate verification steps have been completed.

Second-guessing the direction that innovators will take in any field of research and development is never easy. However, in this case it is possible to imagine a world where P2P file-sharing networks could automatically direct certain documents or files to an individual user in the same way that photos taken on an iPhone are automatically backed up to the cloud. Such technological enhancements could start to reverse the rush to the cloud, giving P2P file-sharing networks a competitive edge in terms of both ease of use and cost.

In this dynamic area of R&D, as file-sharing technologies find mainstream application, there is an opportunity for innovators to patent a technology that is later adopted as a new operating ‘standard’. This could lead to lucrative licensing deals as other companies seek to adopt the same technology. In order to take advantage of this opportunity, innovators should focus on developing technology that finds widespread use and which makes P2P file-sharing efficient, easy-to-use, secure and reliable.
http://www.businesscomputingworld.co...es-mainstream/





Kim Dotcom Demos Micro-Payment Service to Help Stop Piracy
Mary-Ann Russon

Internet entrepreneur Kim Dotcom has demonstrated a new micro-payments service that is designed to let people charge small amounts of money for any content they create.

Bitcache will let users make and receive Bitcoin payments.

Mr Dotcom is currently fighting extradition to the US to stand trial for copyright infringement and fraud.

He said the platform will reduce online piracy by letting people pay for content from anywhere in the world.

Micro-payments

The idea behind Bitcache is to turn any file uploaded to the platform into its own "shop".

Creators can upload any type of content to the service - such as a video, a song, images or computer code - and then choose how much money they want to charge.

That can be anything from $1 (£0.77) up. Bitcache will help to distribute the file across file storage websites, torrent sites and community file-sharing sites.

Even if the file is copied and downloaded multiple times, it remains encrypted and cannot be opened unless the user pays the required amount of money.
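Bitcache's internals have not been published, but the basic mechanism described here (the file circulates freely in encrypted form, and the decryption key is released only once payment is confirmed) can be sketched along these lines using Python's third-party cryptography library; the payment check is a stand-in assumption, not Bitcache's actual verification logic.

from cryptography.fernet import Fernet    # pip install cryptography

def package_for_sale(plaintext, price_usd):
    """Encrypt the file so the ciphertext can be copied anywhere without being usable."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(plaintext)
    # The ciphertext gets distributed freely (torrents, file hosts, and so on);
    # the key stays with the payment service until the buyer pays.
    return ciphertext, key, price_usd

def unlock(ciphertext, key, paid_usd, price_usd):
    """Stand-in for the real payment check, which has not been made public."""
    if paid_usd < price_usd:
        raise PermissionError("payment not confirmed; the file stays locked")
    return Fernet(key).decrypt(ciphertext)

ciphertext, key, price = package_for_sale(b"bytes of a new single...", price_usd=1.00)
song = unlock(ciphertext, key, paid_usd=1.00, price_usd=price)   # succeeds only after payment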

The service, which will eventually include a web browser extension and a mobile app, would also let media organisations, YouTube vloggers and bloggers accept micro-payments from viewers.

So for example, when reading an online newspaper, watching videos on YouTube or reading a recipe, users could press a button on the page and pay a few cents for each piece of content they consume, using their Bitcache wallet.

Crowdfunded investment

Over $1m was raised on crowdfunding investment platform Bank To The Future in October 2016 to fund Bitcache, which is still under development.

The demo went live on Tuesday, and 185,000 requests were received asking for access, but only 10,000 invitations were sent out.

The service is set to launch in mid-to-late 2018.

The Bitcache project has so far received a lot of support from users on Twitter.

Mr Dotcom, the founder of content-sharing website Megaupload, said his technology will enable copyright holders to gain more revenue by making their content accessible in many countries.

He thinks this would act as a deterrent to piracy for people who are willing to pay for content, but are currently unable to get it from firms such as Netflix or Apple.

Using Bitcoin lets people make near-anonymous payments quickly, but the underlying technology is not geared up to handle millions of payments a minute.

Bitcache was built to let that many transactions be performed at speed.

"Content often becomes available in one place in the world, and when people are willing to pay, and they try to, they get the message that the content is not available in their country," Mr Dotcom told the BBC.

"I think the solution to the piracy problem is to offer content globally at the same time, at the same price.

"There will always be people who will pirate content - you can't stop that, but you can get to all the people who have the money to pay for content, but have no way to access it. That's about $10bn worth of revenue that is just being left on the side."

Ernesto van der Sar, editor of piracy news website TorrentFreak, told the BBC:

"I think Bitcache could help independent artists to spread their work to a larger audience and get paid for it at the same time. The more exposure the better.

"That said, I don't think that most people who currently pirate content are suddenly going to pay. They will look for free alternatives instead. These are often readily available, especially for mainstream entertainment."

Takedown requests

Mr Dotcom has been fighting extradition to the US since 2012, when his mansion was raided and his assets were seized.

The US Department of Justice has said Mr Dotcom and his associates enabled copyright infringement by letting users store pirated files in free cloud lockers.

Users posted links to the pirated content for others to download for free, but Megaupload would not close down lockers containing infringing content.

Mr Dotcom has long argued that he did not aid piracy because he had a takedown system that enabled copyright holders to delete links to pirated files, and without the link, a user could not reach the file.

To prevent such a situation occurring with Bitcache, the technology lets copyright holders take control of files containing their content.

"If someone pirates Game of Thrones and charges money for it, the content provider can find the link, report it to the system and then claim that content," said Dotcom.

"They can change the price point so the user will have to pay the real price, and no matter where the [file] is located online, the content provider will now receive all the payments."

This will not work on pirated files that have been uploaded via torrents, but he believes it will give content providers back control over their content, if they partner with his platform.

"The next generation of smartphones will transfer files in seconds," said Mr Dotcom.

"If content holders want to have any chance to combat piracy in a world that makes it increasingly easy to pirate, the best way is to turn every file into a shop.

"This is truly new - something like this doesn't exist yet."
http://www.bbc.co.uk/news/technology-41094797





b00tl3g kr3w is a Free Game About Pirating Software in the '90s

Features some hilarious riffs on Sonic and Mario.
Shaun Prescott

When my family first got a modem for our 486 it took us a while to figure out that, with this modem, it was possible to acquire software for free. But once we made this discovery, we went at it hard: setting up downloads that could sometimes take weeks in order to get our hands on [redacted]. The fact that, most of the time, these downloads wouldn't work, or they'd be riddled with viruses, was almost part of the fun.

Anyway, this free itch.io game, b00tl3g kr3w, harks back to that era. You play as a pirate and must collaborate with other pirates to upload cracked games, including riffs on Mario and Sonic. It's not a long game, but it's a nice little period piece, and it's created by the guy behind Shower With Your Dad Simulator. So you know it's going to be good. It's very funny.

The game was made as part of the Awful Summer Jam 2017, which had the theme "bootleg". Check the game out over here. Cheers, Rock Paper Shotgun.
http://www.pcgamer.com/b00tl3g-kr3w-...re-in-the-90s/





The Mayweather-McGregor Fight Shows It’s Impossible to Stop Social Media Streaming of Big Events

You could pay $100 to watch on pay-per-view, or hope your mom’s neighbor has a big projector in the backyard that she can FaceTime you from.
Kaleigh Rogers

Watching illegal livestreams of big-ticket events is a time-honored internet tradition. But this weekend's fight between boxing legend Floyd Mayweather Jr. and MMA star Conor McGregor opened a new chapter in online piracy. Newer social media livestreaming capabilities have made it far easier to instantly share live feeds, and if an event is hyped enough, illegal viewing has become an unstoppable inevitability.

Nearly 3 million viewers are estimated to have watched the fight this weekend via online streams, according to Irdeto, a digital security firm. Though many of these were slick, traditional streaming websites, there was also a new surge in social streams. Between Periscope, Instagram Live, Facebook Live, YouTube, Twitch, and smaller platforms like Kodi, Irdeto identified 239 streams of the fight over the weekend. And with the option to run private, share-with-just-your-friends streams (like private Facebook Live feeds), it's likely that many more streams of the fight were running that Irdeto wasn't able to track.

Social media livestreaming has exploded in recent years, creating a whole new avenue for illegal sharing. In 2015, when Mayweather squared off against Manny Pacquiao in another much-anticipated fight, Periscope was only two months old. Facebook's and Instagram's live feed functions were still a year away. Now, they're as ubiquitous as the platforms that host them.

Plus, with every smartphone now equipped with a high definition camera, most homes connected to high-speed internet, and the ease of streamable services on already-familiar social media sites, it's no wonder there was such a torrent of pirated feeds.

Often, these feeds were simply live videos pointed at a TV owned by someone who had paid the $100 pay-per-view fee to watch the fight live in HD. There's an entire subreddit dedicated to listing feeds of the fight, and even more people tuned in via streaming websites specifically advertising the fight, according to TorrentFreak.

Showtime was one of the few US broadcasters licensed to air the match, and was well aware that livestreams were going to take a bite out of its audience. In an effort to combat their impact, Showtime got an injunction last week against a number of streaming websites, forbidding them from streaming the fight. The venue didn't sell out for the fight: only about 14,000 attendees paid between $2,500 and $10,000 to watch it in person at the 20,000-seat T-Mobile Arena in Las Vegas.

But early estimates show the fight broadcast to theaters brought in $2.6 million alone, and many bars around the world charged admission for patrons to come watch the fight. Numbers from Showtime later this week will likely show that even with an avalanche of live-streaming options, you can still turn a profit from a fight between the best boxer in the world and a dude who has literally never boxed before. If anything, the number of livestreams is a testament to how popular the match was.

Gone are the days when you needed to have your hacker cousin set up an online mirror to illegally stream a live event. Now, thanks to social media, we're in an era where it will be impossible to prevent illegal viewing of any event that's hyped enough to warrant the audience. Savvy event organizers would do well to embrace a compromise: $99 for the pay-per-view, $9 for the pay-per-Periscope, perhaps?
https://motherboard.vice.com/en_us/a...-of-big-events





Pay-Per-View Issues Delay Start of Floyd Mayweather-Conor McGregor Fight

The Floyd Mayweather-Conor McGregor main event Saturday night in Las Vegas was temporarily delayed due to pay-per-view outages across the country before both fighters entered the ring shortly after midnight ET.

"Due to the overwhelming demand, capacity of cable systems around the country are being overwhelmed. They are shutting down and rebooting some of these cable systems," Showtime executive vice president Stephen Espinoza told ESPN's Sal Paolantonio.

In a text message to Paolantonio shortly before midnight ET, Espinoza said, "We are a go. Enough systems have rebooted to end the delay." The fighters entered the ring shortly thereafter, before Mayweather beat McGregor by 10th-round TKO just before 1 a.m. ET.

Speaking after the fight, Mayweather said PPV servers in Florida and California crashed, leading to the outages.

"We wanted to make sure everything was in the right place (for fans)," the undefeated boxing champion said of the decision to delay the start so that fans would be able to watch.

It was unclear how many people were affected by the outages. Among the cable carriers affected were Xfinity, Atlantic Broadband and Frontier.

Some cable carriers told customers that if the reboot was successful and they did receive a feed, it would be in standard definition and at a cost of $89.95, or $10 less than the $99.95 for high definition.

UFC officials and WME/IMG did not immediately reply to ESPN's repeated requests for comment.

Similar issues caused a delay in Mayweather's fight with Manny Pacquiao in 2015.

McGregor seemed unfazed by the delay, with UFC president Dana White telling ESPN's Brett Okamoto that the UFC star was calm during the delay.

Mayweather was said to be the same.

"Floyd is relaxed," boyhood friend Rod Carswell said in a text message to Paolantonio.

Added Mayweather spokesman Kelly Swanson: "If we have to wait, we wait."

Information from ESPN's Darren Rovell was used in this report.
http://www.espn.com/boxing/story/_/i...y-ppv-problems





Showtime Hit With Class-Action Lawsuit Over Failed Mayweather-McGregor Streams

Grainy video, errors and buffering streams weren't what fans paid $99 to see, according to the lawsuit.
Ashley Cullins

Before the sweat was dry in the ring following Floyd Mayweather Jr.'s defeat of UFC champion Conor McGregor on Saturday, Showtime had another major fight on its hands — a class-action lawsuit from customers unhappy because of streaming issues that plagued the fight and the lead-up bouts.

Portland, Ore., boxing fan Zack Bartel paid to stream the fight in high-definition through the Showtime app but says all he saw was "grainy video, error screens, buffer events, and stalls."

Bartel is suing Showtime for unlawful trade practices and unjust enrichment, alleging the network rushed its pay-per-view streaming service to market without securing the bandwidth necessary to support the scores of cord-cutting fans.

"Instead of being upfront with consumers about its new, untested, underpowered service, defendant caused likelihood of confusion and misunderstanding as to the source and quality of the HD video consumers would see on fight night," writes attorney Michael Fuller in the complaint filed late Saturday in Oregon federal court. "Defendant intentionally misrepresented the quality and grade of video consumers would see using its app, and knowingly failed to disclose that its system was defective with respect to the amount of bandwidth available, and that defendant’s service would materially fail to conform to the quality of HD video defendant promised."

The complaint, which is largely composed of screenshots and tweets, is seeking for each member of the class actual damages or $200 in statutory damages, whichever is greater.

The proposed class includes Oregon consumers who viewed Showtime's app advertisement on iTunes and paid $99.99 to stream the fight, but were unable to view the fight live on the app "in HD at 1080p resolution and at 60 frames per second, and who experienced ongoing grainy video, error screens, buffer events, and stalls instead."

Showtime senior vp sports communications Chris DeBlasio says anyone who had issues with a cable or satellite feed should contact their provider, but Showtime will handle complaints from anyone who bought the fight through Showtimeppv.com and the ShowtimePPV app.

“We have received a very limited number of complaints and will issue a full refund for any customer who purchased the event directly from Showtime and were unable to receive the telecast,” he says.

Pay-Per-View Live Events Inc. also sent The Hollywood Reporter an email that directed dissatisfied customers to their service providers. "Unfortunately, we are receiving a huge number of complaints from a large number of customers who are not using our services but a different provider (UFC)," says the message. "We can only express that we understand your pain for not being able to see the special event but again we are not the company that provided the stream or actual event. You will need to contact the actual provider such as Xfinity, Showtime, HBO, UFC.tv etc. to request your refund."

The plaintiffs are also represented by Geragos & Geragos.
http://www.hollywoodreporter.com/thr...treams-1033373





Box Office Disaster: Lackluster Releases, Mayweather-McGregor, Hurricane Harvey Create Slowest Weekend in Over 15 Years
Seth Kelley

In the grand scheme, it can seem like a small issue when compared with Hurricane Harvey -- the deadly natural disaster that tore through the Gulf Coast of Texas on Friday, dumping more than 20 inches of rain, according to the National Weather Service. But Harvey also had at least some impact on the business, forcing theater closures in South Texas. Still, the degree to which the storm hurt the bottom line of moviegoing is up for debate.

Another factor under inspection is Saturday evening’s boxing match, which saw Floyd Mayweather beat UFC star Conor McGregor with a 10th-round TKO. The fight was estimated to reap as much as $1 billion in revenue and was expected to be among the biggest pay-per-view draws in history. Viewership numbers will be released later in the week, but some analysts predicted the highly anticipated brawl could keep those who would ordinarily see a movie out of theaters.

All that said, no amount of outside factors can excuse the reality that no major releases this weekend managed to connect with audiences in a significant way. The overall box office this weekend is not expected to pass $65 million, and the top 12 films will gross less than $50 million. Those figures are the lowest in more than 15 years.

There have been lulls around this time in recent years. In 2014, the first weekend in September made $66 million overall. Two years before that, the Sept. 7-9 frame made $67 million overall and $51.9 million from the top 12. 2008 saw a similar slump over the Sept. 5-7 frame.

But not since late September 2001 have they dropped quite so low. The Sept. 21-23 frame in 2001 earned $59 million overall, and the top 12 made $43.5 million. The year before, Sept. 15-17 fell to $53.7 million for the weekend and $37.9 million for the top 12.

Back to the current day, once again, “Hitman’s Bodyguard” and “Annabelle: Creation” will top the charts for Lionsgate and Warner Bros., respectively. “Bodyguard” is expected to earn $10.1 million from 3,377 theaters -- combined with last weekend, its total domestic gross should be $39.6 million. And “Annabelle” will make $7.4 million from 3,565 locations, raising its current domestic cume to $77.9 million.

“We expect it to continue to perform well right into September,” said Lionsgate’s distribution president David Spitz.

Otherwise, TWC made two of the weekend’s biggest plays with the animated feature “Leap!” and the expansion of Taylor Sheridan’s “Wind River.” The former opened at 2,575 locations in North America and is expected to take in $5 million. The film was acquired for a low $3 million and, under the title “Ballerina” that it carries in every market outside the U.S., has already picked up $58.2 million from foreign locations. It’s billed as a musical adventure comedy about an orphan girl who aspires to become a dancer. The voice cast is led by Elle Fanning, and also includes Maddie Ziegler, Carly Rae Jepsen, Nat Wolff, Kate McKinnon, and Mel Brooks. Critics smushed it to 37 percent on Rotten Tomatoes, but audiences gave the film an A CinemaScore.

“It’s a tough weekend out there in the marketplace when a $5 million movie is ranked third,” remarked Laurent Ouaknine, distribution boss at TWC. “On our side, we have a film that audiences love,” he said, adding that, while the audience is predominantly young and female, they’re seeing that boys “that are coming with their family like it too.”

“Wind River,” meanwhile, should make an additional $4.4 million this weekend from 2,095 locations. The film, now in its fourth week of release, is intended as the conclusion of a trilogy that includes “Sicario” and “Hell or High Water.” During its first weekend at four theaters, the thriller scored one of the year’s best per-screen averages, but its mass appeal seems more questionable. “Hell or High Water,” which earned a best picture nomination at the Oscars, also made $4.4 million during its fourth weekend, but from fewer locations (1,303).

“We did decide to go a little bit wider,” Ouaknine said. “We saw the room in the marketplace, and that there was nothing new out there for the intended audience,” he added, touting that TWC is responsible for two of the top five films in the marketplace.

Also, "Birth of the Dragon" is opening at 1,618 locations to $2.5 million. That's below the $3.25 million goal set by the distributor. BH Tilt and WWE Studios co-acquired the film after its premiere at the 2016 Toronto Film Festival. The marketing campaign was inexpensive, focused on digital promotion and targeted events. The movie -- an homage to Bruce Lee's style of martial arts films -- lends its inspiration's name to the main character, played by Philip Ng. Set in 1960s San Francisco, the film sees Lee challenge kung fu master Wong Jack Man (Xia Yu) to an epic fight.

And Sony's "All Saints," from Affirm Films and Provident Films should earn $1.55 million from 846 locations. The faith-based film has a low budget, and is generally embraced by critics (89 percent on Rotten Tomatoes) and audiences (A- CinemaScore). John Corbett and Cara Buono lead the cast of the flick, directed by Steve Gomer. Steve Armour wrote the script, based on a true story about a salesman-turned-pastor and a group of refugees from Southeast Asia.

Despite the recent popular assertion that movie releases are moving to a year-round schedule with fewer dead zones, August remains a predictably sleepy month for theaters. Still, years past have managed bigger successes than we are seeing in 2017. Last year at this time, for example, Sony's Screen Gems launched "Don't Breathe," which grossed $26.4 million in its opening weekend. While a similar sort of horror hit would be difficult to position between "Annabelle" and September release "It," there is potential for movies to perform well at the tail end of summer. That "Wonder Woman" and "Baby Driver" saw their theater counts upped only further emphasizes that studios see the hole in the schedule -- they just aren't quite sure how to properly fill it.
https://uk.reuters.com/article/us-us...-idUKKCN1B70RZ





Hollywood is Suffering its Worst-Attended Summer Movie Season in 25 Years
Ryan Faughnder

As Hollywood wraps up the all-important summer box office season this Labor Day weekend, a sobering reality has gripped the industry.

The number of tickets sold in the United States and Canada this summer is projected to fall to the lowest level in a quarter-century.

The results have put the squeeze on the nation’s top theater chains, whose stocks have taken a drubbing. AMC Theatres Chief Executive Adam Aron this month called his company’s most recent quarter “simply a bust.”

Such blunt language reflects some worrisome trends. Domestic box-office revenue is expected to total $3.78 billion for the first weekend of May through Labor Day — a key period that generates about 40% of domestic ticket sales — down nearly 16% from the same period last year, according to comScore. That’s an even worse decline than the 10% drop some studio executives predicted before the summer began.

And the number of actual tickets sold this summer paints a bleaker picture, with total admissions likely to clock in at about 425 million, the lowest level since 1992, according to industry estimates.

No one can fully explain why. Studio executives, movie theater operators and analysts cited the usual explanations for the summer slump. There are the obvious reasons: Too many bad movies, including sequels, reboots and aging franchises that no one wanted to see. Some point to rising ticket prices, which hit a record high in the second quarter, according to the National Assn. of Theatre Owners. Then there are long-term challenges, including competition from streaming services such as Netflix and the influence of the movie review site Rotten Tomatoes. How about all of the above?

What is clear: This summer was marred with multiple high-profile films that flopped stateside, including “The Mummy,” “Baywatch,” “The Dark Tower” and “King Arthur: Legend of the Sword.” Sequels in the “Alien,” “Transformers” and “Pirates of the Caribbean” franchises also disappointed. (International ticket sales are helping to ease some of the pain.)

The business is also reckoning with broader, longer-term threats that have kept Americans from flocking to theaters the way they used to. People now have more entertainment options than ever, and cinemas have struggled to keep up, despite efforts to adapt with improved technology and services, industry analysts say. The problem is exacerbated by an unforgiving social media environment in which bad movies are immediately punished by online word of mouth.

Some worry that summer movies have simply lost their place as the top entertainment touchstones American consumers are talking about, as acclaimed shows such as “Game of Thrones” on HBO and “The Handmaid’s Tale” on Hulu dominate the cultural conversation.

“The floor beneath the entertainment market is not as stable as it was 10 years ago,” said Jeff Bock, box-office analyst with tracking firm Exhibitor Relations. “There's a lot of different things that monopolize people’s discussions, and most of them are not movies. The product is just not worth talking about.”

The long-term challenges are pushing studios to adapt. They’re discussing ways to make movies available for streaming earlier after their theatrical releases through iTunes and video-on-demand services, despite resistance from theater chains. MoviePass, a New York-based company that sells subscriptions to let people see a virtually unlimited number of movies, became a topic of heated debate when it recently lowered its monthly fee to $9.95.

Overall, the industry has been too slow to embrace changing viewer habits, some analysts say.

“The rest of the entertainment industry has evolved, and movies haven't,” said Doug Creutz, media analyst at Cowen & Co. “People are only going to see movies they think they have to see in theaters, and there aren’t that many of them.”

To be sure, the summer wasn’t all bad. The movies that succeeded did so by achieving critical acclaim, satisfying the desires of underserved audiences, and offering something fresh and original. Warner Bros.’ DC Comics film “Wonder Woman,” the summer’s top movie, grossed more than $400 million domestically by finally bringing a female superhero to the big screen. “Spider-Man: Homecoming,” Sony’s Marvel collaboration, was also a hit. Raunchy comedy “Girls Trip,” from Universal Pictures, collected $108 million by targeting black women. Christopher Nolan’s “Dunkirk” and Sony’s “Baby Driver” proved that original concepts can still draw big crowds to the theaters.

But those hits didn’t make up for the big misses. The explanations for the movies that didn’t work run the gamut and often contradict each other.

Some said audiences have tired of seeing the same old characters. Indeed, Universal’s “The Mummy” failed to deliver, and 20th Century Fox’s “Alien: Covenant” and “War for the Planet of the Apes” did significantly worse than their predecessors.

But so-called sequel fatigue doesn’t explain the success of “Guardians of the Galaxy Vol. 2” and “Despicable Me 3,” which were both big moneymakers.

R-rated comedies, usually a reliable source of studio profits, also fell on hard times this summer. Four out of the five major releases disappointed: Fox’s “Snatched,” Sony’s “Rough Night,” Paramount’s “Baywatch” and Warner Bros.’ “The House.” “Girls Trip” was the one exception, notably, after earning critical acclaim.

“It's not the genre itself that wasn't working,” said Nick Carpou, domestic distribution president for Comcast Corp.’s Universal Pictures. “People respond to good movies.”

To that point, some studio executives and filmmakers have blamed Rotten Tomatoes’ aggregated review scores for sinking certain movies before they even hit theaters. But how does that jibe with the success of Sony’s “The Emoji Movie,” which scored $77 million despite an overwhelming drubbing from critics (7% on Rotten Tomatoes)?

Many studio executives still chalked up the abysmal summer to the feast-or-famine nature of the box office, cautioning people not to overreact to short-term fluctuations that can be caused by a single flop or a weak month. August was unusually bereft of big studio films, with the exception of New Line’s “Conjuring” spinoff “Annabelle: Creation,” clocking in at $79 million so far. August ticket sales plummeted 35% from the same month last year.

“It's tough to say if there's a trend,” said Adrian Smith, president of domestic distribution at Sony Pictures. “There are a lot of movies on the horizon that audiences are going to respond to.”

The summer slump wiped out gains posted earlier in the year, when successes including “Get Out” and “Beauty and the Beast” propelled grosses. Since Jan. 1, 2017, films have done $7.5 billion in ticket sales from the United States and Canada, down 6% from a year earlier. That makes it unlikely this year will surpass the record $11.4-billion industrywide haul of 2016, even with a new “Star Wars” movie due in December. A much-needed winner could come Sept. 8, when Warner Bros. and New Line release their much-anticipated Stephen King adaptation, “It.” There are also high hopes for “Pitch Perfect 3” and “Justice League.”

Global film revenue — which hit a record $38.6 billion last year — continues to be a silver lining. Certain films have made up ground by doing well overseas after underperforming at home, particularly in China, the second-largest box-office market.

Still, the overseas grosses haven’t been able to completely offset weakness in the United States. The most recent “Transformers” movie grossed $228 million in China, nearly 30% less than the prior installment in the franchise. Disney/Pixar’s recently released “Cars 3” had a weak debut in the country, where overall ticket sales have cooled.

To Cowen & Co.’s Creutz, it’s a sign that Hollywood can’t rely on international sales to buttress its business forever.

“That’s pretty much run its course, too,” he said. “The trends we’re seeing here are also true in the rest of the world.”
http://www.latimes.com/business/holl...830-story.html





'Star Wars' Box Office: 'Phantom Menace' Remains One Of The Leggiest Blockbusters Ever
Scott Mendelson

Today is “Force Friday II,” which essentially means that a bunch of companies dropped their first wave of The Last Jedi toys and related merchandise into stores at 12:01 am last night. No word on whether or not you can get your new Porgs wet or feed them after midnight, but courage to the first soul who tempts fate. Anyway, there certainly was no new The Last Jedi trailer, and kudos to Walt Disney on that note. Why get one huge day of Star Wars news when you can have two days of Star Wars news?

But on this odd Star Wars-themed day, where fans celebrate where the real money from the movie is made, I wanted to take a moment to note something. I’ve written quite a bit about the legs for major openers, specifically in terms of Wonder Woman and the big openers that legged it out well past the debut weekend. And, at least over the last 15 years, few mega openers, or even movies that opened above $30 million, have been as leggy as Star Wars Episode One: The Phantom Menace.

Yes, I know, we all allegedly hate The Phantom Menace. It destroyed our childhood, wrecked Star Wars, etc., etc. But it also made an unholy amount of money in the summer of 1999. It earned $431 million in domestic release (around $754m adjusted for inflation) and $924m worldwide (on a $115m budget), sans any kind of IMAX or 3D upcharges, making it the third-biggest domestic grosser ever at the time (behind Titanic's $600m gross in 1997/1998 and Star Wars' lifetime total of $460m) and the second-biggest global grosser (behind Titanic's $1.8 billion gross).

As George Lucas said when promoting Attack of the Clones (paraphrasing): “I made More American Graffiti, I know what happens when people don’t like a sequel, no one goes.” But audiences did go and see The Phantom Menace in theaters that summer. More importantly, and contrary to the conventional wisdom that general audiences hated the movie as much as the hardcore fan base did, the film didn’t just snag a huge total box office sum 18.5 years ago. It played all summer long, becoming one of the leggiest blockbusters of the modern era.

Now there are two caveats going forward. First, the Liam Neeson/Natalie Portman/Ewan McGregor/Jake Lloyd sci-fi prequel opened on a Wednesday (with a then-record $28 million single-day gross), which slightly skews the multiplier in a positive direction. Second, at the time, theaters playing the anticipated summer blockbuster were required to keep the film in their higher-end auditoriums for longer than normal, which certainly played a role in its summer legs. But even setting aside that advantage, the film was incredibly leggy, especially when you consider the film’s reputation as a kind of Hollywood disaster.

To wit, Star Wars Episode One: The Phantom Menace earned $28 million in its debut Wednesday, which led to a $64.82m Fri-Sun and $105m Wed-Sun gross (in just 2,970 theaters). While the five-day debut was a record, its Fri-Sun gross was actually below the $74m Fri-Sun opening (of a $92m Fri-Mon Memorial Day bow) for The Lost World: Jurassic Park. And yeah, there was hand-wringing about whether merely scoring the second-biggest opening weekend of all time constituted a disappointment, but much of that talk went the way of the dodo after the film earned another $66m over its second Fri-Mon Memorial Day weekend. It crossed $200m in just 13 days, a record at the time.

Whether general audiences didn’t (comparatively speaking) want to deal with opening-weekend crowds, or whether hyperventilating reports of sold-out theaters made everyone think they couldn’t get a ticket, the film had a remarkable hold, dropping just 20% in its second Fri-Sun weekend. It fell 36% in its third weekend and then didn’t have another drop above 35% until its 17th weekend of release. In its first 23 weekends, it had exactly two frames where it dropped more than 35%. Yes, there were other big hits that summer (Austin Powers: The Spy Who Shagged Me, Tarzan, The Sixth Sense, etc.), but The Phantom Menace stood high above its relative competition.

And while we nerds may have been dissatisfied with the juvenile tone, the emphasis on plotting over character and a lack of action, general audiences were happy to take their friends and family to another Star Wars movie. And they did so, all summer long. The Phantom Menace earned just 15% of its initial domestic total (not counting the 2012 3D reissue) via its $64m Fri-Sun frame, giving it a 6.73x multiplier. That’s leggier than The Hangover, Shrek, Twister, Batman, and (among holiday openers) Terminator 2: Judgment Day, Independence Day, Armageddon, The Matrix, every Lord of the Rings/Hobbit movie, Pirates of the Caribbean: Curse of the Black Pearl and Frozen.

Among all movies that opened with at least $30 million in their Fri-Sun frames, it sits behind only (unless I missed one) Avatar ($77m/$760m), Jurassic Park ($50m/$357m), The Lion King ($41m/$312m), Sing ($35m/$270m), The Blind Side ($33m/$256m), Saving Private Ryan ($30m/$216m) and Night at the Museum ($30m/$251m). As you can see, it’s the second-leggiest $60m+ opener behind only the second-biggest grossing movie of all time and the third-leggiest $50m+ opener behind the second-biggest movie of all time and a film that was once the global box office champion and was still in theaters over a year after its initial release.
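The "multiplier" being compared throughout is simply the domestic total divided by the Fri-Sun opening. A quick check using the rounded figures quoted in this piece reproduces the 6.73x number and confirms The Phantom Menace trails the seven films listed above:

# Legs multiplier = domestic total / Fri-Sun opening, in millions of dollars,
# using the rounded figures quoted in this article.
films = {
    "The Phantom Menace": (64, 431),
    "Avatar": (77, 760),
    "Jurassic Park": (50, 357),
    "The Lion King": (41, 312),
    "Sing": (35, 270),
    "The Blind Side": (33, 256),
    "Saving Private Ryan": (30, 216),
    "Night at the Museum": (30, 251),
}

for title, (opening, total) in sorted(films.items(),
                                      key=lambda kv: kv[1][1] / kv[1][0],
                                      reverse=True):
    print(f"{title:<22} {total / opening:.2f}x")
# The Phantom Menace comes out to roughly 431 / 64 = 6.73x, the lowest of this group.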

It’s important to note that a blockbuster on the scale of The Phantom Menace was still a somewhat rare thing in the summer of 1999, as this was just before Harry Potter and the Sorcerer’s Stone, The Lord of the Rings: The Fellowship of the Ring and Spider-Man normalized the big-scale fantasy adventure. Even by 2002, a movie like Attack of the Clones was less of a unique snowflake. And to the extent that DVDs and piracy cut into theatrical moviegoing, both things were a much larger force by the time the second Star Wars prequel was released.

Nonetheless, the numbers speak for themselves. Lots and lots of folks saw The Phantom Menace in the summer of 1999. And, more importantly, despite its reputation as a franchise-killer, it was relatively well-liked by the general audiences (kids and otherwise) who just wanted a fun Star Wars movie. You can get a $105 million five-day debut from hype and anticipation. But you don’t get $431m total from a $64m Fri-Sun opening weekend without folks being at least somewhat satisfied, going back multiple times and/or recommending it to their friends.

I write now and then about films that become tagged as being flops or disappointments despite making oodles of money, either due to the expectation that they would make more or the overall critical narrative skewing the box office result. It was Avengers: Age of Ultron and its shameful $1.4 billion gross that coined the whole “superhero fatigue” thing, and people still treat Waterworld as an epic flop (it broke even). So, on Force Friday II, I thought it would be a good time to remind folks that The Phantom Menace didn’t destroy the Star Wars franchise or do anything other than make an unholy amount of money.

Moreover, it made that money in a way that suggested that most paying consumers liked what they saw and/or came back for seconds. And, yeah, it created a new generation of Star Wars super-fans, to whom The Phantom Menace, Attack of the Clones and Revenge of the Sith are as vital as the original trilogy was to us older folks. That doesn’t mean you have to retroactively like the prequels (it's not like Night at the Museum is a good movie just because it had an insane Christmas multiplier), but we should take a moment to note that Star Wars Episode One: The Phantom Menace is one of the leggiest mega-openers of all time.
https://www.forbes.com/sites/scottme...kbusters-ever/





Sharp Announces an 8K TV Now that You’ve Upgraded to 4K

The ultimate reality
Thuy Ong

Now that you’ve upgraded to a shiny new 4K TV, Sharp has revealed its latest screen to stoke your fear of missing out: a 70-inch Aquos 8K TV. That 8K (7,680 x 4,320) resolution packs 16 times the pixels of your old Full HD (1,920 x 1,080) TV. Sharp calls it “ultimate reality, with ultra-fine details even the naked eye cannot capture,” which doesn’t seem like a very good selling point.

Keep in mind that a screen with more pixels doesn’t buy you much after a certain point, because those pixels become indistinguishable at a distance — while an 8K panel would be beneficial as a monitor, where you’re sitting close, it won’t do much for you when you’re leaning back on the couch watching TV. HDR, however, is something else entirely, and fortunately, Sharp’s new 8K set is compatible with Dolby Vision HDR and BDA-HDR (for Blu-ray players).
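The "invisible pixels" argument comes down to angular resolution. A common rule of thumb is that 20/20 vision resolves roughly 60 pixels per degree, so whether 8K beats 4K depends on screen size and seating distance. The back-of-the-envelope sketch below uses that 60 ppd figure as an assumption, together with a 70-inch 16:9 panel viewed from eight feet, to show where the extra pixels stop mattering.

import math

ACUITY_PPD = 60.0            # assumed limit of 20/20 vision, in pixels per degree

def pixels_per_degree(diagonal_in, horizontal_px, distance_in):
    """Pixels per degree of visual angle for a 16:9 panel at a given viewing distance."""
    width_in = diagonal_in * 16 / math.hypot(16, 9)       # panel width in inches
    pixel_pitch = width_in / horizontal_px                # size of one pixel in inches
    deg_per_pixel = math.degrees(2 * math.atan(pixel_pitch / (2 * distance_in)))
    return 1 / deg_per_pixel

for label, h_px in [("1080p", 1920), ("4K", 3840), ("8K", 7680)]:
    ppd = pixels_per_degree(diagonal_in=70, horizontal_px=h_px, distance_in=96)  # 8 ft away
    verdict = "already finer than the eye resolves" if ppd > ACUITY_PPD else "extra pixels still visible"
    print(f"{label}: about {ppd:.0f} pixels per degree on a 70-inch panel at 8 ft ({verdict})")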

"Where’s the content?"

The lack of available 8K HDR content is also a problem. But there is some content floating around; last year Japan’s public broadcaster NHK began the world’s first regular satellite broadcasts in 8K resolution. Sharp also plans to develop more 8K products to form an ecosystem that includes broadcast receivers and cameras.

Sharp plans to roll out the TV later this year in China and Japan, and then Taiwan in February 2018. Sharp is repurposing its 70-inch 8K TV as an 8K monitor (model LV-70X500E) for Europe, which will be on sale in March. Sharp hasn’t indicated how much its 8K models will cost yet but a source told the Nikkei Asian Review that they’ll start at about 1 million yen, or about $9,000. The company had previously released an 85-inch 8K monitor in Japan that cost over $100,000.

There’s no detail about a US release, despite Sharp’s parent Foxconn announcing plans to build a $10 billion LCD factory in Wisconsin.
https://www.theverge.com/2017/9/1/16...k-tv-ifa-aquos





Data Analysis Identifies a Surprising Global Capital for Pirating Game of Thrones Episodes
Corinne Purtill

When “The Dragon and the Wolf” airs tonight, the finale of the penultimate season of Game of Thrones could become the HBO hit’s most-watched episode ever. As Quartz’s Ashley Rodriguez has reported, the series has set viewership records this season, despite hacks that leaked some episodes online early.

As is always the case with the most pirated television show in history, an astounding number of people will be watching the show illegally. For a sense of how big the piracy problem is for HBO, consider that in the few days following its release, the first episode of Game of Thrones season 7 was viewed through official channels more than 16 million times, and was illegally viewed over 90 million times around the globe. While HBO offers an array of options to watch the show on the right side of the law, the internet offers even more ways to skirt it. But where are all these pirates coming from?

An interesting explanation comes from alpha60, a University of California, Berkeley-funded project that measures piracy traffic. It’s led by the wife and husband team of Abigail De Kosnik, a Berkeley associate professor of new media, and Benjamin De Kosnik, an artist and software engineer.

The De Kosniks analyzed traffic on BitTorrent, one of the most popular ways to download Game of Thrones illegally, in the week following the July 16 premiere of “Dragonstone,” the first episode of the current season. By looking at VPN data, they were able to pinpoint where illegal downloads were happening. Seoul led the globe in absolute number of downloads, but Dallas, Texas, had the highest share of GOT pirates by population.


Cities with most pirated GOT downloads, July 16-22, 2017

Seoul, Republic of Korea
Athens, Greece
São Paulo, Brazil
Guangzhou, China
Mumbai, India
Bangalore, India
Shanghai, China
Riyadh, Saudi Arabia
Delhi, India
Beijing, China


Cities with highest percentage of pirated GOT downloads by population, July 16-22, 2017

Dallas, US
Brisbane, Australia
Chicago, US
Riyadh, Saudi Arabia
Seattle, US
Perth, Australia
Phoenix, US
Toronto, Canada
Athens, Greece
Guangzhou, China

A caveat: the authors noted that their analysis couldn’t determine whether traffic in those cities was real or the result of “geospoofing”—phony addresses claimed by VPN users to hide their real locations. Just like those ravens with an uncanny ability to move through time and space, Game of Thrones pirates may be manipulating locations, too.
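alpha60's actual pipeline has not been released, but the kind of aggregation the article describes (count observed downloads per city, then rank both by raw volume and per head of population) is straightforward to sketch. Everything in the snippet below, from the peer addresses to the geolocation table and population figures, is a made-up placeholder standing in for the project's real data sources.

from collections import Counter

# Made-up stand-ins: real inputs would be observed BitTorrent peer addresses,
# an IP-geolocation database and census population figures.
peer_sightings = ["93.184.216.1", "93.184.216.2", "203.0.113.7", "198.51.100.9"]
ip_to_city = {"93.184.216.1": "Seoul", "93.184.216.2": "Seoul",
              "203.0.113.7": "Dallas", "198.51.100.9": "Dallas"}
population = {"Seoul": 9_700_000, "Dallas": 1_300_000}

downloads = Counter(ip_to_city[ip] for ip in peer_sightings if ip in ip_to_city)

by_volume = downloads.most_common()                                   # absolute counts
per_capita = sorted(downloads, key=lambda city: downloads[city] / population[city],
                    reverse=True)                                     # share of population

print("most downloads:", by_volume)
print("highest per-capita rate:", per_capita)
# The study's caveat applies here too: VPN exit nodes ("geospoofing") can make
# a download appear to come from the wrong city.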
https://qz.com/1063142/how-people-ar...ine-illegally/





Lawmakers Considering Net Neutrality Repeal Got Three Times More Donations From ISPs

Four internet service providers that support a Trump administration plan to eliminate net neutrality rules have outspent online giants such as Facebook, Amazon, and Google by a 3-to-1 margin.
Frank Bass

Four major internet service providers that support a Trump administration plan to eliminate net neutrality rules have outspent online giants such as Facebook, Amazon, and Google by a 3-to-1 margin when it comes to contributing money to members of a key House panel, according to a MapLight analysis.

The companies–Comcast Corp., AT&T, Verizon, and Charter Communications–have contributed $1.9 million to the 55 members of the House Energy and Commerce Committee since 2015. More than $1.2 million has gone to Republican lawmakers, who have scheduled a Sept. 7 hearing to consider legislative options for repealing Obama-era regulations that require internet companies to treat all content equally.

The issue also continues to simmer at regulatory agencies. The Federal Communications Commission announced last week that it will give Americans more time to weigh in on a regulatory proposal that would essentially eliminate net neutrality. The comment period, originally set to expire Wednesday, was extended to Aug. 30 after drawing more than 20 million responses.

The Trump administration’s push to eliminate net neutrality rules marks the second major policy shift with the potential to significantly erode online consumer rights. President Donald Trump signed legislation in April allowing internet providers to sell customers’ private browser histories, despite a March HuffPost/YouGov poll showing that 83% of adults considered it to be a bad idea.

Support for rolling back net neutrality rules isn’t much more popular. Sixty percent of registered voters said in a Morning Consult/Politico poll that the FCC shouldn’t allow internet providers to “block, throttle, or prioritize certain content on the internet.” The FCC proposal is generally favored by large internet service providers and opposed by major content companies such as Facebook, Alphabet, Amazon, and Netflix. Internet content providers worry that phone and cable internet providers could discriminate against them by charging extra for their services or slowing speeds to their websites.

First-Time Testimony?

The Sept. 7 hearing is expected to draw even more attention to the net neutrality issue since it’s possible that it will mark the first time that the chief executives of Facebook, Amazon, Google, and Netflix testify before Congress. Even though the four content providers have more than twice the combined market capitalization of Comcast, AT&T, Verizon, and Charter, they’ve only spent about $570,000 wooing panel members since the beginning of 2015.

When it comes to campaign contributions to committee members, the four major content providers supporting net neutrality have focused on boosting local politicians. Since early 2015, Rep. Anna Eshoo, a Palo Alto Democrat, has been the biggest beneficiary of contributions from Facebook and Google. Eshoo has received $19,100 from Menlo Park-based Facebook and $26,000 from Google, based in Mountain View. Amazon, headquartered in Seattle, has given $15,500 to Rep. Cathy McMorris Rodgers, a Spokane Republican who described the Obama-era net neutrality rules as a “heavy-handed approach.” Netflix hasn’t made any contributions to committee members since 2015, according to Federal Election Commission records.

Rep. Greg Walden, an Oregon Republican who leads the House Energy and Commerce Committee, collected more contributions from companies whose representatives have been invited to the Sept. 7 hearing than any other panel member. Walden has received $125,300 from the eight companies, including $49,600 from Comcast Corp., which has given $586,850 to panel members since the beginning of 2015.

Rep. Frank Pallone, the ranking Democrat on the panel, reported receiving $72,300 from the eight companies during the same period. Like Walden, the biggest chunk of contributions to Pallone by the eight companies came from Comcast, which gave $23,900 to his campaign fund. The New Jersey legislator said the rollback of net neutrality rules will “undermine the free and open internet, and hand its control over to a few powerful corporate interests.”
https://www.fastcompany.com/40459763...ions-from-isps





Net Neutrality Advocates Release More Crowdfunded Billboards Exposing Key Lawmakers Who are Supporting the FCC’s Net Neutrality Repeal

FOR IMMEDIATE RELEASE, August 29, 2017
Contact: Evan Greer, press@fightforthefuture.org, 978-852-6457
New billboards in three states single out members of Congress who support the FCC’s plan to gut rules stopping ISPs from charging new fees, slowing traffic, or blocking websites

Today digital rights organization Fight for the Future unveiled 3 more crowdfunded billboards targeting Representatives Cathy McMorris Rodgers, Bob Latta, and Greg Walden, members of Congress who have publicly supported the FCC’s efforts to gut net neutrality protections that keep the web free from censorship, throttling, and extra fees. The three new billboards are the latest in an ongoing campaign focused on lawmakers who oppose Internet freedom. Earlier this month the group launched an initial round of net neutrality billboards targeting six different lawmakers in states across the country.

The move comes just hours before the FCC’s final deadline for public input on their controversial plan to repeal net neutrality. With lawmakers still in their home districts, the billboards - paid for by hundreds of small donations - appear in three different states.

See PHOTOS of the 3 new billboards here: https://imgur.com/a/UHfXJ

Since the massive July 12th day of action, millions have contacted their representatives – who have oversight over the FCC – to ensure these key protections are not changed or removed. The billboards send a strong message to any Members of Congress contemplating support for the FCC’s plan to repeal net neutrality, which is currently being tracked through a “congressional scorecard” on BattleForTheNet.com. So far very few lawmakers have been willing to publicly support Ajit Pai’s plan, likely in light of polling that shows voters – including Republicans – overwhelmingly oppose it.

The billboards encourage constituents to contact their elected representatives; for example, Committee on Energy and Commerce Chairman Rep. Greg Walden’s (R-OR) billboard in Medford, Oregon asks, “Want slower, more expensive Internet? Rep. Walden supports CenturyLink’s plan to destroy net neutrality. Ask him why: (541) 776-4646.”

The outdoor ads feature some of the few members of Congress who came out with early support for FCC’s plan to repeal net neutrality rules, including:

• Spokane, WA – Rep. Cathy McMorris Rodgers (N. Monroe Street at W. Broadway Ave)
• Findlay, OH – Rep. Bob Latta (corner of E Main Cross St and East St.)
• Medford, OR – Rep. Greg Walden (N. Pacific Hwy at Elm Ave)

“It doesn’t matter which party you’re in, or how charming you are on TV – if you attack net neutrality and Internet freedom we will make sure everyone knows that you’re corrupt to the core,” said Evan Greer, campaign director of Fight for the Future (pronouns: she/hers). “Every member of Congress should take note: supporting the FCC’s plan to allow censorship, throttling, and price gouging may get you a few extra campaign donations from big telecom companies, but it will infuriate your constituents, and will come with a serious political cost.”

The billboards highlight the increasing scrutiny on Congress, which has important oversight authority over the FCC. With no viable legislation on the table, net neutrality supporters remain opposed to any attempt at legislation that would undermine the strong rules at the FCC, which were fought for by millions of Americans, and are calling on lawmakers to publicly oppose Ajit Pai’s plan and to require the FCC to act with transparency and address serious irregularities in its rulemaking process.

Fight for the Future was also one of the leading organizations behind the historic Internet-Wide Day of Action for Net Neutrality on July 12, which drove a record-breaking 2 million+ comments to the FCC and Congress in a single day. Learn more at fightforthefuture.org
https://www.fightforthefuture.org/ne...e-crowdfunded/





Apple Calls for FCC to Keep 'Strong, Enforceable' Net Neutrality Protections
Malcolm Owen

Apple has written to the U.S. Federal Communications Commission in support of net neutrality, with its four-page commentary arguing for the government agency to "retain strong, enforceable open internet protections" instead of rolling back the rules forbidding "fast lane" internet connections.

"An open internet ensures that hundreds of millions of consumers get the experience they want, over the broadband connections they choose, to use the devices they love, which have become an integral part of their lives," starts the comment signed by Cynthia Hogan, Apple's Vice President of Public Policy for the Americas.

Citing a "deep respect" for its customers' privacy, security, and control over personal information, Apple believes this extends to their internet connection choices as well. "What consumers do with those tools is up to them - not Apple, and not broadband providers," the statement claims, before urging the FCC to keep advancing the key principles of net neutrality.

Based on a belief in consumer choice with regard to connectivity, Apple insists broadband providers should not "block, throttle, or otherwise discriminate against lawful websites and services," and not create "paid fast lanes on the internet." Lifting the current FCC bans on these practices could allow broadband providers to favor one service over another, "fundamentally altering the internet as we know it today - to the detriment of consumers, competition, and innovation."

Allowing such fast lanes could result in an internet with heavily distorted competition, as online providers are forced to make deals or risk losing customers because of a hampered service. Apple suggests the practice could "create artificial barriers to entry for new online services, making it harder for tomorrow's innovations to attract investment and succeed," effectively turning broadband providers into king-makers based on their own priorities.

Apple believes internet providers should disclose their traffic management policies to consumers, a move which could help consumers make informed choices about their broadband services. The transparency would also assist online service providers, who need clear information about the policies to understand how their services will be delivered to consumers.

Competition between providers of the "last-mile broadband connections" is considered crucial for protecting an open internet, with Apple citing FCC data claiming 57 percent of Americans with fixed broadband connections meeting or exceeding the current FCC benchmark for advanced broadband services have the choice of only one provider. This inability to switch broadband providers, even if a consumer discovers the company is not following net neutrality rules, means consumers "cannot make their voices heard through their market choices."

Apple also writes that the open internet "fosters innovation and investment," with new online services being created based on a level playing field without interference from broadband providers. The increased demand from consumers for online services drives a need for faster and better connections, in turn prompting network investment from broadband providers, in what Apple calls a mutually reinforcing virtuous circle that benefits consumers, productivity, and economic growth.

"Apple remains open to alternative sources of legal authority, but only if they provide for strong, enforceable, and legally sustainable protections, like those in place today," the end of the comment reads. "Simply put, the internet is too important to consumers and too essential to innovation to be left unprotected and uncertain."

The submission from Apple surfaces at the end of a period where the FCC accepted comments about the "Restoring Internet Freedom" initiative. The proposal, created under the leadership of FCC Chairman Ajit Pai, aims to reverse a decision made under former Chairman Tom Wheeler in 2015 that introduced regulation of internet providers as "Title II" common carriers.

Other technology companies, including Microsoft, Google, Amazon, and Twitter, have also spoken out against the proposal, alongside a number of major sites that took part in a "Day of Action" on July 12, to raise awareness of the plans to internet users and to encourage comment submissions to the FCC.
http://appleinsider.com/articles/17/...ty-protections





Even Many ISP-Backed Allies Think Ajit Pai's Attack On Net Neutrality Is Too Extreme
Karl Bode

With its quest to gut net neutrality, privacy and other consumer broadband protections, the FCC is rushing face first toward stripping meaningful oversight of some of the least-liked -- and least competitive -- companies in America. The FCC's plan, based on flimsy to no data and in stark contrast to the will of the public, involves gutting most FCC oversight of broadband providers, then shoveling any remaining authority to an FTC we've noted is ill-suited, under-funded, and legally ill-equipped for the job. That's a real problem for a sector that's actually getting less competitive than ever in many markets.

Giant ISPs and their armies of policy allies often try to frame the effort as a noble quest for deregulation, often insisting they're somehow "restoring internet freedom" in a bare-knuckled attempt to pander to partisan constituents. But by any sane measure the FCC's quest is little more than a massive gift to despised duopolies like Comcast -- at what might be the worst possible time for a severely dysfunctional industry. But there are signs that even many traditional big ISP allies think Ajit Pai's plan is absurdly extreme.

Hal Singer is an economist the telecom industry has often hired to manipulate data in order to make all manner of flimsy claims (from falsely stating net neutrality stifled network investment to falsely claiming net neutrality would dramatically raise taxes). But last week even Singer came forward to acknowledge that the FCC's plan to shovel net neutrality and other ISP oversight to the FTC won't fly. While Pai has repeatedly claimed that FTC authority and existing antitrust laws are enough to protect consumers from companies like Comcast, Singer disagrees:

"Singer lists several roadblocks to stopping discriminatory paid prioritization via antitrust. "Monopolists are generally free from legal constraints to choose their suppliers and engage in price discrimination under the antitrust laws," he wrote.

Antitrust laws are designed to protect competition, but "competition is not the only value that net neutrality aims to address: end-to-end neutrality or non-discrimination is a principle that many believe is worth protecting on its own," he wrote.

"Moreover, antitrust litigation imposes significant costs on private litigants, and it does not provide timely relief; if the net neutrality concern is a loss to edge innovation, a slow-paced antitrust court is not the right venue," he also wrote."

Of course there's also the fact that AT&T is currently engaged in a legal battle with the FTC over its network throttling that could hamstring the agency's authority over ISPs even further. If AT&T wins that court fight, the FTC has previously warned that it could open the door to all manner of companies dodging responsibility for unfair or deceptive business practices -- provided some small fraction of their business enjoys common carrier status. That could result in tiny acquisitions specifically designed to free any number of non-telecom companies from accountability, noted the FTC last year:

"Many companies provide both common-carrier and non-common-carrier services—not just telephone companies like AT&T, but also cable companies like Comcast, technology companies like Google, and energy companies like ExxonMobil (which operate common carrier oil pipelines). Companies that are not common carriers today may gain that status by offering new services or through corporate acquisitions. For example, AOL and Yahoo, which are not common carriers, are (or soon will be) owned by Verizon."

If you're the type of non-nuanced thinker that truly believes that all regulation is automatically evil without bothering to actually analyze the regulation, this whole idea probably sounds good to you. But telecom isn't a normal industry; it suffers from regulatory capture on both the state and federal level, which acts to prop up noncompetitive duopoly fiefdoms nationwide. Removing oversight of this sector without fixing any of the underlying corruption and dysfunction doesn't magically forge Utopia; it simply makes companies like Comcast less accountable than ever. And again, with broadband competition diminishing as many telcos refuse to upgrade their networks, that's a recipe for disaster.

Said disaster would likely result in greater calls than ever for tougher oversight and rules governing ISP behavior (aka monumental backlash during any post-Trump Presidency), which is likely why you're seeing Singer -- and even industry-backed groups like the ITIF -- calling for a more measured approach than Pai and friends are offering:

Interesting because ITIF was a prominent voice opposing Title II for net neutrality rules. @AjitPaiFCC's proposal is pretty extreme. https://t.co/WzL09HWnbL

— The real Jon Brodkin (@jbrodkin) August 28, 2017

Of course this may have been Pai's plan all along; to offer an extreme frontal assault on net neutrality and FCC authority that would subsequently make any resulting "compromises" seem almost sane. But these end proposals would all likely be far weaker than the somewhat flimsy net neutrality protections we already enjoy. We've noted that's one of the reasons ISPs are pushing for a new Congressional law they claim would "settle the issue once and for all," hoping the public won't realize said law would be notably more tepid than the existing FCC protections -- since ISP lobbyists and lawyers would be the ones writing it.

Again, there's a far-simpler trajectory than the chaotic, disruptive and despised one proposed by Ajit Pai: leave FCC authority, and the popular. existing net neutrality rules, alone.
https://www.techdirt.com/articles/20...-extreme.shtml





The FCC.gov Website Lets You Upload Malware Using Its Own Public API Key
Guise Bule

Somewhat incredibly, I am the first tech writer on the planet to break this story, but even more incredible is the fact that the FCC lets you upload any file to their website and make it publicly accessible using the FCC.gov domain.

Or rather they don’t, but they have somehow not realized that they are letting people do it and telling them how in their own documentation.

Take a look at this document about FCC Chairman Ajit Pai which has clearly not been put there by anyone who works at the FCC, neither has this one.

Those currently uploading files are able to do this using the FCC’s own public API and an API key that the agency seems to send to anyone with any email address.

I am not going to tell you how and obviously I have never actually done this myself, but if you have enough of the right kind of technical experience the public FCC API documentation tells you all you need to know.

From what I can see happening on Twitter, people seem to be experimenting with uploading different file types, and so far they have managed pdf/gif/ELF/exe/mp4 files up to 25MB in size.

This means that you could easily host malware on the FCC.gov website and use it in phishing campaigns that link to malware on a .gov website.

So far those with the technical chops have discovered that you can upload video and play it back using an FCC.gov link; some have been having trouble uploading, while others playing with the vulnerability clearly are not.

Check out this funny FCC.gov-hosted picture. It was the first image hosted, but I am not going to link to any others, for reasons you can imagine.

This is clearly hugely embarrassing for the FCC, and even though they seem to have disabled public API use until they investigate further, I am told that their demo API still works just fine and all the content is still hosted.

We can’t have people uploading fake communications carrying an FCC letterhead and pretending they are real documents; the potential for fraudulent use is ridiculously high and this vulnerability is still being abused.

This story is so new that it hasn’t hit the mainstream tech media yet (Update: The Register, Gizmodo and Breitbart covered this story) and even though we only just publicly realized this vulnerability existed, who knows how long it has been abused by people who found it earlier?

**** UPDATE : Interview with OP ****

I have just finished interviewing the guy who sent that very first cuck PDF up onto the FCC website and he has asked me to keep his name confidential for now until we see how this story plays out tomorrow in the media.

I verified his account by checking the original PDF document’s metadata; it was created long before the first mention of this story on the web, long before I first noticed others using the vulnerability, and before I wrote this.

OP is legit. He stumbled across this vulnerability, then stumbled across my story and reached out to me to talk, agreeing to go on record.

He did this because he knows that I protect my sources.
Always have, always will.

OP was commenting on the FCC.gov website just before the midnight deadline when he realized that the system assigned a URL to an attached file before the comment was even posted.

The “express” comment filing system that most people are using does not allow you to attach files; OP was using the more ‘robust’ filing feature, which does.

FCC.gov Commenting UI

OP was upset about Net Neutrality and decided to create a document containing the now immortal sentence and upload it to the FCC.

OP is a 20-year-old university student who was goofing off from his homework and decided to have some fun. He saw it as a dumb joke and had no idea that things would get so out of hand, or that others would follow his lead.

He also did not think anyone would notice his PDF; otherwise, he told me, he would have written the document in a more mature way.

It’s also important to note that OP believes he never agreed to the FCC.gov TOS because he never applied for an API key; he just got the URL through their faulty comment system, no hacking involved.

This is absolutely true: the FCC doesn’t enforce its TOS anywhere. You can sign up here and here without ever having to agree to a terms of service agreement of any kind, so OP seemingly didn’t break their TOS.

OP is scared, and a lot of you are making him really worried about this, so it’s worth noting that he did not actually hack anything to upload his PDF.
This kind of talk has OP worried.

OP has already written to the EFF to ask for advice, he really does believe he is about to enter a world of pain for this, just as he is beginning his professional career and interviewing for jobs.

He thought that nobody would see it, so he took no privacy precautions.

I think we can all agree that OP was foolish, but fingers crossed nobody will harshly punish him for what is very obviously a flaw in the FCC website and a huge gaping hole in the FCC’s cybersecurity posture.
https://medium.com/contratastic/the-...e-bdcd5c1a5b8b





End the Tyranny of Cable!

Cord-cutting FUD takes an absurd turn in which more competition and choice is somehow bad for consumers.
Jared Newman

TV and tech pundits have for years derided cord cutting with bogus arguments. They’ve claimed, for instance, that dropping cable TV won’t really save you money, that it will ruin quality television, and that it might even break the internet. These claims almost always ignore the evidence to the contrary, whether it’s the sky-high average cost of cable TV, the glut of prestige programming on streaming services, or steady advancements in online video technology.

Now the cord-cutting naysayers are trotting out a new argument in favor of cable, and it’s even more absurd than the old ones: Having too many high-quality, standalone streaming services, they say, is actually bad for consumers, who are apparently helpless at using technology or making sound purchase decisions.

This argument has appeared in several stories over the past week, likely prompted by Disney’s plan to launch its own standalone streaming service and pull its movies from Netflix in 2019. The idea of having another service to choose from is just too much for our poor pundits to bear.

Here’s Anurag Harsh at Huffington Post, describing the horror of having multiple, reasonably priced options for high-quality drama:

What is agonizing for consumers is the endless list of “must watch” TV shows, that are speckled across an incredibly fragmented marketplace. Whether it be Game of Thrones (HBO), House of Cards (Netflix), or The Man in the High Castle (Amazon Studios), the cost of juggling these services can quickly add up for a consumer.

...

What initially sounded like a great idea for consumers is now looking like a great deal for everyone but the viewer. The appeal of streaming or subscription video on demand (SVOD) services was the ability for consumers to only pay for what they watch. The reality, however, is having to manage multiple subscriptions to experience a degree of choice.

A similar story turned up at the Washington Post, where Hayley Tsukayama and Sintia Radu wax nostalgic for the days of limited choice and competition:

In the old days of video streaming—that is, not so long ago—consumers could cut the cable cord and subscribe to one or two services, enjoying a vast array of movies and television programming at a rate far less than the monster cable bill.

It’s not so simple anymore. ...

[T]he move toward streaming—though consumers have been demanding it for years—is proving to be a more fragmented experience than many have anticipated. Entertainment companies are now running services with increasingly narrow offerings, looking to hit consumers up for more subscription revenue wherever possible.

Meanwhile, the New York Post’s Johnny Oleksinski concluded that all those sneering hipsters who’ve had the nerve to ditch cable are about to get their comeuppance—in the form of additional services to choose from:

Remember when streaming services were youthful and rebellious? Smug even? If you cut the cable cord and became a follower of the Church of Netflix, you were seen as practical and forward-thinking. You were saving money (spending just $8 instead of somewhere in the ballpark of $100), reducing space (no bulky cable box), and, oh, what impeccable TV taste you had.

“Oh, no, I don’t have cable anymore—just Netflix,” you’d brag and take another sip of Sancerre.

And, at first, you were right. Netflix was a killer deal: an inexpensive trove of TV shows, movies and burgeoning original content that made for a perfect couple hours of vegging out after work. Taken on its own, it still is.

But the same people who hate corporate oppression love prestige programming. And in the past three years, every major streaming service has added to its roster at least one must-watch show for the culturati.

By now, anyone who’s actually cut the cable cord should be screaming out in unison: No one’s making you subscribe to all these services! You can pick the ones you care about most, rotate between services, or occupy your screen time with a growing number of other digital distractions. This point is so well-worn that I won’t belabor it anymore.

Instead, I want to focus on a more insidious contention in these pieces, which is that more competition among streaming services equates to less value for consumers. Like all the other bogus claims we’ve seen about cord cutting, this one just isn’t backed by evidence.

Competition: Still a good thing

Our misguided pundits all point to Disney’s streaming plans as a major loss for Netflix, and an example of how streaming services will become worse whenever they lose a licensing deal.

For what it’s worth, Disney movies were never a Netflix fixture; they only arrived on the service about a year ago. But Netflix has weathered this kind of loss before, only to come out stronger. Starz pulled its films from Netflix in 2012, and Epix departed in 2015, but along the way, Netflix has been dumping more money into original TV shows, movies, documentaries, and standup specials. People have rewarded Netflix for those decisions, as the service now has more than 50 million subscribers in the U.S. alone.

The race to build better streaming services has also made Netflix’s competitors stronger. JPMorgan estimates that Amazon will spend $4.5 billion on video licensing this year, and Hulu has said that its own spending will be close behind. Both services will have better catalogs as a result, without the annual price hikes you get from cable.

Multiple options for critically-acclaimed TV without cable: The horror!

But what about “fragmentation?” Isn’t it a problem that all these services are creating their own must-see TV? Not really, unless you believe a single company should have a monopoly on pop culture, or that older, licensed movies and shows are inherently more valuable than new creative works. If consumers actually felt that way, these services wouldn’t be drawing in ever-greater numbers of subscribers. (Also, doesn't the ability to choose between these services represent the type of "a la carte" TV people have been wanting for years?)

Meanwhile, the pundits braying about streaming being “as bad as cable” seem to be suffering from selective memory loss. When you look back to cable’s heyday 15 years ago, scripted original series were almost exclusively the domain of major broadcast TV networks, while cable was a hotbed of low-cost, low-quality reality TV. Without competition—first from premium channels like HBO, then from streaming services like Netflix—basic cable channels had little incentive to spend heavily on prestige TV. And even after cable channels got around to making better shows, subscribers still had to deal with commercials and DVR. On-demand, ad-free streaming is a clear leap forward.

I will concede that if you want to use multiple streaming services, trying to sift through them all can be confusing. But even this concern is blown entirely out of proportion by naysaying pundits, who seem to ignore solutions that already exist. Roku, Amazon Fire TV, and Apple TV all offer universal search across services like Netflix and Hulu, while features like Roku Feed and the Apple TV TV app demonstrate how system-wide browsing is getting easier. Besides, using a handful of apps to get what you want isn’t that burdensome—especially for the growing audience of people who’ve been raised on smartphones.

What’s behind these stories?

Whenever I call out these kinds of stories on Twitter, I get responses wondering whether there’s some kind of conspiracy at work, as if major media companies are forcing their publications to put out cord-cutting hit pieces. (I’ve seen some writers peddle this conspiracy theory as well.)

Instead, the best explanation is probably Hanlon’s razor. A writer who’s looking across the entire streaming business might get overwhelmed sorting it all out, which in turn germinates a story idea about how consumers will get confused. This in turn becomes a story based on preconceived notions, perhaps padded by a couple analysts and man-on-the-street interviewees willing to help make the argument.

Consumers, however, likely see things differently. They subscribe to one or two services—Netflix and Hulu, perhaps—and come to understand that they don’t need much more to satisfy their viewing needs. They may consider additional services on occasion, like HBO Now, or Amazon Prime, or whatever Disney comes up with in 2019, but they’re not trying to decipher the entire streaming landscape on a deadline. Instead, they’re enjoying what’s on TV and saving lots of money in the process.

Put another way, consumers are smarter than they’re getting credit for. That’s why cable subscriptions continue to plunge, even as these bogus stories keep popping up like clockwork.
https://www.techhive.com/article/321...surd-turn.html





98.5% of Unique Net Neutrality Comments Oppose Ajit Pai’s Anti-Title II Plan

Besides form letters, ISP-funded study finds almost no support for repealing rules.
Jon Brodkin

A study funded by Internet service providers has found something that Internet service providers really won't like.

The overwhelming majority of people who wrote unique comments to the Federal Communications Commission want the FCC to keep its current net neutrality rules and classification of ISPs as common carriers under Title II of the Communications Act, according to the study released today.

The study (available here) was conducted by consulting firm Emprata and funded by Broadband for America, whose members include AT&T, CenturyLink, Charter, CTIA-The Wireless Association, Comcast, NCTA–The Internet & Television Association, the Telecommunications Industry Association (TIA), and USTelecom.

Unique comments support current rules

When Emprata analyzed all 21.8 million comments, including spam and form letters, 60 percent were against FCC Chairman Ajit Pai's plan to repeal the Title II classification, and 39 percent supported the repeal plan. But the numbers shifted starkly in favor of keeping the Title II rules when excluding spam and form letters in order to analyze just unique comments written by individuals.

Emprata wrote:

[T]here are considerably more "personalized" comments (appearing only once in the docket) against repeal (1.52 million) versus 23,000 for repeal. Presumably, these comments originated from individuals that took the time to type a personalized comment. Although these comments represent less than 10 percent of the total, this is a notable difference.

That amounts to 98.5 percent of personalized comments supporting the current rules.
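
For readers who want to check that figure, here is a minimal back-of-envelope sketch in Python, using the rounded counts quoted above (so the result is approximate):

# Back-of-envelope check of the 98.5 percent figure, using the rounded
# counts quoted from the Emprata study.
unique_against_repeal = 1_520_000   # personalized comments opposing repeal
unique_for_repeal = 23_000          # personalized comments supporting repeal

total_unique = unique_against_repeal + unique_for_repeal
share_against = unique_against_repeal / total_unique
print(f"{share_against:.1%}")       # prints 98.5%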

Form letters constitute the majority of comments on both sides. This was especially pronounced in the case of anti-Title II comments:

The overwhelming majority of comments for and against repealing Title II are form letters (pre-generated portions of text) that appear multiple times in the docket. The form letters likely originated from numerous sources organized by groups that were for or against the repeal of Title II. Form letters comprise upwards of 89.8 percent of comments against Title II repeal and upwards of 99.6 percent of the comments for Title II repeal.

Group that funded study opposes Title II rules

Emprata said that it was contracted by Broadband for America "to perform an independent and unbiased analysis of the comment data received by the FCC in response" and that Emprata itself "does not have a vested interest in whether Title II is repealed or not."

Broadband for America provided a link to the study on its homepage. The statement on the group's homepage today did not mention the public's broad opposition to repealing Title II net neutrality rules, saying only that the "report by expert data analytics firm reveals unprecedented volume and clutter in the docket."

Broadband for America's homepage advocates for overturning the rules, saying that "Repealing Title II utility Regulations Will Strengthen the Internet—utility regulations deter investment in networks and put Internet jobs at risk."

ISPs support net neutrality despite their stance against using the FCC's Title II authority to enforce net neutrality rules, the group says.

"Internet providers practice net neutrality today and they always will. They even put it in writing," the group said. While the NCTA cable lobby group posted a full-page ad in The Washington Post saying they won't block or throttle Internet content, the group's members did not put that pledge into binding contracts with customers.

The NCTA recently conducted a survey that found strong public support for net neutrality rules.

Former FCC official Gigi Sohn, who played a role in crafting the current rules, tweeted that the FCC "needs to do its own analysis of net neutrality comments, not rely on [a] study funded by Comcast, AT&T, and the broadband industry."

Fake e-mails and duplicate comments

The Emprata study shed more light on comments from artificial e-mail domains and international addresses. Comments from fake e-mails generally opposed Pai's plan to overturn the Title II common carrier classification.

"More than 7.75 million comments... appear to have been generated by self-described 'temporary' and 'disposable' e-mail domains attributed to FakeMailGenerator.com and with nearly identical language," Emprata wrote. "Virtually all of those comments oppose repealing Title II. Assuming that comments submitted from these e-mail domains are illegitimate, sentiment favors repeal of Title II (61 percent for, 38 percent against)."

There were 9.93 million duplicate comments from submitters listing the same physical address and e-mail. "This was more prevalent in comments against repeal of Title II (accounting for 82 percent of the total duplicates), with a majority of duplicate comments associated with e-mail domains from FakeMailGenerator.com," Emprata wrote.

There were also 1.72 million comments with non-US home addresses, and nearly all of those (99.4 percent) oppose repealing the Title II classification, the study noted.
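
For readers curious what that kind of filtering looks like in practice, here is a minimal, hypothetical sketch in Python. It is not Emprata's actual pipeline; the file name, column names, and domain list are placeholders. The idea is simply to drop comments from disposable e-mail domains, collapse exact duplicates, and re-tally sentiment:

# Illustrative sketch only; not Emprata's actual methodology.
# Assumes a hypothetical CSV export of the docket with columns:
# email, address, stance ("for_repeal" or "against_repeal").
import pandas as pd

comments = pd.read_csv("fcc_docket_17_108_comments.csv")   # hypothetical file

# Drop comments from throwaway e-mail domains (placeholder list; the study
# matched domains attributed to FakeMailGenerator.com).
disposable_domains = {"example-disposable.test"}
domain = comments["email"].str.split("@").str[-1].str.lower()
filtered = comments[~domain.isin(disposable_domains)]

# Collapse duplicates that share the same e-mail and physical address.
filtered = filtered.drop_duplicates(subset=["email", "address"])

# Re-tally sentiment on what remains.
print(filtered["stance"].value_counts(normalize=True))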

As we've previously reported, the net neutrality docket appears to have been targeted by numerous spam bots that falsely attribute comments to people whose names and addresses were pulled from data breaches.

The deadline for submitting comments is today. Pai has indicated that the raw number of comments opposing his plan will not cause him to change his mind.
https://arstechnica.com/tech-policy/...trality-rules/





Cable Industry’s Own Study Shows their Plan to Kill Net Neutrality is as Unpopular as Ever

FOR IMMEDIATE RELEASE, August 30, 2017
Contact: Evan Greer, 978-852-6457, press@fightforthefuture.org

While the FCC’s refusal to address cybersecurity issues and fake comments creates intentional confusion around the data, unique comments are overwhelmingly in favor of Title II net neutrality—by more than 73 to 1

Today, the telecom industry is touting a study funded by cable lobby group Broadband for America regarding the millions of comments submitted to the FCC’s public docket surrounding the agency’s plan to gut Title II net neutrality rules that prevent companies like AT&T and Verizon from charging extra fees, throttling apps and services, and censoring online content.

The most telling statistic in the report is that the unique comments in the docket – the ones that people took the extra time to write themselves – are overwhelmingly in favor of Title II net neutrality protections, by more than 73 to 1.

So the telecom industry’s own study essentially shows what nearly all other polling on this issue has shown: that they are getting trounced when it comes to public opinion, and people from across the political spectrum overwhelmingly agree that they don’t want their ISPs to have control over what they can see and do on the Internet.

The report also highlights that the data in the FCC docket is a mess. Fight for the Future has been working with a group of tech volunteers to analyze this data as well, and will be releasing our findings soon.

Much of the reporting on this study draws a false equivalence between real comments from real people gathered through grassroots activism campaigns with massive public participation – and completely fraudulent comments that use names and addresses from breached databases, or completely fake information.

Sadly, this confusion appears to be by design. Under Ajit Pai’s leadership, the FCC has repeatedly refused to address serious cybersecurity and transparency issues surrounding their public comment process, from the now debunked claims of DDoS attacks to the confirmed fraudulent comments that the agency won’t remove from its docket.

The agency is sabotaging the legitimacy of its own proceeding in a cynical attempt to spread confusion about something that is actually as clear as day: the majority of the voices calling for the end of net neutrality protections are those bought and paid for by the industry that stands to gain unprecedented control of our online experience if they succeed in rigging the game and gutting these consumer protections at the FCC or through bad legislation billed as a “compromise.”
https://www.fightforthefuture.org/ne...their-plan-to/





AT&T Absurdly Claims that Most “Legitimate” Net Neutrality Comments Favor Repeal

AT&T ignores finding that 98.5% of unique comments favor net neutrality rules.
Jon Brodkin

Despite a study showing that 98.5 percent of individually written net neutrality comments support the US's current net neutrality rules, AT&T is claiming that the vast majority of "legitimate" comments favor repealing the rules.

The Federal Communications Commission's net neutrality docket is a real mess, with nearly 22 million comments, mostly from form letters and many from spam bots using identities stolen from data breaches. AT&T is part of an industry group called Broadband for America that just funded a study that tries to find trends within the chaos.

As we wrote earlier today, that study (conducted by consulting firm Emprata) found fewer than 1.6 million filings appear to have "originated from individuals that took the time to type a personalized comment." Of those, 1.52 million were against FCC Chairman Ajit Pai's plan to repeal the current Title II net neutrality rules, while just 23,000 were in favor of repeal.

Let's contrast that finding with what AT&T Executive VP Joan Marsh wrote in a blog post today:

While Title II proponents may claim that millions of consumers representing the large majority of commenters support Title II, in fact, most of these comments were not legitimate. And when only legitimate comments are considered, the large majority of commenters oppose Title II regulation of Internet access.

In related comments filed with the FCC today, AT&T accused Title II proponents of "stuff[ing] the... ballot box with millions upon millions of sham 'comments.'"

AT&T analysis doesn’t account for spam bots

Marsh's blog post does not refer directly to the Emprata study. Instead, it links to a Multichannel News article that describes Broadband for America's interpretation of the study. The broadband industry lobby group argues that 69.9 percent of legitimate comments support repealing the rules.

After linking to that article, Marsh writes that most comments on the FCC docket "appear to us to be fraudulent. Millions of comments were generated using phony e-mail addresses. Millions of others were generated using duplicative e-mail or physical addresses. And still others originated overseas. Consider this: nearly 450,000 comments were filed using Russian addresses, all but four in support of Title II regulation of Internet services."

That all appears to be true, but that doesn't prove that the majority of legitimate comments oppose the current Title II net neutrality rules.

Marsh reasonably excludes millions of comments that appear to have been submitted with fake contact information, such as e-mails with artificial domains or ones that were submitted multiple times using the same contact information. But the docket is also filled with spam comments falsely attributed to people whose names and addresses were taken from data breaches. These comments would appear to come from real people despite being fraudulent, yet AT&T did not mention this problem.

Most comments on the docket are based on form letters, i.e. blocks of text pre-written by advocacy groups and ostensibly submitted by real people who agree with the sentiment.

Form letter comments are often legitimate expressions of support or opposition to the current rules, but many of them were also submitted by spam bots, not the actual people whose names are attached to the comments. One anti-net neutrality form letter comment whose spread was attributed at least partly to a bot has appeared on the docket more than 800,000 times under various names, for example.

People who say their names and addresses were attached to anti-net neutrality comments without their permission asked the FCC to remove fraudulent comments from the docket, but the FCC has not done so. Given that these comments were submitted under real people's names and addresses, they would apparently count as "legitimate" opposition to net neutrality rules in AT&T's analysis. The Emprata study examined fake e-mail addresses and duplicate addresses, but did not mention the problem of identities from data breaches being used to fake opposition to net neutrality rules.

Conclusions depend on your definition of “real”

Today is the final day of the public comment period on the FCC proposal to eliminate the Title II common carrier classification of broadband providers and repeal or replace the net neutrality rules against blocking, throttling, and paid prioritization.

The anti-Title II crowd relied on form letters more heavily than the pro-Title II crowd; 99.6 percent of anti-Title II comments came from form letters while 89.8 percent of pro-Title II comments were based on form letters, according to Emprata.

AT&T led a form-letter campaign itself; the company claimed to be in favor of net neutrality rules while trying to convince its customers to support repealing the current rules.

A tally of all 21.8 million comments evaluated in the Emprata study, regardless of their origin, found that 60 percent supported keeping the current rules and 39 percent supported repealing them. Emprata noted that the results shift in the other direction (61 percent against the current rules) if you exclude comments with fake e-mail addresses.
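
A rough reconstruction in Python shows how excluding that one bloc flips the split; the inputs are the rounded figures quoted in this article, so the outputs only approximate the study's reported 61/38 result:

# Approximate reconstruction of the headline split, using rounded figures
# quoted in this article.
total = 21_800_000
against_repeal = 0.60 * total       # comments favoring the current rules
for_repeal = 0.39 * total           # comments favoring repeal

# Emprata attributes roughly 7.75 million comments, virtually all of them
# against repeal, to disposable e-mail domains; exclude that bloc.
fake_email_bloc = 7_750_000
against_adjusted = against_repeal - fake_email_bloc
remaining = against_adjusted + for_repeal

print(f"for repeal: {for_repeal / remaining:.1%}")            # about 61.5%
print(f"against repeal: {against_adjusted / remaining:.1%}")  # about 38.5%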

But the exclusion of fake e-mail addresses doesn't guarantee that all of the remaining comments are legitimate. That's because of the spam bot problem we just described.

Emprata notes that "it is very difficult to draw any definitive conclusions from the comments found in the docket. Any conclusions that one might draw from the data would be based on the subset of data that [one] considered to be 'real.'"

Net neutrality advocacy group Fight for the Future argues that the most "real" comments are the ones written by individuals. Fight for the Future has led form letter submission campaigns itself, but recognizes that it takes more effort to write one's own comment.

"The most telling statistic in the report is that the unique comments in the docket—the ones that people took the extra time to write themselves—are overwhelmingly in favor of Title II net neutrality protections, by more than 73 to 1," the group said.
https://arstechnica.com/tech-policy/...-favor-repeal/





Exposé: AT&T California Fiber Optic Scandal: Billions Charged for Broadband that Never Showed Up.
Bruce Kushnick

Read the excerpt: The History of Fiber Optic Broadband in California, 1993-2005

Pacific Telesis (now AT&T California), 1994 Investor Fact Book, Excerpt 1.

In 1993, Pac Bell California announced it would be spending $16 billion on fiber optic upgrades to 5.5 million homes by the year 2000; 1.5 million by 1996. This was 24 years ago. This page is from the Pacific Telesis 1994 Investor Fact Book.

Virtually no one knows the history of fiber optic broadband in America, much less what happened in their state, even though they were charged thousands of dollars per household. Instead, in 2017, we get embarrassing proposed laws, such as SB-649 in California, which claims that if the State just frees the companies from regulations, they will deliver new, ‘fabulous’, broadband wireless services. These are tied to other bills and new proposed regulations, including current FCC proceedings to ‘shut off the copper’ and replace it with wireless. It is time for investigations, not new gifts to AT&T et al.

I’ll get back to this in a moment.

In Part 1 we detailed:

• The price of the basic AT&T California state utility phone service went up 138% from 2008-2016. And this was just the basic service; other parts of the service like ‘nonlisted numbers’ or directory assistance calling went up 525% to 1891%, since 2004. Calling features, like Call Waiting, which went up 240%, are just pure profit and cost less than a penny to offer.
• The companies manipulated the accounting of access “landlines”. While there have been declines in basic wired phone service, it turns out that the exact same copper wires are also used for U-verse, a copper-to-the-home service based on the existing utility wires. There are other services as well, from DSL, an older copper-based broadband service, to Business Data Services (also known as ‘special access’ services), which are the wires to ATM machines or the fiber wires that go to the cell sites, and none of these other wires are ever mentioned or even counted by the phone companies, the FCC or the State.
• The State admits that it doesn’t collect or examine basic data about the companies’ financials anymore. The California PUC is simply allowing the companies to be ‘deregulated’, which is now just the punchline to a bad joke as it always means – give more money to AT&T, smile, then do it again.
• This new proposed bill is just a put-on job: Worse, like talking to someone with Alzheimer’s who can’t remember yesterday, much less the last few hours, there is a proposed piece of legislation, SB-649, in the California State Assembly that is based on wireless ‘vaporware’ and the same false claims—if only we get rid of pesky regulations and preempt any zoning or customer challenges, we will get a fabulous broadband wireless future, “5G”. It doesn’t exist yet, and whatever shows up will require a fiber optic wire and will have the range of a ‘small cell’, a city block or so.
• Moreover, the state senators that are proposing the bill have not only gotten campaign contributions from AT&T, but also get AT&T Foundation grant monies for their districts.

Part 1 concluded that what is really needed are investigations and audits of AT&T’s financial books, now. There is an investigation of Verizon New York by the NY State Public Service Commission that corroborated our findings; that there are massive financial cross-subsidies between and among the wireline state utility and all of the other Verizon affiliates, including the wireless company. AT&T California appears to be doing the same financial chicanery.

The Fiber Optic Future that Never Came. The First Wave.

California was supposed to be a fiber optic state, starting in 1996, as was pointed out in the Pacific Telesis (now AT&T California) 1994 Investor Fact Book. Instead, customers paid billions upon billions extra, including low income families, seniors, small businesses and everyone else, as these promises of a broadband-internet–cable competition future were used over and over to raise rates and get tax perks – and then never showed up.

There have been three major waves of false claims to get rate increases and deregulation, and a host of other plays including merger conditions, the “IP Transition”, and rural government broadband funding, among others, and these happened on both the state and federal level.

• The Information Superhighway and Competition, 1993-2004
• The U-verse Bait and Switch, 2005-
• The latest: The current wireless small cell, 5G, vaporware.

The Information Superhighway and Competition, 1993-2004

Pacific Telesis 1994 Fact Sheet, Excerpt 2:

This is a page from the same financial report and it details what parts of California were supposed to be upgraded by the year 2000. This pretty much covers most of the state.

Backdrop: America was to become a Fiber Optic Powerhouse, 1991

In 1991, soon-to-be Vice President Al Gore proposed the Information Superhighway, which was to replace the aging copper wires with fiber optics – and ALL of America was to be completed around 2010. And while there were proposals to have the government build it, every phone company in America screamed that they would be glad to do this for the good of America.

And they all went state-to-state to get changes in the laws to raise rates or not have the profits of the ‘calling features’ and ancillary services examined, and they all got major tax breaks as well.

And in California, as the excerpt discussed, now-AT&T-California had the laws changed to fund this upgrade.

Pacific Bell also took a $3.6 billion tax deduction, which was directly tied to this ‘communications info-highway’, and the company received what would be billions extra in profits that were supposed to be used for this new construction.

And this and previous announcements, such as upgrading schools – started the process of ‘price caps’ – i.e., not examining the profits but keeping the basic service ‘fair and reasonable’, while everything else was allowed to have obscene returns.

You’re not going to like what happened next if you’re in California.

Pacific Telesis, which was California and Nevada, was one of seven ‘Baby Bells’ that were created in 1984 when “Ma Bell”, the original AT&T, was ‘broken up’. The state utilities were put into seven new regional companies that all wanted to be just like Ma Bell.

Southwestern Bell (SBC), the Bell company that controlled Kansas, Texas, Arkansas, Oklahoma and Missouri, decided to expand and merged with Pacific Telesis in 1996, and it was a disaster. SBC, controlled by the ‘Texans’, was a slash-and-burn company that just wanted profits—so they stopped what little building had trickled in at the start, and then the hatchet fell. (SBC included Kansas, FCC Chairman Pai’s home state. It’s a shame he never mentions the failed fiber optic plan called “TeleKansas”.)

The San Diego Tribune created a timeline to broadband hell—“A Plan that Failed.”

In 1995, the California state laws were partially ‘deregulated’ (as well as earlier deregulations also based on tech deployments), and the company’s profits after the law went from 22% return on equity in 1995, to 46% in 1996.

And yet, as this timeline details, they stopped building out this fiber optic future, so why were there no ‘refunds’, or a return to reducing the massive new profits being accrued? Thus, starting in 1995, a cascade of deregulation was unleashed that has, more or less, been built into local rates and all ancillary services. All customers who had service, including low income families, seniors, and businesses, paid extra.

Some will say that this law was based on ‘competition’ coming into the market, due to the Telecom Act of 1996. Before 1996, the wires of the utility were closed to competition. In exchange for opening the networks, the Bell companies were allowed into other lines of business, including long distance, online services, etc.

But with a memory that lasts only until the next press release, the fiber optic future faded from view. In fact, the Internet and World Wide Web showed up around this time, and people were happy to get dial-up internet service at 64 kbps (1,000 kbps = 1 Mbps).

Merge the Companies; Purge the Fiber Plans

SBC (now AT&T) was on a roll. They would next buy SNET, an independent company that controlled Connecticut. It had committed to spending $4.5 billion to have the state completed with fiber by 2007. SBC closed down what was being built a year or two after the merger.

SBC next went to Ameritech, which controlled the Midwest states: Illinois, Indiana, Michigan, Wisconsin and Ohio. All of the states had plans for fiber optic upgrades. In this merger, SBC claimed it would be spending $6 billion on a fiber optic deployment called “Project Pronto” and would compete out-of-region in 30 cities.

You already know the punchline. Whatever had been built, starting in 1996, was sold off to a small cable company, “WOW”, and SBC never competed outside its wired territories. All of these states had rate increases via this ‘alternative regulation’ deregulation to pay for new networks that did not show up.

And after a series of FCC actions which closed the networks to competition, SBC would buy AT&T and then change its name to—AT&T.

The Fiber U-verse Bait-and-Switch

But it got a lot worse. This is a quote from former FCC Chairman Michael Powell (now the head of the NCTA, cable association) about why he voted to kill off competition by blocking competitors from using the networks. At the time AT&T was a separate company and was the largest wireline local and long distance competitor, as well as one of the largest internet companies.

Powell gives his reason for closing the networks based on ‘commitments’ for 100 Mbps, fiber-optic based services by SBC (now AT&T), October 2004.

“In my separate statement to the Triennial Review Order and in countless other statements during my seven years at the Commission, I have emphasized that ‘broadband deployment is the most central communications policy objective of our day’. Today, we take another important step forward to realize this objective.... By removing unbundling obligations for fiber-based technologies, today’s decision holds great promise for consumers, the telecommunications sector and the American economy. The networks we are considering in this item offer speeds of up to 100 Mbps and exist largely where no provider has undertaken the expense and risk of pulling fiber all the way to a home.

“SBC has committed to serve 300,000 households with a FTTH network while BellSouth has deployed a deep fiber network to approximately 1 million homes. Other carriers are taking similar actions.” (Emphasis added)

Note: According to the FCC, “FTTH” is “Fiber to the Home”, where “the fiber optic wire starts at the customer’s location”. “FTTC”, “Fiber to the Curb’” was defined as 500 feet from a customer’s premises.

“In granting such relief, we first define FTTC loops. Specifically, a FTTC loop is a fiber transmission facility connecting to copper distribution plant that is not more than 500 feet from the customer’s premises.”

AT&T’s U-verse is not ‘fiber-to-the-home’ or curb but is actually ‘copper-to-the-home’ and uses the existing, aging copper wires that are part of the state utility, while the fiber is somewhere within ½ mile, not 500 feet from the home.

The Second Wave of Broadband Scandal: The U-verse Reverse and Misdirection

With the FCC helping, in 2006 California changed the laws and deregulated now-AT&T California, which is why rates jumped up. This was based on the statewide franchise that most believed was based on fiber optic deployments.

And this happened in every AT&T state as far as we can tell. Promise a ‘statewide franchise’ for broadband and cable competition, claim it is fiber, and get rate increases.

And the misdirection was everywhere. AT&T essentially deceived every state and federal regulator, lawmaker, the press and even the public. In 2013, I highlighted links to then-current AT&T U-verse information that could be found with any search engine, where they define U-verse as “fiber optic”. (Most of the original links changed the content by 2017.) Here are a few of the original statements.

• AT&T U-verse Fiber Technology: “Learn how AT&T is taking the fiber optics within our network and turning it into the vehicle that’s delivering all your entertainment to your television, computer and phone.”
• Welcome to the Evolution of Digital TV, Internet, and Voice: “AT&T U-verse® includes fiber optic technology and computer networking to bring you better digital TV, faster Internet, and a smarter phone. Bring it all together by customizing your own bundle now.”
• “Save with AT&T U-verse® Bundles AT&T U-verse®: “Better DVR, Better Features, A Better Experience. The universe is at your fingertips with AT&T U-verse®. U-verse is an exciting new AT&T product that uses fiber optic technology and computer networking to bring you advanced digital television, high speed Internet and digital home phone service.”
• What is AT&T U-verse?: “AT&T U-verse is a suite of services and products that primarily consist of internet, television and phone services. These services are delivered using highly advanced fiber-to-the-node and fiber-to-the-premises technologies.”

And there are tens of thousands of web places, blogs, etc. that repeat this hype.

• What is U-verse? “ConnectMyHighSpeed,” “AT&T has combined fiber optic technology and computer networking to offer U-verse, providing faster Internet, telephone and television services.”
• See Details: Geographic and service restrictions apply to AT&T U-verse® services. Call to see if you qualify. “Fiber optics may apply to all or part of the network, depending on your location.”

Where did it say anywhere on any page that U-verse is based on the copper-utility wiring? Nowhere.

Raise Rates Based on U-verse, but do Not Call It a “Cross-Subsidy”.

Watch closely.

In 2006, the California cable franchise went through – based on AT&T claiming U-verse is fiber – and the state laws were again changed to give the companies more money via rate increases, starting with ancillary services. In the 2015 proceeding, “Order Instituting Investigation into the State of Competition Among Telecommunications Providers in California”, the State details that at the same time the statewide franchise was being implemented, the State further deregulated rates—i.e., started the process of raising prices on lots of services, everything except the basic rate to start.

“On August 24, 2006, we issued ‘URF I’… This decision removed many of the rules that had governed the prices and operations of the largest incumbent telecommunications carriers (ILECs).” (ILECs are AT&T and Verizon.)

“And accordingly, in URF I, we eliminated price restrictions for all but residential services, and granted the large ILECs (sometimes referred to as “URF carriers”) broad pricing freedoms across almost all telecommunications services, including new telecommunications products, bundles of services, promotions and contracts.

“We also permitted carriers to add services to “bundles” and to target services and prices to specific geographic markets, thus permitting geographically de-averaged pricing, which the Commission had previously not allowed. URF I also eliminated previously applicable New Regulatory Framework-specific monitoring reports.”

In English, they deregulated the companies based on what they had wanted—more profits and less regulation.

The 5th Annual DIVCA (Digital Infrastructure and Video Competition Act) Report claims that the rate increases do not necessarily mean that there are cross-subsidies.

“This report presents the video and broadband service findings relating to California state-issued video franchisees that must be reported annually to the Legislature pursuant to the Digital Infrastructure and Video Competition Act of 2006 (DIVCA).

“DIVCA prohibits state issued franchisees that provide stand alone residential primary line basic telephone service from increasing their rate for such service to finance the cost of deploying a network to provide video service. A previous California Public Utilities Commission (CPUC or Commission) decision, which prevented AT&T and Verizon from raising their rates, expired and since 2011 both have raised their basic rates. This alone does not necessarily mean that cross-subsidization has occurred.” (Emphasis added)

This says that until 2011, the ‘basic’ rate, which is one line item out of many on the bill, was not allowed to be increased, while the other parts of the service had been allowed to be ‘freed’ by 2006.

And the State, of course, claimed that since they erased all the accounting, they can’t tell if the rate increases were used to pay for the U-verse deployments.

“The fact that AT&T raised rates for basic service beyond the levels authorized... or that Verizon may do so in the future does not prove or disprove that residential basic services are cross subsidizing a network used to provide video service.

“To make this determination significant analysis is required. Revenues for residential basic service, video service and other services that use the shared network to provide video service would need to be compared to their respective costs. The Commission would need to audit those costs to ensure they have been accurately assigned to each service. Such an audit would be onerous as it would require the Commission to perform a cost of service analysis, which has not been performed in decades, since the Commission adopted its New Regulatory Framework and established price caps to replace cost of service regulation.”

No audits for decades? Really?

Had enough? Oh, but there’s more.

AT&T’s 100% Coverage of Broadband in 22 States.

After SBC bought AT&T and changed its name to AT&T, it decided to merge with BellSouth, one of the other remaining Baby Bells (which was also supposed to be bringing fiber to neighborhoods). It controlled the southeast, including Florida, Louisiana, Tennessee, Kentucky, and South Carolina, among others.

And this excerpt is one of the merger conditions. AT&T was to have completed upgrading 100% of AT&T’s territories in 22 states (including California) with broadband, albeit slow, but broadband, by the year 2007.

(At this time, 200 kbps was the official broadband speed, as defined by the FCC.)

And yet, we find that AT&T has lots of ‘unserved’ areas and has been getting government funding to build out areas that, it would seem, should already have been finished with basic broadband by 2007.

• February 20, 2009: This Resolution adopts funding for four (4) AT&T California (AT&T) broadband projects in unserved areas totaling $216,832 from the California Advanced Services Fund (CASF)
• $428 million a year from the federal government: Connecting Rural and Underserved Communities
• “We are committed to using a variety of technologies to expand internet access to more locations. To help meet the needs of customers in largely rural areas and expand the opportunities enabled by internet access, AT&T participates in the FCC’s Connect America Fund Phase II (CAF II) program. By the end of 2020, AT&T will have used funds from the program to deploy, maintain and offer internet access and voice service to 1.1 million mostly rural homes and small business locations in FCC-identified areas.”
• AT&T California is getting over $60 million a year out of AT&T’s annual $428 million, according to the FCC.

A recent Haas Institute study had a number of disturbing findings about California.

• “Rural California is left behind by AT&T. In 14 largely rural counties, virtually no household has access to AT&T broadband at the FCC’s 25/3 Mbps speed and one-third or more households are underserved without access to AT&T broadband at 6/1.5 Mbps.”
• “Many urban and suburban Californians are stuck in AT&T’s slow lane. AT&T’s slow speeds are not limited to rural areas. In Los Angeles county, for example, approximately 443,000 households (20.4 percent) in AT&T’s wireline footprint lack access to AT&T broadband at 6/1 Mbps and approximately 1.1 million households (51.5 percent) lack access to AT&T broadband at 25/3 Mbps.”

In fact, we filed a complaint against AT&T at the FCC claiming that the company may have committed perjury; after one article ran, we were deluged with emails from people saying AT&T had never shown up with broadband. We know of no audit or investigation in any state of how many ‘unserved areas’ appear to have never been served, especially by 2007, even though there was a commitment, in writing, to serve them, and the FCC and the states simply rubber-stamped the AT&T-BellSouth merger.

And note that the rate increases hit all phone customers, even those in rural areas that were never served.

Conclusion: Halt SB-649 and Start Investigations.

And then we get this op-ed by Assemblymember Bill Quirk. He and State Senator Ben Hueso both get not only campaign contributions from AT&T, but also monies via AT&T’s Foundation for their districts.

“Senate Bill 649, by Sen. Ben Hueso, D-San Diego, and Bill Quirk, would establish a standardized, expedited process for statewide deployment of the equipment necessary to power 5G, the most advanced wireless technology ever to come to market.

“SB 649 will help families and businesses gain access to a technology that will reshape modern life. The 5G technology will support smart cities, improve public safety, and provide environmental and economic gains. Put simply, SB 649 is vital to California’s future and deserves support.”

If 5G has a range of 1-2 city blocks and requires fiber optic wires to be installed, and this is to happen statewide—I got a bridge I’ll sell ya… cheap.

The current plan is to replace the retail wires with wireless, and this is the next wave of false claims designed to get rid of regulations. This bill needs to be halted, and investigations need to start immediately into how much money AT&T has received in the name of broadband over the last two-plus decades and what the State got for the money.
http://www.huffingtonpost.com/entry/...b0b234aecad1c7





Kansas City Was First to Embrace Google Fiber, Now Its Broadband Future Is 'TBD'

The dream of a fully-connected gig city may never come to be.
Kaleigh Rogers

Broadband Land is an ongoing Motherboard series about the digital divide in America.

The shared interests of Kansas City, Kansas, and Kansas City, Missouri, pretty much start and end with the Royals. The cities, which straddle the state line, have often served as a microcosm for the two states' longstanding sports and economic rivalries. But when the opportunity arose in 2011 to become the first community to pilot Google Fiber, the two cities put aside differences to reel Google in.

Now, five years after hooking up its first Kansas City customers, expansion of Google Fiber—the tech giant's foray into internet service provision, offering gigabit per second download and upload speeds—has come to a screeching halt.

Thousands of customers in KC who had pre-registered for guaranteed service when Fiber made it to their neighborhood were given their money back earlier this year, and told they may never get hooked up. Fiber cycled through two CEOs in the last 10 months, lost multiple executives, and has started laying off employees. Plans to expand Fiber to eight other American cities were halted late last year, leaving the fate of the project up in the air. I recently asked Rachel Hack Merlo, the Community Manager for Google Fiber in Kansas City, about the future of expanding the service there, and she told me it was "TBD."

Kansas City expected to become Google's glittering example of a futuristic gig-city: Half a decade later, there are examples of how Fiber benefitted KC, and stories about how it fell short. Thousands of customers will likely never get the chance to access the infrastructure they rallied behind, and many communities are still without any broadband access at all. Many are now left wondering: is that it?

"We were saying that in all likelihood this is too good to be true," said Isaac Wilder, co-founder of the Free Network Foundation and a Kansas City native. A few years ago, after Motherboard first met Wilder in New York City during Occupy Wall Street protests, he and the FNF partnered with a non-profit organization to set up free community mesh networks in low-income neighborhoods around KC.

"Lo and behold, just a few years later and it's beginning to become clear that [Google Fiber] was just a lot of lip service," Wilder told me.

When Google zeroed in on Kansas City, there was so much hype that no one seemed to consider whether this was a good financial deal for the city, Wilder said. The city agreed to waive millions of dollars in right-of-way fees to allow Google to start laying fiber in the city, for example.

But Google also made an effort to address the digital divide in Kansas City, said Carrie Coogan, director of public affairs for the KC public library, and the chair of the Kansas City Coalition for Digital Inclusion. With Fiber, Google offered broadband for free to residents in affordable housing, and created a digital inclusion fellowship.

"[The library has] had the benefit of having a digital inclusion Google fellow for that past year and I can't tell you what a difference it has made," Coogan said. "Having a dedicated person for this work really meant a lot for us, and she has made huge strides."

Many in Kansas City's startup scene credit Google Fiber's entrance with bringing new attention to KC as a tech city, too. But Wilder—who is now based in Berlin—and Coogan both noted that Google has now slowed down these initiatives. The fellowship, for example, has ended, and Google Fiber has stopped expanding throughout the city.

Google Fiber's Kansas City spokesperson wouldn't tell me the exact number of customers it's hooked up in KC, and was vague about the program's future.

"We hear loud and clear from communities in the region who are interested in talking with us, but for now we're heads down in innovation that will help us to do this business in a way that's maintainable for the long haul," said Hack Merlo.

Meanwhile, 70 percent of children in the Kansas City Missouri School District still do not have internet access in the home, according to recent surveys, and 28 percent of Kansas Citians who don't use the internet have said access is the main reason why. To be fair, Google never promised to eliminate the digital divide in Kansas City, but Wilder wondered why the local government invested so much to attract Google and its whip-fast internet for startups, when some communities didn't have any internet at all.

"It was a bit like if the public transit system said, 'instead of buying buses, we're going to buy a bunch of Teslas,'" Wilder said. "There are people that have nothing. Why are you hyping up this ultra speed thing?"

On a national scale, Google Fiber expanded to a total of 10 mid-sized cities across the country, but that is still "not even a blip" in the national broadband market, said Jan Dawson, the chief analyst at Jackdaw Research, which focuses on telecommunications. Dawson told me Google always seemed to view Fiber as an experimental project, one that perhaps wasn't as profitable as the company had anticipated, which would explain why it's started to scale back.

Ultimately, Google made a massive investment in the cities' infrastructures, one that will serve the communities for decades to come, according to Joanne Hovis, a technology and energy consultant who helps local governments figure out the best solutions for expanding broadband. This expansion, and the relatively affordable service, forced other internet service providers to step up to the plate and offer faster speeds for less. In this sense, Hovis said it can only be viewed as a success.

"This is the Holy Grail of infrastructures because fiber is capable of anything, it can scale up to include technology we can't even imagine yet," Hovis told me. "From an economic standpoint, it would have been deeply foolish to turn down what Google offered Kansas City. It's incredible disappointing that Google has decided to slow its rate of growth, but its impact has undeniably been tremendous."

While Google Fiber might not be the white knight that will hoist Kansas City into true digital inclusion or gig city status, it did plant a seed, according to Coogan. She told me people have had a glimpse of what KC can once again become: a hub of innovation and entrepreneurism. It might not be on Google's track, but after getting a taste of the future, people like Coogan are hopeful the community will be able to go the rest of the way on its own.
https://motherboard.vice.com/en_us/a...-future-is-tbd





Comcast Sues Vermont to Avoid Building 550 Miles of New Cable Lines

Vermont is trying to make Comcast bring TV and Internet to unserved areas.
Jon Brodkin

Comcast has sued the state of Vermont to try to avoid a requirement to build 550 miles of new cable lines.

Comcast's lawsuit against the Vermont Public Utility Commission (VPUC) was filed Monday in US District Court in Vermont and challenges several provisions in the cable company's new 11-year permit to offer services in the state. One of the conditions in the permit says that "Comcast shall construct no less than 550 miles of line extensions into un-cabled areas during the [11-year] term."

Comcast would rather not do that. The company's court complaint says that Vermont is exceeding its authority under the federal Cable Act while also violating state law and Comcast's constitutional rights:

The VPUC claimed that it could impose the blanket 550-mile line extension mandate on Comcast because it is the "largest" cable operator in Vermont and can afford it. These discriminatory conditions contravene federal and state law, amount to undue speaker-based burdens on Comcast's protected speech under the First Amendment of the United States Constitution... and deprive Comcast and its subscribers of the benefits of Vermont law enjoyed by other cable operators and their subscribers without a just and rational basis, in violation of the Common Benefits Clause of the Vermont Constitution.

Rival providers Charter and Burlington Telecom don't have to comply with these special requirements, Comcast said. Instead, the other companies "need only comply with the non-discriminatory line extension policies" established in a VPUC rule.

Comcast's complaint also objected to several other requirements in the permit, including "unreasonable demands" for upgrades to local public, educational, and governmental (PEG) access channels and the building of "institutional networks ("I-Nets") to local governmental and educational entities upon request and on non-market based terms."

The requirements will raise prices for Comcast customers, the company argues. "Together, these contested conditions would impose tens of millions of dollars in additional regulatory costs and burdens on Comcast and its Vermont cable subscribers," Comcast wrote.

Comcast often refuses to extend its network to customers outside its existing service area unless the customers pay for Comcast's construction costs, which can be tens of thousands of dollars.

Vermont defends cable expansion requirement

Comcast previously asked the VPUC to reconsider the conditions, but the agency denied the request. (Vermont Public Radio posted the documents that we've linked to and published a story on the lawsuit yesterday.)

Comcast entered Vermont by purchasing Adelphia in 2005, despite already being aware of state procedures that ascribe great importance "to building out cable networks to unserved areas to meet community needs," the VPUC's denial said.

"However, Comcast presented no evidence in this proceeding that previously identified community needs and interests for cable line extensions to unserved areas were no longer as important as in the past or could be adequately met through compliance with [VPUC's line extension rule]," the denial said. "Based on the evidence, the Commission found that the... line extension requirements were supported by the needs and interests of the state to expand the availability of service in unserved areas of Vermont."

The commission determined that the 550-mile buildout "will not impair Comcast’s ability to continue to earn a fair and reasonable return on its investments." The commission considered factors "includ[ing] the historic rate of line extensions in the service area, prior construction budgets for line extensions, and the profitability of Comcast’s cable operations in Vermont currently and while it was completing significant line extensions in Vermont [in previous years]."

Comcast's complaint asks the court to declare the state-imposed conditions unlawful and prevent Vermont from enforcing them.
https://arstechnica.com/tech-policy/...w-cable-lines/





Rural America Is Building Its Own Internet Because No One Else Will

Big Telecom has little interest in expanding to small towns and farmlands, so rural America is building its own solutions.
Kaleigh Rogers

Dane Shryock walked over to a map hanging on the wall of the county commissioners' office in downtown Coshocton. He ran his finger along a highway to point out directions to a family farm, where he told me I'd find an antenna placed atop a tall blue silo.

"You're going to want to go straight down 36, turn left on this county road," Shryock, one of three county commissioners in Coshocton, Ohio, said. "There's a cemetery on the left, and then you'll see a big red barn."

I snapped a photo of the map. The old-school directions were necessary because the address doesn't exactly show up on Google Maps and, besides, my phone lost all signal after about the third hill on that county road. It was a blistering hot July day in Appalachian Ohio and I was on a mission to see firsthand how rural communities have stopped waiting for Big Telecom to bring high-speed internet to them and have started to build it themselves.

About 19 million Americans still don't have access to broadband internet, which the Federal Communications Commission defines as offering a minimum of 25 megabits per second download speeds and 3 Mbps upload speeds. Those who do have broadband access often find it's too expensive, unreliable, or saddled with prohibitive data caps that make it unusable for modern needs.

In many cases, it's not financially viable for big internet service providers like Comcast and CharterSpectrum to expand into these communities: They're rural, not densely populated, and running fiber optic cable into rocky Appalachian soil isn't cheap. Even with federal grants designed to make these expansions more affordable, there are hundreds of communities across the US that are essentially internet deserts.

But in true heartland, bootstrap fashion, these towns, hollows—small rural communities located in the valleys between Appalachian hills—and stretches of farmland have banded together to bring internet to their doors. They cobble together innovative and creative solutions to get around the financial, technological, and topographical barriers to widespread internet. And it's working, including on that farm down the county road in Coshocton.

It's just one example of a story that's unfolding across America's countryside. Here, a look at three rural counties, in three different states, demonstrates how country folk are leading their communities into the digital age the best way they know how: ingenuity, tenacity, and good old-fashioned hard work.

THE 'SILICON HOLLOW'
Letcher County, Kentucky

Letcher County is in the heart of coal country. The 300-square-mile, 25,000-person corner of Kentucky is tucked just across the border from Virginia. It's rippled with endless rolling hills, dense forest, little towns, and boarded-up mines. Like many similar communities, the county has been hard hit by the waning coal industry.

But while politicians, including President Donald Trump, rally around the promise to "bring coal back," the residents in many of these communities would rather look to the future. And in their mind, that future depends on high-speed internet.

"We view it as the next economic revolution for coal towns," said Harry Collins, the chairman of the Letcher County Broadband Board, which formed late last year. "The majority of our railroad tracks are ripped up now—that revolution has played out. We feel that this [digital] revolution is just as game changing and life changing as those railroad tracks were in the 20s and 30s."

I met Collins, and his vice chair Roland Brown, at a rural broadband summit in Appalachian Ohio, where the two men chatted strategy and ideas with other rural community leaders who were further along in the process.

Their region of Kentucky has the highest unemployment rate in the state, at 10.2 percent, according to the Kentucky Center for Education and Workforce Statistics. That's more than double the national average. The population also has poor health indicators and low educational attainment, and many residents are unable to work outside the home because they're caring for small children or aging relatives. Brown told me high-speed internet could help alleviate all of these pressures.

"If I've got reliable broadband, we can do telemedicine and bring in doctors from other areas," Brown said. "If I can get people at home going to school online, I can raise up my education attainment level, which is only going to help me attracting employers in the long run. There are so many economic and social benefits of this."

Other parts of Kentucky have already set a high standard for rural internet. Jackson County, in the middle of the state, is home to just over 13,000 residents spread across 350 square miles. It's also home to gigabit internet available via fiber optic cable to every home in the county.

Eager to keep pace, Letcher established a voluntary broadband board, which had its first meeting in February this year. They began by surveying the entire region to pinpoint the areas with the least amount of access and quickly identified a stretch in the southwest corner of the county that was completely unserved. Ten businesses and 489 households covering 55 miles of mountain terrain had no access to high-speed internet whatsoever. The board decided to focus on this area first, which they've dubbed "Phase One."

But it will be no easy task to connect this rural, rocky stretch of Appalachia. There are hills, hollows, and a lot of distance from the nearest hooked-up hub. The county has applied for a $1.3 million grant from the Department of Agriculture under its Community Connect program, and will find out in September whether that's been approved. The County Fiscal Court has also committed $200,000 to the project, bringing the total to $1.5 million.

The plan, if the funding comes through, is to beam out a broadband signal from Whitesburg—the county seat ten miles away—to the Phase One area, then send fiber out to individual homes and businesses. But it will be a patchwork, with some fiber ending at the edge of a long hollow, and feeding into another tower that will transmit the signal to the folks living at the other end.

"We're never going to be able to level the mountains off to get us connected to the rest of the world, but I can lay a piece of fiber that goes around that mountain."

The board has established a "dig once" initiative, where any time roadwork or repairs are being done in the area, county workers are obliged to lay fiber at the same time. It's also looking into innovative techniques for connecting along the highway, such as micro trenching, where the fiber optic cable is embedded a few inches into the road and blacktopped over.

"It cuts down your chances of animals taking your line down, or car wrecks that take it down, or storms that take it down," Brown said.

The goal, over time, is to connect as much of the county as possible with a municipally-run broadband service that's delivered like a utility—the same as electricity or water—and is self-sustaining, even if it's not profitable. It's all part of a larger effort in the state, led by Congressman Hal Rogers, who envisions tech as filling much of the gap left by the coal industry, and has proposed the dream of a "Silicon hollow." As Letcher County prepares to lay its first miles of fiber, that dream is keeping these volunteers motivated.

"Broadband is the digital railroad but instead of extractive, we're looking to it to bring jobs in, bring education in," Collins said. "We're never going to be able to level the mountains off to get us connected to the rest of the world, but I can lay a piece of fiber that goes around that mountain and then I can connect to the rest of the world."

INTERNET ON THE TV
Garrett County, Maryland

Cheryl DeBerry likes to joke about her home county's location in Maryland.

"We have Pennsylvania to the north, West Virginia to the east, west, and south," DeBerry told me. "I'm not really sure where we're connected to Maryland."

Garrett County is located at the most western reach of Maryland's panhandle. It sits just below the Mason-Dixon line, smack dab in the Appalachian Mountains. It's rural, mountainous, and forested—pretty much the opposite of Cape Cod.

This geography was part of the reason why fewer than 60 percent of residents in Garrett County had broadband internet as of 2011, when county commissioners asked the economic development office, where DeBerry works, to identify its No. 1 priority for improving the region's economy. DeBerry and her colleague quickly zeroed in on rural access to high-speed internet.

With a goal of 90 percent broadband coverage across the county, low population density, and plenty of hills and trees in the way, it wasn't a simple proposal. To start, the county applied for a grant from the Appalachian Regional Commission—a federal-state partnership that supports economic development in Appalachia. With that money and some county funds, the local government hired a consultant to come up with a plan to reach its new goal: partner with a private company, and use any resources on hand to weave a network together.

One of those resources was unused TV channels. Known as white space, many of the frequencies that once carried analog broadcasts are no longer used, since stations have switched to digital, which requires less spectrum. These unused "channels" can act like long-range Wi-Fi extenders, bringing internet to farther reaches. Basically, if you could get the local TV news back in analog days, you can get the internet to your door now.

In Garrett County, this was a huge asset, according to Nathaniel Watkins, the chief information officer for the county government. Due to the county's geography, there were multiple unused channels available that weren't being broadcast on and that weren't getting any bleed over from other cities.

"We're kind of protected on all sides by mountains," Watkins said. "In rural areas, we're super fortunate because there aren't a lot of TV broadcasters that are bleeding over into those channels."

White space is particularly useful because it's transmitted on low-frequency waves, meaning it doesn't need a direct line-of-sight from the transmission point to the receiver. It can reach through trees, hills, and buildings, making it ideal for rural areas. The FCC recently approved the use of channel bonding, where multiple consecutive channels are lumped together to create a larger bandwidth, something Garrett County quickly took advantage of.
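
To get a rough sense of what channel bonding buys, the arithmetic is simple: capacity scales with the width of the bonded block. The sketch below assumes the standard 6 MHz width of a US TV channel and a ballpark spectral efficiency of 3 bits per second per hertz; the efficiency figure is an illustrative assumption, not a Garrett County measurement.

# Back-of-the-envelope: capacity from bonding unused TV "white space" channels.
# Assumptions (illustrative only): each US broadcast TV channel is 6 MHz wide,
# and the radio achieves roughly 3 bits per second per hertz end to end.

CHANNEL_WIDTH_MHZ = 6
SPECTRAL_EFFICIENCY_BPS_PER_HZ = 3.0

def bonded_throughput_mbps(num_channels: int) -> float:
    """Approximate shared capacity when adjacent white-space channels are bonded."""
    bandwidth_mhz = num_channels * CHANNEL_WIDTH_MHZ
    return bandwidth_mhz * SPECTRAL_EFFICIENCY_BPS_PER_HZ  # MHz x b/s/Hz -> Mbps

if __name__ == "__main__":
    for n in (1, 2, 4):
        print(f"{n} bonded channel(s): ~{bonded_throughput_mbps(n):.0f} Mbps of shared capacity")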

But while white space enabled a lot of the internet expansion in this corner of Maryland, it was only one tool the county has been using. When there is direct line-of-sight—if a community has a tall hill in the center where a tower can be built, for example—using a 5 GHz wireless system can provide better results. To get these hubs in as many places as possible, the county government started looking for anything tall enough to stick an antenna on.

"People have allowed us to put antennae on barns, silos, the sides of houses," DeBerry told me. "There are antennae on trees. We've got folks willing to put in poles [on their property] for us. They're just desperate for service and willing to help their neighbors get it as well."

Often, a combination of techniques is used: a fiber connection from the county seat can feed a tower, which transmits several miles via white space to a smaller tower on someone's barn, which shoots a 5 GHz signal down to all the neighbors. For $75 a month (comparable to or less than a satellite internet subscription), residents can get 5 Mbps download and upload speeds with no data caps and much more reliable service.

DeBerry said while the county government has a private partner working on these projects, it's not trying to compete with the local ISPs that have been serving the area. They've worked with these businesses to extend their service as well, tasking county summer workers with digging trenches so the companies can expand another mile to reach an unserved area.

"We just recently completed a project for that, and a cable company is now able to provide service for 25 new homes and businesses because we helped them get the infrastructure there," DeBerry said.

In the last year, more than 150 new homes and businesses gained access to high-speed internet through the program. There are still plenty of people without access, and with the exception of those who don't want internet (like the local Mennonite and Amish communities), DeBerry said she believes they can one day get everyone hooked up.

"I'm hopeful we can reach most of those people even out in the middle of nowhere," she said. "We're trying to get everybody."

AHEAD OF THE CURVE
Coshocton County, Ohio

"Enjoy your visit to Coshocton," said a high schooler after I snapped some photos of the flag team practice.

She wasn't quite sure why a reporter would come to this sleepy corner of Ohio, with its winding country roads, corn fields, and population of 11,000. But when I told her I was reporting on rural broadband, a look of understanding washed over her face. After all, for the last seven years, a high-speed internet transmitter has topped the student radio station tower on the hill across from River View High School, beaming lightning fast internet into the school and surrounding homes.

It's the product of a long-term project spearheaded by a local politician. In 2006, Gary Fischer was the mayor of Warsaw, Ohio (population 800), and decided to run for a commissioner seat. It was the first time he had realized the area's digital divide.

"We had great internet service in Warsaw, so I didn't realize it was an issue until I started campaigning county wide," Fischer told me.

It quickly became apparent that many people in the county's rural stretches lacked any internet access—more than 4,000 households were unconnected. The only option they had was satellite service, which was slow (1 to 2 Mbps), spotty (bad weather, or even a breeze, could knock out a signal), and expensive ($75 to $80 per month in 2006, according to Fischer). He wasn't sure how, but he told voters he would work on fixing the problem if he got elected. Fischer took office on January 1, 2007.

That spring, a member of the county's IT department returned from a conference bearing a napkin scribbled with ideas. It was Fischer's first glimpse at a solution: a public-private partnership could help set up infrastructure to expand broadband and deliver wireless signals to pockets across the county. But the private company, Lightspeed Technologies (now owned by Watch Communications), didn't want to foot the bill for putting up dozens of towers. It needed "vertical infrastructure," Fischer said. So he went hunting for the tallest things in the county.

First the county identified huge state-owned radio towers that transmitted Ohio's Multi-Agency Radio Communication System (MARCS), which is used by emergency services. Coshocton asked the state if it could lease some space at the top of these towers to put up broadband antennae. While the state mulled it over, the county looked for more towers: the local 911 radio towers, the water towers, the radio station at the high school.

"Then we started to broaden our horizons a little bit," Fischer told me. "We're in a farming community. We've got 100 foot silos all over the country. That's as good as a 100 foot tower."

Eventually, the state gave the greenlight to lease the MARCS towers, and Coshocton secured a $38,000 grant from the Appalachian Regional Commission. It used that money to offset the costs of leasing the towers while the local provider set up shop. Over the next six years, 16 towers were raised—on top of barns, on MARCS towers, on water towers—to deliver high-speed internet to the county's most rural residents.

Each tower took some creative engineering. The village of Walhonding, for example, is located in a hollow that blocked the signal from the closest MARCS tower in Newcastle, just three miles away. County surveyors went in, identified an ideal location, and knocked on the door of a Walhonding resident.

"They said, 'We'll give you free internet service if you let us put a tower in your backyard,'" Fischer said. "He was happy to do it. Now we have 20 or 30 households in Walhonding that are connected."

Though the county government still works as a middleman—it leases the local 911 towers to the ISP, and subleases the state towers—it has spent very little beyond that first $38,000 grant (Fischer said the county invested about $10,000 in lawyers' fees to draw up all the contracts). The user fees keep the system entirely self-sustaining, and profitable for the private ISP.

Between 2008 and 2011, the percent of Coshocton County residents with broadband internet at home rose from 32 percent to 58 percent, according to Connect Ohio, and they were paying less than the state average. The latest map shows a significant coverage area, though there are still pockets of unserved communities. Fischer said he knew from day one that the county wouldn't be able to reach all 4,000 households in the short term.

These days, Fischer plans to start a campaign to inform small communities of how to lure the local service provider. If just seven households can all agree they want internet and one of the homeowners puts a tower in their yard, or on top of their silo, it would make it worthwhile for the ISP to bring in service. He said the proof of concept has reached the point where it's a lot easier to expand than in the initial parts of the project.

"A lot of people said this would never work, but we've been up and running since April of 2009," Fischer said. "It's here. It's proven."

Back on that hot day in June, I managed to find the cemetery and the barn outside of Coshocton, just as Shryock had instructed. No one was home when I knocked, so, as horses eyed me from a nearby paddock, I stared up at the antenna on top of that big blue silo. I know now that it beams high-speed internet to the farmhouse next door—for free, since the family graciously provided the silo—and to dozens of neighbors up and down the hollow, including another farmer down the road, who waved at me from his tractor as I drove past on my way back to Brooklyn.
https://motherboard.vice.com/en_us/a...-one-else-will





Before Hurricane Harvey, Wireless Carriers Lobbied Against Upgrades to a National Emergency Alert System

A wonky debate at the FCC has real-life consequences, and public-safety officials aren’t happy
Tony Romm

As Hurricane Harvey bombards Texas, a different, decidedly more political storm is brewing in the U.S. capital — over the emergency alerts that first responders around the country are able to send to smartphones.

For years, the Federal Communications Commission has endeavored to upgrade the sort of short text-based messages — often accompanied by a loud alarm — that authorities have used since 2012 to warn Americans about rising floods, abducted children and violent criminals at large.

But efforts to bring those alerts into the digital age — requiring, for example, that they include multimedia and foreign-language support — have been met with skepticism or opposition from the likes of AT&T, Sprint, Verizon and T-Mobile, and even some device makers, too.

Carriers have argued that some of those changes could prove technically difficult or costly to implement, while congesting their networks — and in recent months, they’ve encouraged the FCC to slow down its work. Tech giants like Apple and Microsoft, meanwhile, also have lobbied the agency against some proposed rules that might put more burden on them for delivering emergency alerts to smartphones.

It all amounts to a great deal of well-lawyered bickering in Washington, D.C., and it stands in stark contrast to the dire Category 4 megastorm that’s poised to cause immense rainfall, flooding and damage in Texas.

Perhaps presciently, local officials there raised those exact issues with the FCC in July.

Before Hurricane Harvey existed, a top homeland security official in Harris County, Texas — which includes Houston — slammed wireless carriers and others for stalling on changes to the wireless emergency alert system, or WEA.

“Currently, Harris County rarely uses WEA because it does not want to potentially alert the entire county when a WEA message may only pertain to a certain portion of the county,” wrote Francisco Sánchez, Jr., liaison to the director and public information officer in the county, in a letter to the FCC. That includes, he said, a “hurricane or tropical storm.”

Reached yesterday before Hurricane Harvey made landfall, Sánchez said it “frustrates” local officials there and elsewhere that they can’t easily send alerts about flooding and other hazards to very narrow, specific parts of his expansive slice of Texas, one of the most populous in the United States.

To most Americans, wireless alerts are mere annoyances — loud interruptions that have spawned no shortage of news stories over the years explaining how to turn them off. Generally speaking, most of these quick text bursts can be disabled through a smartphone’s notifications or settings page, with the sole exception of so-called “presidential alerts,” which are reserved for the most dire national emergencies.

To public-safety officials, however, the alerts are a lifeline for dispatching critical, real-time information during a disaster — albeit, for some, an outdated one. For years, police officers, firefighters and other first responders have urged the FCC to expand the system so that it takes advantage of the tools that make smartphones so useful — like links to websites, maps of affected areas and photos and videos.

Those groups believed they notched an early victory during the Obama administration, under the leadership of then-Chairman Tom Wheeler. In 2016, his agency adopted an order that increased the maximum length of a wireless emergency alert from 90 characters to 360 characters. It also pushed wireless giants to support transmission of those alerts in Spanish. And it required that companies soon allow “embedded references,” like URLs and phone numbers, in the alerts they pass along on behalf of public-safety leaders.

The changes applied only to telecom providers that participate in the program, which includes major carriers — but, technically, participation is voluntary.

In doing so, Wheeler also put the agency on track to weigh other, more ambitious reforms to wireless alerts. He teed up for debate new requirements that the messages finally enable multimedia, like video, and that they would be more specifically targeted to exact locations — including smartphone owners in harm’s way. Wheeler even wanted to look into tools that would allow recipients to send information about a disaster back to first responders.

Under Trump, the fate of those ideas now rests with Ajit Pai, the Republican chairman of the FCC. In the past, at least, Pai has supported reforms to the content and delivery of wireless emergency alerts.

Already, though, Pai has faced an onslaught of opposition from the regulation-wary telecom industry.

Previously, the well-heeled Washington, D.C.-based lobbying group for wireless giants, known as CTIA, argued against a full-scale, aggressive overhaul of emergency alerts. Among their fears: Too many users clicking too many links or other multimedia during an emergency would overwhelm companies’ networks. Those alerts may look like text messages, but they’re actually delivered by other means — so telecom companies had technical concerns about the changes, too.

By January, though, CTIA explicitly asked the FCC to hit the brakes on any additional reforms. In official comments filed with the agency, the lobbying group again stressed that the “proposed rules pose technical and economic challenges that render implementation infeasible or premature.”

Asked yesterday about its doubts, a CTIA spokesman responded in a statement: “The wireless industry partners with federal, state and local emergency authorities to deploy wireless networks and handsets that support unique WEA capabilities, and continuously seeks to enhance the WEA system.”

For now, Pai also has offered little indication as to his next steps. But speaking on the matter in September 2016 — at the time, as a commissioner — he pointed to the likes of Houston and Harris County, Texas, as he made the case for reform.

“Millions of people who live in these communities could miss out on potentially life-saving information because [the alert system’s] current brushstroke is too broad,” he said.

At the time, Pai endorsed a “device-based approach to geo-targeting,” he explained, which he said meant that devices themselves would “screen emergency messages and only allow the relevant ones through.” Local officials like Sánchez in Harris County, Texas, who have advised the FCC in recent months, share a belief that device-makers should play a greater role, while fretting that the “carriers are asking for the FCC to delay the timeline for some of these critical improvements.”
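
To make the "device-based approach to geo-targeting" concrete, the sketch below shows one way a handset could screen alerts locally: the alert carries a target polygon of latitude/longitude points, and the phone displays the alert only if its own location falls inside. The message shape and names here are hypothetical illustrations, not the actual WEA specification.

# Sketch of device-side geo-targeting: the handset receives an alert carrying a
# target polygon and decides locally whether to display it. Hypothetical data
# shapes for illustration; this is not the real WEA message format.

from typing import List, Tuple

Point = Tuple[float, float]  # (latitude, longitude)

def point_in_polygon(point: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: does `point` fall inside `polygon`?"""
    y, x = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        yi, xi = polygon[i]
        yj, xj = polygon[j]
        # Count crossings of a ray extending east from the point.
        if (yi > y) != (yj > y):
            x_cross = (xj - xi) * (y - yi) / (yj - yi) + xi
            if x < x_cross:
                inside = not inside
        j = i
    return inside

def should_display_alert(device_fix: Point, alert_polygon: List[Point]) -> bool:
    """Surface the alert only if the device is inside the targeted area."""
    return point_in_polygon(device_fix, alert_polygon)

if __name__ == "__main__":
    # Hypothetical rectangle roughly covering central Harris County, TX.
    target_area = [(29.5, -95.8), (30.2, -95.8), (30.2, -95.0), (29.5, -95.0)]
    print(should_display_alert((29.76, -95.37), target_area))  # downtown Houston -> True
    print(should_display_alert((30.27, -97.74), target_area))  # Austin -> False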

But the idea hasn’t exactly won support among tech giants like Apple and Microsoft, which have quietly taken their own concerns to the FCC in recent weeks.

During a private call with top FCC officials, for example, Apple’s leading lobbyists said the iPhone cannot currently do what Pai has proposed — and if it did make the tweaks, it might “harm consumers by delaying their access to critical safety information.” Also, it’d drain the battery, Apple said.

The company did not respond to requests for comment. Nor did a spokesman for FCC Chairman Pai. On Friday, though, Pai stressed the telecom agency is prepared for the incoming hurricane.

“We have activated our Disaster Information Reporting System, deployed personnel to Texas, and provided emergency response officials and licensees with emergency contact information,” he said. “These actions will enable us to monitor the extent of communications outages and, working with industry and government partners, support restoration efforts.”

“Our thoughts and prayers are with those on the Gulf Coast, and we urge residents of the affected areas to take shelter and other necessary precautions,” he said.
https://www.recode.net/2017/8/26/162...y-alert-system





Central Banks Can’t Ignore the Cryptocurrency Boom
Enda Curran, Piotr Skolimowski, and Craig Torres

• Digital coins challenge the guardians of official money
• Central banks are being urged to heighten their oversight

The boom in cryptocurrencies and their underlying technology is becoming too big for central banks, long the guardian of official money, to ignore.

Until recently, officials at major central banks were happy to watch as pioneers in the field progressed by trial and error, safe in the knowledge that it was dwarfed by roughly $5 trillion circulating daily in conventional currency markets. But now as officials turn an eye toward the increasingly pervasive technology, the risk is that they’re reacting too late to both the pitfalls and the opportunities presented by digital coinage.

"Central banks cannot afford to treat cyber currencies as toys to play with in a sand box," said Andrew Sheng, chief adviser to the China Banking Regulatory Commission and Distinguished Fellow of the Asia Global Institute, University of Hong Kong. "It is time to realize that they are the real barbarians at the gate."

Bitcoin -- the largest and best-known digital currency -- and its peers pose a threat to the established money system by effectively circumventing it. Money as we know it depends on the authority of the state for credibility, with central banks typically managing its price and/or quantity. Cryptocurrencies skirt all that and instead rely on their supposedly unhackable technology to guarantee value.
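
For readers wondering what that technology actually guarantees, the minimal sketch below illustrates the general idea behind a blockchain-style ledger: each block commits to the hash of the previous one, so rewriting history is detectable by re-checking the chain. This is a toy illustration of the principle, not Bitcoin's real data structures, consensus rules or mining.

# Minimal hash-chained ledger: each block stores the hash of its predecessor,
# so tampering with any earlier block is detectable by re-checking the chain.
# Illustrative only; real cryptocurrencies add proof-of-work, signatures, etc.

import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev_hash})

def chain_is_valid(chain: list) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False  # a past block was altered after the fact
    return True

if __name__ == "__main__":
    ledger = []
    for entry in ("alice pays bob 5", "bob pays carol 2"):
        append_block(ledger, entry)
    print(chain_is_valid(ledger))             # True
    ledger[0]["data"] = "alice pays bob 500"  # tamper with history
    print(chain_is_valid(ledger))             # False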

China’s Lead

If they don’t get a handle on bitcoin and its ilk, and more people adopt digital currencies, central banks could see an erosion of their control over the money supply. The solution may lie in the old adage: if you can’t beat them, join them.

The People’s Bank of China has done trial runs of its prototype cryptocurrency, taking it a step closer to being the first major central bank to issue digital money. The Bank of Japan and the European Central Bank have launched a joint research project which studies the possible use of distributed ledger -- the technology that underpins cryptocurrencies -- for market infrastructure.

The Dutch central bank has created its own cryptocurrency -- for internal circulation only -- to better understand how it works. And Ben Bernanke, the former chairman of the Federal Reserve who has said digital currencies show "long term promise," will be the keynote speaker at a blockchain and banking conference in October hosted by Ripple, the startup behind the fourth largest digital currency.

Russia, too, has shown interest in ethereum, the second-largest digital currency, with the central bank deploying a blockchain pilot program.

In the U.S., both banks and regulators are studying distributed ledger technology and Fed officials have made a couple of formal speeches on the topic in the past 12 months, but have voiced reservations about digital currencies themselves.

Policy Issues

Fed Governor Jerome Powell said in March there were “significant policy issues” concerning them that needed further study, including vulnerability to cyber-attack, privacy and counterfeiting. He also cautioned that a central bank digital currency could stifle innovations to improve the existing payments system.

At the same time, central bankers are obviously wary of the risks posed by alternative currencies -- including financial instability and fraud. One example: The Tokyo-based Mt. Gox exchange collapsed spectacularly in 2014 after disclosing that it lost hundreds of millions of dollars worth of bitcoin.

But for all their theoretical tinkering, official-money guardians have largely stood by as digital currencies have taken off. The explosion in initial coin offerings, or ICOs, is evidence. Investors have poured hundreds of millions of dollars into the digital currency market this year alone.

The dollar value of the 20 biggest cryptocurrencies is around $150 billion, according to data from Coinmarketcap.com. Bitcoin itself has soared more than 380 percent this year and hit a record -- but it’s also prone to wild swings, like a 50 percent slump at the end of 2013.

"At a global level, there is an urgent need for regulatory clarity given the growth of the market," said Daniel Heller, Visiting Fellow at the Peterson Institute for International Economics and previously head of financial stability at the Swiss National Bank.

Self Interest

Rather than trying to regulate the world of virtual currencies, central banks are mainly warning of risks and attempting to garner some advantage from distributed-ledger technology for their own purposes, like upgrading payments systems.

Carl-Ludwig Thiele, a board member of Germany’s Bundesbank, has described bitcoin as a “niche phenomenon” but blockchain as far more interesting, if it can be adapted for central-bank use. In July, Austria’s Ewald Nowotny said that he’s open to new technologies but doesn’t believe they will lead to a new currency, and that dealing in bitcoin is effectively “gambling.”

There could also be a monetary policy aspect to consider. ECB Governing Council member Jan Smets said in December that a central-bank digital currency could give policy makers more leeway when interest rates are negative. Policy makers have long been concerned that if they cut rates too low, people will simply hoard cash. The ECB’s deposit rate is currently minus 0.4 percent.

Other central banks see the uses of distributed ledger technology, but worry about the abuses virtual money can be put to outside the official system -- like criminal money laundering and the sale of illegal goods. That’s not to mention the risk that virtual currencies could pose to the rest of the financial system if the bubble were to pop.

‘Great Promise’

Bank of England Governor Mark Carney -- who has said blockchain shows “great promise” -- also warned regulators this year to keep on top of developments in financial technology if they want to avoid a 2008-style crisis.

While Mt. Gox cast a shadow over bitcoin in Japan, it now has many supporters in the world’s third-biggest economy. Parliament passed a law in April this year making it a legal method of payment. Japan’s largest banks have invested in bitcoin exchanges and small-cap stocks linked to the cryptocurrency or its underlying technology have rallied this year as it begins to win favor with some retailers.

With the nation’s Financial Services Agency responsible for bitcoin’s regulation, the BOJ remains focused on studying its distributed ledger technology.

Not Ready Yet

"Central banks are not yet ready for regulating digital currencies," said Xiao Geng, a professor of finance and public policy at the University of Hong Kong. "But they have to in the future since unregulated digital currencies are prone to crime and Ponzi-type speculation."

To be sure, the attraction of virtual currencies for many remains speculation, rather than for households or companies buying and selling goods.

"It is a fad that will die down and it will be used by less than 1 percent of consumers and accepted by even fewer merchants," said Sumit Agarwal of Georgetown University, who was previously a senior financial economist at the Federal Reserve Bank of Chicago. "Even if we can make the digital currency safe it has many hurdles."

— With assistance by Brett Miller, Lucy Meakin, Carolynn Look, and Justina Lee
(This story has been corrected to remove references to Exio Coin because of an inability to verify aspects of the company’s business.)
https://www.bloomberg.com/news/artic...-central-banks





China to Launch World's First Quantum Communication Network

As malicious hackers find ever more sophisticated ways to launch attacks, China is about to launch the Jinan Project, the world's first unhackable computer network, and a major milestone in the development of quantum technology.

Named after the eastern Chinese city where the technology was developed, the network is planned to be fully operational by the end of August 2017. Jinan is the hub of the Beijing-Shanghai quantum network due to its strategic location between the two principal Chinese metropolises.

"We plan to use the network for national defence, finance and other fields, and hope to spread it out as a pilot that if successful can be used across China and the whole world," commented Zhou Fei, assistant director of the Jinan Institute of Quantum Technology, who was speaking to Britain's Financial Times.

By launching the network, China will become the first country worldwide to implement quantum technology for a real life, commercial end. It also highlights that China is a key global player in the rush to develop technologies based on quantum principles, with the EU and the United States also vying for world leadership in the field.

The network, known as a quantum key distribution (QKD) network, is more secure than widely used electronic communication equivalents. Unlike a conventional telephone or internet cable, which can be tapped without the sender or recipient being aware, a QKD network alerts both users to any tampering with the system as soon as it occurs. This is because tampering immediately alters the information being relayed, with the disturbance being instantly recognisable. Once fully implemented, it will make it almost impossible for other governments to listen in on Chinese communications.
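
A toy simulation shows why tampering is "instantly recognisable" in quantum key distribution. In a BB84-style exchange, an eavesdropper who measures photons in the wrong basis disturbs them, and the disturbance appears as an elevated error rate when sender and receiver compare a sample of their sifted key. The sketch below simulates only the statistics, under idealised assumptions (no channel noise, perfect detectors); it is not a model of the Jinan network itself.

# Toy BB84-style statistics: an eavesdropper who intercepts and re-measures
# photons in random bases introduces errors that the two endpoints can detect
# by comparing a sample of their sifted key. Idealised: no channel noise,
# perfect single-photon sources and detectors.

import random

def bb84_error_rate(n_photons: int = 20000, eavesdrop: bool = False) -> float:
    errors = 0
    sifted = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)
        alice_basis = random.randint(0, 1)      # 0 = rectilinear, 1 = diagonal
        value, basis = bit, alice_basis         # state of the photon in flight

        if eavesdrop:
            eve_basis = random.randint(0, 1)
            if eve_basis != basis:              # wrong basis: result randomised
                value = random.randint(0, 1)
            basis = eve_basis                   # photon re-sent in Eve's basis

        bob_basis = random.randint(0, 1)
        if bob_basis != basis:                  # wrong basis at Bob's end too
            value = random.randint(0, 1)

        if bob_basis == alice_basis:            # sifting: keep matching bases only
            sifted += 1
            if value != bit:
                errors += 1
    return errors / sifted

if __name__ == "__main__":
    print(f"no eavesdropper:   ~{bb84_error_rate(eavesdrop=False):.1%} error rate")  # ~0%
    print(f"with eavesdropper: ~{bb84_error_rate(eavesdrop=True):.1%} error rate")   # ~25%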

In the Jinan network, some 200 users from China's military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world's longest land-based quantum communications network, stretching over 2,000 km.

Also speaking to the Financial Times, quantum physicist Tim Byrnes, based at New York University's (NYU) Shanghai campus commented: "China has achieved staggering things with quantum research… It's amazing how quickly China has gotten on with quantum research projects that would be too expensive to do elsewhere… quantum communication has been taken up by the commercial sector much more in China compared to other countries, which means it is likely to pull ahead of Europe and US in the field of quantum communication."

However, Europe is also determined to be at the forefront of the 'quantum revolution', which promises to be one of the major defining technological phenomena of the twenty-first century. The EU has invested EUR 550 million into quantum technologies and has provided policy support to researchers through the 2016 Quantum Manifesto.

Moreover, with China's latest achievement (and a previous one already notched up from July 2017 when its quantum satellite—the world's first—sent a message to Earth on a quantum communication channel), it looks like the race to be crowned the world's foremost quantum power is well and truly underway.
https://phys.org/news/2017-08-china-...m-network.html





China Orders Internet Comments Linked to Real Identities

Another way to silence dissent.
Jon Fingas

China isn't slowing down in its bid to silence online political opposition. As of October 1st, the country will require that tech firms hold on to records of the real identities of everyone posting comments on internet message boards. This is to discourage "false rumors, filthy language and illegal messages," according to the government. Of course, it's that last part that Chinese officials are really interested in -- they know you're less likely to challenge the political order if investigators can easily track you down.

The timing of this identity requirement, the VPN restriction and other crackdowns (such as an investigation into internet giants for allowing material that "harms the social order") isn't coincidental. China's ruling party has its next national congress later in 2017, and it has a habit of ramping up censorship around these gatherings to discourage criticism of party policies.

The difference versus previous years, as an anonymous lawyer tells the Financial Times, is the focus of that censorship. Past rules centered around services, but China is targeting the content more directly this time. Also, it wasn't always evident who was supposed to enforce rules -- the Cybersecurity Administration of China is clearly the one wielding authority here. Like it or not, the country is getting much better at clamping down on freedom of speech.
https://www.engadget.com/2017/08/27/...al-identities/





Germany's Facial Recognition Pilot Program Divides Public

Germans are volunteering to be monitored as part of a six-month facial recognition pilot program in Berlin. Germany's interior minister is pleased with the initial results, but critics are wary of increased surveillance.

German Interior Minister Thomas de Maizière paid a special visit on Thursday to Berlin's Südkreuz train station, one of the largest railway junctions in the country's capital. With local and long distance trains arriving every minute, thousands of people pass through this station every day. But the minister was not interested in the trains. De Maizière, who is responsible for domestic security in Germany, wants to find out how effective his facial recognition pilot project really is.

Critics of increased government surveillance also came out to Südkreuz station to protest de Maizière's visit. They have demanded the six-month field trial, launched in early August, be terminated immediately. Activist Paul Gerstenkorn from "Digitalcourage," a German privacy and digital rights organization, claims that the technology used in the tests would create more extensive profiles of the country's citizens than advocates of the project admit.

The 300 testers who volunteered for the project carry a transponder that apparently only transmits data on ambient temperature, battery status and signal strength, according to the project staff member in the Südkreuz station control room who explained the technology to de Maizière. But Gerstenkorn contends the angle and acceleration of the testers are recorded as well. With the help of his smartphone, Gerstenkorn easily detected the number of testers within a radius of 20 meters (66 feet), while de Maizière was fielding questions from journalists. The word "blukii," the name of the transponder used for the facial recognition field trial, appeared 10 times on Gerstenkorn's phone screen.
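
The transponders described here behave like ordinary Bluetooth Low Energy beacons, which is why a phone running free software can count them. The sketch below shows roughly how such a scan could be done with the open-source bleak library; the exact API varies between versions, and matching on an advertised name containing "blukii" is an assumption based on this article.

# Rough sketch: count nearby Bluetooth Low Energy advertisers whose advertised
# name contains "blukii", similar to what the activist did with freeware.
# Uses the open-source `bleak` library; API details vary between versions.

import asyncio
from bleak import BleakScanner

async def count_blukii(scan_seconds: float = 10.0) -> int:
    found = await BleakScanner.discover(timeout=scan_seconds, return_adv=True)
    hits = 0
    for device, adv in found.values():
        name = (adv.local_name or device.name or "")
        if "blukii" in name.lower():
            hits += 1
            print(f"{device.address}  {name}  RSSI={adv.rssi} dBm")
    return hits

if __name__ == "__main__":
    print("blukii transponders in range:", asyncio.run(count_blukii()))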

For German Data Protection Commissioner Andrea Vosshoff, the fact that active and not passive technology is being used is going too far. Unlike a passive chip, the transponder constantly transmits information that anyone can collect with the help of freeware available on the internet. Vosshoff says the police have not "sufficiently" informed the testers, and called for the project to be temporarily halted.

Surveillance as public safety

But Reinhard Thieme, a supporter of the program who was passing through Südkreuz station, has no reservations whatsoever. He considers every form of video surveillance to be useful for safety reasons and as evidence in criminal investigations.

De Maizière would be pleased to hear such views. The interior minister has vehemently defended the project, saying the technology is not being used to catch petty criminals such as shoplifters, but terrorists and serious offenders. Four weeks into the test phase, De Maizière has praised its "surprising accuracy" - specifically referring to people recognized by the software whose pictures are already stored in police databases. According to Germany's federal police force, pictures of all other passers-by captured by the surveillance cameras are "immediately deleted."

Fooling the software

After the six-month trial phase in Berlin, a decision will be made on whether automatic facial recognition will be implemented nationwide in Germany's train stations and other public spaces. "If this works out well, it would be an incredible increase in safety for the population," said de Maizière, who also cautioned there was still work to be done. "What happens when people put on sunglasses, a hat or a hood?"

The interior minister could have the answer sooner than expected, as many opponents of the facial detection project were riding up and down the escalators at Südkreuz station during his visit. They could be seen from the control room while he was listening to police explain the technology. The activists wore masks and wigs, or held newspapers in front of their faces. For the moment, it looks like there are still a few ways to escape facial recognition.
http://www.dw.com/en/germanys-facial...lic/a-40228816





High-Res Satellites Want to Track Human Activity From Space
Sarah Scoles

Hopkinsville, Kentucky, is normally a mid-size town, home to 32,000 people and a big bowling ball manufacturer. But on August 21, its human density more than tripled, as around 100,000 people swarmed toward the total solar eclipse.

Hundreds of miles above the crowd, high-resolution satellites stared down, snapping images of the sprawl.

These satellites belong to a company called DigitalGlobe, and their cameras are sharp enough to capture a book on a coffee table. But at that high resolution, they can only image that book (or the Kentucky crowd) at most twice a day. And a lot can happen between brunch and dinner. So the Earth observation giant is building a new constellation of satellites to fill in the gaps in their chronology. When this new "WorldView Legion" sat-set is finished in 2021, DigitalGlobe will be able to image parts of the planet every 20 minutes, flashing by for photos dozens of times a day.

That’s called “high revisit” satellite imagery, and it’s mostly been the purview of smallsat companies, which can launch more and cheaper satellites to cover more ground more often. The leading smallsat imaging company, Planet, prides itself on capturing the globe’s full landmass every day, mostly at around four meters of resolution—so a Pontiac shows up as about a pixel. Planet has nearly two hundred satellites in orbit, and the smallsat industry at large is out to launch thousands more in the next decade, filling low-Earth orbit and staring down at the world with a gaze of increasing intensity.

Planet and its competitors provide a new service: (slightly fuzzy) images that can show daily changes in a spot on Earth. Traditional satellite companies sometimes have months-long gaps between images of a given spot.

But DigitalGlobe thinks it can provide quality and quantity. Along with WorldView Legion, it is banking on something different: that their customers (governments, oil-drillers, metal miners, retail chain owners) don’t need or want to see the whole planet’s diurnal dynamics. They care about the grittiest details of the places where people are—moving missiles, digging up natural resources, cutting down forests, parking cars for shopping sprees. “A large percentage of the population lives in a really narrow band of latitudes,” says Walter Scott, DigitalGlobe's founder and CTO.

So DigitalGlobe dreamed up the WorldView Legion constellation, which—with another flock of satellites called Scout—can snap a photo of a high-demand spot (say the Port of Shanghai) 40 times a day. Scott declined to specify how many satellites count as legion, but they will be 30-centimeter- and 50-centimeter-class, meaning they could resolve a laptop or a TV. The first will rocket up in 2020, the last in 2021.

The battalion of satellites comes courtesy of Space Systems Loral (SSL) of California, which builds its satellites in Palo Alto. The companies twine together like the noodles in a corporate alphabet soup: DigitalGlobe is in the final stages of a merger with a communications company called MDA, which also bought SSL back in 2012.

MDA isn't just keeping WorldView Legion in the family: It's keeping the majority of humanity's remote-sensing activities in the family. According to the latest Satellite-Based Earth Observation report from Northern Sky Research, an industry analysis group, the satisfied urge to merge gives MDA/DigitalGlobe command over the Earth observation stage. “A whopping 74 percent of the [Earth observation] data market was concentrated between three players, namely Digital Globe, Airbus D&S, and MDA—with the rest split between roughly a dozen players, including the likes of Telespazio and Planet,” Northern Sky analyst Prateep Brasu wrote.

With DigitalGlobe and MDA under a single umbrella, they control 54 percent of the market. And with SSL, they can in-house legions of satellites, big and small, for themselves and others. Money, money, money, mo-ney.

That's a big deal: Terrestrial imagery affects economies and international relations, in addition to map apps. Companies sell intelligence to governments, revealing troop movement and arms test prep. Image analysis software (which gets smarter faster the more examples it sees) can count cars in Walmart parking lots to know how many people shop where and when, and whether Target should be concerned. Prospectors can learn whether someone just started drilling into an oil supply, and how much black gold they seem to be netting. Relief organizations can look at a flood zone and figure out how best to help. And, in Earth observation as in casual conversation, there is always the future's weather to worry about, or its self-driving cars: “If you're building a support structure for autonomous vehicles, you can't have 50-meter errors in where you say the road is,” says Scott.

Now imagine that instead of seeing, say, how the floodwaters crest and recede over a week, or even from day to day or morning to afternoon, a satellite can see them shift from 9:30 a.m. to 9:50 a.m. Or capture how the 2024 eclipse-chasing crowd snowballs as totality approaches. That satellite-streamed “nowcasting” may just make life easier. “You're on vacation and want to know what the beach looks like, where the traffic is, where the crowds are,” says Al Tadros, vice president of space infrastructure and civil space at SSL. And then you do.

Because the future's satellite industry—from the fine print of DigitalGlobe to the rough sketch of smaller sats—is showing not how the world was, or even how it is, but how it goes.
https://www.wired.com/story/high-res...ity-from-space





Court: Locating Suspect Via Stingray Definitely Requires a Warrant

But, judge rules in Ellis, cops didn't need warrant due to "exigent circumstances."
Cyrus Farivar

A federal judge in Oakland, California, has ruled against the suppression of evidence derived from warrantless use of a cell-site simulator. The simulator, a device often referred to as a stingray, was used to locate the lead defendant in an ongoing attempted murder case.

In the 39-page ruling, US District Judge Phyllis Hamilton notably found that the use of a stingray to find a man named Purvis Ellis was a "search" under the Fourth Amendment—and therefore required a warrant. However, in this case, the judge also agreed with the government's assertion that there were exigent circumstances, along with the "good faith exception" to the warrant requirement. In other words, use of the stingray was wholly justified.

"Cell phone users have an expectation of privacy in their cell phone location in real time and that society is prepared to recognize that expectation as reasonable," Judge Hamilton wrote, citing an important Supreme Court decision from 1967 known as United States v. Katz.

But because Ellis was believed to be involved in another shooting that happened one day earlier on January 20, 2013, the judge felt there were exigent circumstances.

"Though Ellis was not known to be the shooter, he was believed to be a suspect in possession of firearms," Judge Hamilton continued. "The need to prevent escape by a suspect presented exigent circumstances here."

"Exigent circumstances" is the idea in American criminal procedure that law enforcement can search or seize persons or things if there are imminent circumstances where bodily harm or injury is in process, evidence is being destroyed, or a suspect is in flight. In such situations, a warrant is not required.

Warrants required since 2015 anyway

Thursday’s court order could provide a new incentive for a guilty plea for Ellis, who was located in an East Oakland apartment building by the FBI and the Oakland Police Department several hours after the January 21, 2013 shooting of a non-uniformed OPD officer. Three other men were charged in the case, although one of them pleaded guilty earlier this year. No trial date has been set.

Ellis’ attorney, Martha Boersch, did not immediately respond to Ars’ request for comment.

As Ars has been reporting for years, the Ellis case has provided rare insight into how stingrays are used in practice to find suspects and the lengths the government is willing to go to keep that usage quiet. The surveillance tool has come under increasing scrutiny by lawmakers and activists in recent years. Since this case began, both the Department of Justice, which oversees the FBI, and the State of California now require a warrant when a stingray is used in most circumstances.

Riana Pfefferkorn, a lawyer affiliated with the Stanford Center for Internet and Society, said that the judge’s ruling was "careful," but she noted that it may not specifically matter, given that both state and federal policy has changed since Ellis and his co-defendants were arrested in 2013.

"This is resolving something that happened over four years ago where on a going forward basis it may be a moot point," she told Ars.

When this case first began unfolding in federal court four years ago, prosecutors insisted that only one stingray was used. It turned out that there had been two—the first used by the OPD, followed by another belonging to the FBI.

The entire issue of what level of privacy a person can expect in their location information, and what hurdles law enforcement must clear to obtain it (through a stingray or other means), is still up for discussion.

A related case, Carpenter v. United States, is pending before the Supreme Court. In Carpenter, the court is being asked whether law enforcement needed a warrant to obtain over 120 days of cell-site location information (CSLI), or whether a lesser standard was sufficient.

"The court's search analysis is based heavily on that one district's CSLI cases, which conflict with other circuits' rulings on CLSI and are going to be decided soon by the Supreme Court anyway," Orin Kerr, a law professor at George Washington University, told Ars. "Given that the constitutional analysis is largely based on issues the Supreme Court is about to revisit, it's not obvious that Ellis will have much influence on its own."
https://arstechnica.com/tech-policy/...res-a-warrant/





US Cops Can't Keep License Plate Data Scans Secret Without Reason

California's Supreme Court rules authorities must justify denying data requests
Thomas Claburn

Police departments cannot categorically deny access to data collected through automated license plate readers, California's Supreme Court said on Thursday – a ruling that may help privacy advocates monitor government data practices.

The ACLU Foundation of Southern California and the Electronic Frontier Foundation sought to obtain some of this data in 2012 from the Los Angeles Police Department and Sheriff's Department, but the agencies refused, on the basis that investigatory data is exempt from disclosure laws.

So the following year, the two advocacy groups sued, hoping to understand more about how this data hoard is handled.

Automated license plate readers, or ALPRs, are high-speed cameras mounted on light poles and police cars that capture license plate images of every passing vehicle.

The LAPD, according to court documents, collects data from 1.2 million vehicles per week and retains that data for five years. The LASD captures data from 1.7 to 1.8 million vehicles per week, which it retains for two years.
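At those rates, rough arithmetic on the per-week figures (an estimate, not a number from the court documents) puts the retained databases in the hundreds of millions of scans at any given time:

# Rough scale of the retained ALPR datasets, derived from the per-week
# collection figures above; this is an estimate, not a figure from the ruling.
WEEKS_PER_YEAR = 52

lapd_scans = 1.2e6 * WEEKS_PER_YEAR * 5      # five-year retention
lasd_scans = 1.75e6 * WEEKS_PER_YEAR * 2     # two-year retention, midpoint of 1.7-1.8M/week

print(f"LAPD: roughly {lapd_scans / 1e6:.0f} million plate scans on hand")
print(f"LASD: roughly {lasd_scans / 1e6:.0f} million plate scans on hand")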

Authorities use this data to investigate crimes, though most of the license plates captured are associated with drivers not implicated in any wrongdoing. Regardless, license plate images can reveal where drivers go, which may point to the people they associate with and the kinds of activities they engage in. And if combined with other data sets, like mobile phone records, an even more complete surveillance record may be available.

The ACLU contends that indiscriminate license plate data harvesting presents a risk to civil liberties and privacy. It argues that constant monitoring has the potential to chill rights of free speech and association and that databases of license plate numbers invite institutional abuse, not to mention security risks.

EFF senior staff attorney Jennifer Lynch said the ruling demonstrates that the court recognizes the privacy implications of license plate data.

At the same time, making license plate data available to researchers seeking to understand the privacy implications is itself a privacy risk. The court recognized this conundrum in its ruling.

"Although we acknowledge that revealing raw ALPR data would be helpful in determining the extent to which ALPR technology threatens privacy, the act of revealing the data would itself jeopardize the privacy of everyone associated with a scanned plate," the ruling says.

Accordingly, the California Supreme Court does not call for the release of this data; rather, it sends the plaintiffs' record request back to the trial court, which will decide what data can be made public and whether some of it will need to be redacted or anonymized to protect driver privacy.
https://www.theregister.co.uk/2017/0..._scans_secret/





Despite Privacy Outrage, AccuWeather Still Shares Precise Location Data with Ad Firms

New tests reveal that while one privacy-invading feature was removed in an app update, the app still shares precise geolocation coordinates with advertisers.
Zack Whittaker

AccuWeather is still sending precise geolocation data to a third-party advertiser, ZDNet can confirm, despite updating its app earlier this week to remove a feature that collected users' location data without their permission.

In case you missed it, AccuWeather was until this week sending the near-precise location of its iPhone app users to Reveal Mobile, a data monetization firm -- even when location sharing was switched off. Security researcher Will Strafach, who first reported the issue, also accused the company of sharing a user's precise GPS coordinates under the guise of providing local weather alerts.

The news sparked outrage and anger. AccuWeather responded with a forced apology, which leading Apple critic John Gruber called a "bulls**t response."

However, tests conducted by Strafach show that the updated app, released Thursday, still shares precise geolocation data with a data monetization and advertising firm.

ZDNet independently verified the findings. We found that AccuWeather, with location sharing enabled, was still sending precise GPS coordinates and altitude, albeit to a different advertiser, without the user's explicit consent.

That data can be used to pinpoint down to a few meters a person's location -- even which floor of a building they are on.
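The "few meters" figure follows from how many decimal places the transmitted coordinates carry. A rough illustration using simple spherical-Earth arithmetic (not taken from the ZDNet tests):

# How much ground distance each decimal place of a latitude/longitude value
# covers; simple spherical-Earth arithmetic for illustration only.
import math

def meters_per_decimal_place(places: int, latitude_deg: float = 40.0):
    deg = 10 ** -places
    lat_m = deg * 111_320.0                                   # meters per degree of latitude
    lon_m = deg * 111_320.0 * math.cos(math.radians(latitude_deg))
    return lat_m, lon_m

for p in range(3, 7):
    lat_m, lon_m = meters_per_decimal_place(p)
    print(f"{p} decimal places: ~{lat_m:.1f} m north-south, ~{lon_m:.1f} m east-west")
# Five decimal places already narrows a position to about a meter; the
# reported altitude value is what separates one floor of a building from another.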

The data is sent to a server run by Nexage, now owned by Oath, which uses the data as part of its AdMax platform for increasing mobile advertising revenue. According to one of its pages, Nexage will use the location data "to ensure users receive the best quality ads and that publishers get the highest possible eCPM," referring to the effective cost per thousand impressions, a metric used to price ads.

But at no point does AccuWeather's updated app explicitly state that the location data will be used for advertising, a key criticism first noted by Strafach in his original disclosure.

Gruber said in his blog post that users who permit their location to be shared are doing so under "the guise of showing you local weather wherever you are."

Strafach commented Friday that many of those who did not want their location data used for purposes besides local alerts are "still angry," and noted that these concerns have gone "totally unacknowledged by AccuWeather."

A rival weather app, Dark Sky, said in a blog post Wednesday that the monetization of customers' location data is "a much larger -- and more widespread -- phenomenon."

Dark Sky, which says it doesn't and "never will" share its customers' location data with third-party advertisers or data monetization firms, posted several screenshots of emails it has received soliciting business to monetize its customers' locations.

Adam Grossman, co-creator of Dark Sky, wrote in the blog post: "These companies all claim that the location data they collect is 'anonymous,' and that it can't be used to identify or track individual people -- this is false."

"In fact, it's trivially easy for one of these data monetization firms to put real names to the latitude/longitude pairs they receive," Grossman added.

A spokesperson for Oath said in a statement: "AccuWeather sends us geo-location data through our SDK only when location sharing is enabled by the consumer. We use this data to enable our buyers on our ad exchange to effectively value the impression. Location is commonly used by buyers in order to serve more relevant content and advertising to enhance the overall consumer experience. We're committed to fostering an accountable ecosystem and complying with all applicable privacy laws and regulations."

A spokesperson for AccuWeather did not respond to a request for comment.
http://www.zdnet.com/article/accuwea...-tests-reveal/





WSU Professor Says IRS is Breaking Privacy Laws by Mining Social Media
Becky Kramer

Those Facebook posts from your vacation on a white sand beach, or that purchase of a fancy new vehicle, could be attracting views from the federal government.

As its staff shrinks, the Internal Revenue Service has turned to mining social media and large data sets in search of taxpayers to audit, a Washington State University professor says in a recent report in the Vanderbilt Journal of Entertainment and Technology Law.

People should be aware “that what they say and do online” could be used against them by the IRS, said Kimberly Houser, an associate professor of business law in WSU’s Carson College of Business.

Her 55-page report is studded with examples of how the IRS has turned to social media and data analytics for enforcement, including a 2013 fraud case in which a Florida woman was convicted after bragging about being the ‘Queen of Tax Fraud’ on Facebook.

Tax evasion cost the U.S. government an estimated $3 trillion in lost revenue between 2000 and 2009, the report said. With its budgets and staff in decline, the IRS created a new “Office of Compliance Analytics” division in 2011 to make use of big data and predictive algorithms for finding tax scofflaws, Houser said. But some of the practices used by the IRS violate federal laws related to privacy and fair information gathering, she said.

While the burden is on taxpayers to provide supporting documents for their tax returns, the IRS does not have unlimited power to obtain any information it wants, the report said.

In a 2010 case, United States v. Warshak, a federal appeals court affirmed that citizens have a reasonable expectation of privacy in their emails and the government needs a search warrant to read them.

However, “many of these (privacy) statutes were written before the internet was widely used, and certainly before social media,” Houser said. “My instinct is that because the law is not worded as broadly as it could be to cover these situations, the IRS has just taken the stance of ‘Let’s just do what we can until someone tells us we can’t.’ ”

The IRS is mostly mum on how the agency targets taxpayers through analytics, according to Houser, who cites examples culled from outside reports, including other universities’ freedom of information requests.

Houser said the agency uses data analytics to decide which taxpayers to audit, based on “private, highly detailed profiles” of taxpayers created from sources other than tax returns or third-party reports, such as W-2 wage information. Her report says the IRS mines commercial and public data, including social media sites such as Facebook, Instagram and Twitter. The information is added to IRS databases and algorithms are used to identify potential tax evaders, the report said.

“The collection and use of this data without proper oversight and the increasing reliance on machine-generated decisions may result in harm” – such as targeting or discrimination against particular groups, Houser said in the report.

Social media, for instance, is full of errors and exaggerations, she said. Federal law requires the agency to be transparent about what types of information it collects and to give taxpayers a chance to review and correct errors, Houser said.

The IRS’s media office in Washington, D.C., did not respond to an interview request. But Houser’s report is creating a buzz among privacy and data experts.

“It wouldn’t surprise me that, in an effort to save money, the IRS has created an algorithm to verify information on your tax return,” said Angie Raymond, associate professor in the business and ethics department of Indiana University’s Kelley School of Business.

“It’s an almost elegant use of an algorithm,” said Raymond, who wasn’t involved in the research.

But she said there are “significant legal implications” for an agency using information mined from social media or other online activity for government use, such as an IRS audit. The same privacy protections in federal law should apply, regardless of whether the records are paper or electronic, she said.

“People are going to be surprised that it is happening,” Raymond said. “We just feel sort of creepy that we’re monitored in this way.”

Jody Blanke teaches courses on the law and ethics of big data at Mercer University in Atlanta, where he is a law and computer science professor.

“I consider myself a privacy advocate,” Blanke said. “Quite frankly, whenever you read a law journal article like this about big data and privacy, they are often quite terrifying. …You read these papers and say, ‘Wow, I didn’t know you could do that.’ ”

In his classes, Blanke asks students whether they’re more concerned about businesses gathering information about them or government agencies. The class is usually split, he said.

“The federal government is among the leaders in trying to have better controls and safeguards for personal information,” Blanke said. “I would imagine the IRS takes security and privacy quite seriously.”

However, Houser’s report points out potential areas for misuse, said Blanke, who wasn’t involved in the research.

The IRS has a long history of using audits for political purposes, Houser said. One of the more recent examples is when the IRS was accused of targeting conservative organizations affiliated with the tea party. The IRS also has had major data breaches, she said.

“The IRS is not the entity I want maintaining these records,” Houser said.

Houser said she’d like to see an oversight office “watching what the IRS is doing with data.”

“We have laws in place to prevent the government from doing certain things with our data,” she said, “and it doesn’t seem like the IRS is complying.”
http://www.spokesman.com/stories/201...ivacy-laws-by/





Trump Cybersecurity Advisers Resign In ‘Moral’ Protest

Board members also condemned the president’s response to racist violence in Charlottesville, Va.
Joseph Marks

More than one-quarter of a panel tasked with advising the Homeland Security Department on cybersecurity and infrastructure protection resigned en masse Monday, citing President Donald Trump’s “insufficient attention” to the nation’s cyber vulnerabilities, among other complaints.

Resigning members of the National Infrastructure Advisory Council also cited the president’s failure to single out neo-Nazis and white supremacists for condemnation after a violent protest earlier this month in Charlottesville, Virginia.

“The moral infrastructure of our nation is the foundation on which our physical infrastructure is built,” the council members stated in a group resignation letter.

The resignation letter, obtained by Nextgov, also cites Trump’s decision to withdraw from the Paris climate change agreement and to revoke building standards related to flooding risk.

“Your actions have threatened the security of the homeland I took an oath to protect,” the letter writers tell the president.

The resignations come after Trump disbanded two business advisory councils earlier this month following a wave of resignations by chief executive officers. Those CEOs similarly condemned Trump’s response to the violence in Charlottesville.

The former infrastructure council members particularly faulted Trump administration efforts to ensure the digital security of election systems.

“You have given insufficient attention to the growing threats to the cybersecurity of the critical systems upon which all Americans depend, including those impacting the systems supporting our democratic election process,” the letter states.

Former Homeland Security Secretary John Kelly continued an Obama administration effort to shift federal resources to shore up the cybersecurity of state and local election infrastructure following Russian efforts to meddle in the 2016 election. Kelly also stuck with an Obama-era decision to label election systems critical infrastructure—an official Homeland Security Department designation that makes it easier to commit federal resources to protecting them.

Trump, however, has repeatedly questioned whether that meddling occurred and if Russia was responsible for it.

Among the resigning council members are three Obama-era officials: White House Chief Data Scientist DJ Patil, Office of Science and Technology Policy Chief of Staff Cristin Dorgelo, and White House Council on Environmental Quality Managing Director Christy Goldfuss, according to Twitter posts.

In total, eight out of 28 NIAC members’ names were removed from the official members web page this week.

Remaining council members met Tuesday and approved a report on cybersecurity vulnerabilities to critical infrastructure. That report warned that U.S. infrastructure is in “a pre-9/11 moment” when it comes to cybersecurity.
http://www.defenseone.com/politics/2...rotest/140535/





Judge Cracks Down on LinkedIn’s Shameful Abuse of Computer Break-In Law
Jamie Williams and Amul Kalia

Good news out of a court in San Francisco: a judge just issued an early ruling against LinkedIn’s abuse of the notorious Computer Fraud and Abuse Act (CFAA) to block a competing service from perfectly legal uses of publicly available data on its website. LinkedIn’s behavior is just the sort of bad development we expected after the United States Court of Appeals for the Ninth Circuit delivered two dangerously expansive interpretations of the CFAA last year—despite our warnings that the decisions would be easily misused.

The CFAA is a criminal law with serious penalties. It was passed in the 1980s with the aim of outlawing computer break-ins. Since then, it has metastasized in some jurisdictions into a tool for companies and websites to enforce their computer use policies, like terms of service (which no one reads) or corporate computer policies. Violating a computer use policy should by no stretch of the imagination count as a felony. But the Ninth Circuit's two decisions—Facebook v. Power Ventures and U.S. v. Nosal—emboldened some companies, almost overnight, to amp up their CFAA threats against competitors.

Luckily, a court in San Francisco has called foul, questioning LinkedIn’s use of the CFAA to block access to public data. The decision is a victory—a step toward our mission of holding the Ninth Circuit to its word and limiting its two dangerous opinions to their “stark” facts. But the LinkedIn case is in only its very early stages, and the earlier bad case law is still on the books.

The U.S. Supreme Court has the opportunity to change that, and we urge it to do so by granting certiorari in U.S. v. Nosal. The Court needs to step in and shut down abuse of this draconian and outdated law.

Background

The CFAA makes it illegal to engage in "unauthorized access" to a computer connected to the Internet, but the statute doesn't tell us what "authorization" or "without authorization" means. This vague language might have seemed innocuous to some back in 1986 when the statute was passed, reportedly in response to the Matthew Broderick movie WarGames. In today's networked world, where we all regularly connect to and use computers owned by others, this pre-Web law is causing serious problems.

If you’ve been following our blog, you’re familiar with Facebook v. Power Ventures and U.S. v. Nosal. Both cases adopted expansive readings of “unauthorized access”—and we warned the Ninth Circuit that they threatened to transform the CFAA into a mechanism for policing Internet use and criminalizing ordinary Internet behavior, like password sharing.

Unfortunately, we were right.

Within weeks after the decisions came out, LinkedIn started sending out cease and desist letters citing the bad case law—specifically Power Ventures—to companies it said were violating its prohibition on scraping. One company LinkedIn targeted was hiQ Labs, which provides analysis of data on LinkedIn users' publicly available profiles. LinkedIn had tolerated hiQ's behavior for years, but after the Power Ventures decision, it apparently saw an opportunity to shut down a competing service. LinkedIn sent hiQ letters warning that any future access of its website, even the public portions, was "without permission and without authorization" and thus a violation of the CFAA.

Scraping publicly available data in violation of a company’s terms of use comes nowhere near Congress’s original intent of punishing those who break into protected computers to steal data or cause damage. But companies like LinkedIn still send out threatening letters with bogus CFAA claims. These letters are all too often effective at scaring recipients into submission given the CFAA’s notoriously severe penalties. Since demand letters are not generally public, we don’t know how many other companies are using the law to threaten competitors and stomp out innovation, but it’s unlikely that LinkedIn is alone in this strategy.
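For context, the "scraping" at the center of the dispute is mechanically simple: an automated client fetches the same public pages a browser would and pulls fields out of the HTML. A minimal sketch follows, with a placeholder URL and generic selectors (hiQ's actual pipeline is not public), assuming the requests and beautifulsoup4 packages are installed.

# Minimal sketch of automated access to a public page. The URL and the
# tags collected are placeholders, not hiQ's actual targets or logic.
import requests
from bs4 import BeautifulSoup

def scrape_public_page(url: str) -> list:
    resp = requests.get(url, headers={"User-Agent": "research-bot/0.1"}, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Collect visible headline-style text as a stand-in for profile fields.
    return [h.get_text(strip=True) for h in soup.find_all(["h1", "h2"])]

if __name__ == "__main__":
    for item in scrape_public_page("https://example.com/"):
        print(item)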

Luckily here, in the face of LinkedIn’s threats, hiQ did something that a lot of other companies don’t have the resources or courage to do: it took LinkedIn’s claims straight to court. It asked the Northern District of California in San Francisco to rule that its automated access of publicly available data was not in violation of the CFAA, despite LinkedIn’s threats. hiQ also asked the court to prohibit LinkedIn from blocking its access to public profiles while the court considered the merits of its request.

hiQ v. Linkedin: Preliminary Injunction Decision

Earlier this month, Judge Edward Chen granted hiQ's request, enjoining LinkedIn from preventing or blocking hiQ's access or use of public profiles, and ordering LinkedIn to withdraw its two cease and desist letters to hiQ. Although Judge Chen didn't directly address the merits of the case, he expressed serious skepticism over LinkedIn's CFAA claims, stating that "the Court is doubtful that the Computer Fraud and Abuse Act may be invoked by LinkedIn to punish hiQ for accessing publicly available data" and that the "broad interpretation of the CFAA invoked by LinkedIn, if adopted, could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago."

Judge Chen's order is reassuring, and hopefully a harbinger of how courts going forward will react to efforts to use the CFAA to limit access to public data. He's not the only judge who feels that companies are taking the CFAA too far. During a Ninth Circuit oral argument in a different case in July, Judge Susan Graber—one of the judges behind the Power Ventures decision—pushed back [at around 33:40] on Oracle's argument that automated scraping was a CFAA violation.

It’s still discouraging to see LinkedIn actively advocate for such a shortsighted expansion of an already overly broad criminal law—an outcome that could land people in jail for innocuous conduct—rather than trying to compete to provide a better service. The CFAA’s exorbitant penalties have already caused great tragedies, including playing a role in the death of our friend, Internet activist Aaron Schwartz. The Internet community should be trying to fix this broken law, not expand it. Opportunistic efforts to expand it are just plain shameful.

That’s why we’re asking the Supreme Court to step in and clarify that using a computer in a way that violates corporate policies, preferences, and expectations—as LinkedIn is claiming against hiQ—cannot be grounds for a CFAA violation. A clear, unequivocal ruling would go a long way to help stop abusive efforts to use the CFAA to limit access to publicly available data or to enforce corporate policies.

We hope the Supreme Court takes up the Nosal case. We should hear from the high court this fall. In the meantime, we hope LinkedIn takes Judge Chen's recent ruling as a sign that it's time to back away from its shameful abuse of the CFAA.
https://www.eff.org/deeplinks/2017/0...uter-break-law





Kaspersky Lab Turns the Tables, Forces “Patent Troll” to Pay Cash to End Case

“Why don’t you pay us $10,000?”
Joe Mullin

In October, Kaspersky Lab found itself in a situation familiar to many tech companies: it was sued by a do-nothing patent holder in East Texas who demanded a cash settlement before it would go away.

The patent-licensing company, Wetro Lan LLC, owned US Patent No. 6,795,918, which essentially claimed an Internet firewall. The patent was filed in 2000 despite the fact that computer network firewalls date to the 1980s. The '918 patent was used in what the Electronic Frontier Foundation called an "outrageous trolling campaign," in which dozens of companies were sued out of Wetro Lan's "headquarters," a Plano office suite that it shared with several other firms that engage in what is pejoratively called "patent-trolling." Wetro Lan's complaints argued that a vast array of Internet routers and switches infringed its patent.

Most companies sued by Wetro Lan apparently reached settlements within a short time, a likely indicator of low-value settlement demands. Not a single one of the cases even reached the claim construction phase. But Kaspersky wouldn't pay up.

As claim construction approached, Kaspersky's lead lawyer Casey Kniser served discovery requests for Wetro Lan's other license agreements. He suspected the amounts were low.

"Their patent was for a firewall that's not user-configurable," Kniser said in an interview with Ars. "They knew ours was configurable. So they started taking weird positions, basically saying, 'Well, you can only configure it a little bit.' I think that would have gotten them in trouble as far as [patent] validity goes."

Wetro Lan's settlement demands kept dropping, down from its initial "amicable" demand of $60,000. Eventually, the demands reached $10,000—an amount that's extremely low in the world of patent litigation. Kniser tried to explain that it didn't matter how far the company dropped the demand. "Kaspersky won't pay these people even if it's a nickel," he said.

Then Kniser took a new tack.

"We said, actually, $10,000 is fine," said Kniser. "Why don't you pay us $10,000?"

After some back-and-forth, Wetro Lan's lawyer agreed to pay Kaspersky $5,000 to end the litigation. Papers were filed Monday, and both sides have dropped their claims.

"From our point of view, we had a winning case," Kniser said. "We had invalidity contentions that were good. For that effort, we didn't want to pay them money. It didn't seem fair that they should be able to just walk away."

In a post on his personal blog detailing the victory over Wetro Lan, founder and CEO Eugene Kaspersky says his company has now defeated five claims from patent assertion entities, including the infamous claims from Lodsys, a much-maligned patent holder that sent demand letters to small app developers. Lodsys dropped its case against Kaspersky right before a trial.

While the company has spent plenty in legal fees, its total payout to so-called "trolls" has been $0. Firms that engage in "trolling" know that companies often simply settle instead of dealing with the costs and pain of court litigation.

"Companies just pay the relatively small sum to the troll to shut it up so they can get back to work on something worthwhile," Kaspersky wrote in his blog post. "However, in the long run, they're on to a loser: once the troll gets a taste of the easy money—it comes back for more again and again."

Wetro Lan attorney Peter Corcoran didn't respond to a request for comment on the case.
https://arstechnica.com/tech-policy/...h-to-end-case/





Fraud Forces WannaCry Hero's Legal Fund To Refund All Donations

The lawyer managing fundraising for Marcus Hutchins' legal defense decided it was easier to refund all donations than figure out which ones were legitimate.
Kevin Collier

Marcus Hutchins, the British cybersecurity expert accused of creating and selling malware that steals banking passwords, arrives at the federal courthouse in Milwaukee for his Aug. 14 arraignment.

The vast majority of money raised to pay for the legal defense of beloved British cybersecurity researcher Marcus Hutchins was donated with stolen or fake credit card numbers, and all donations, including legitimate ones, will be returned, the manager of the defense fund says.

Lawyer Tor Ekeland, who managed the fund, said at least $150,000 of the money collected came from fraudulent sources, and that the prevalence of fraudulent donations effectively voided the entire fundraiser. He said he'd been able to identify only about $4,900 in legitimate donations, but that he couldn't be certain even of those.

“I don’t want to take the risk, so I just refunded everything,” he said.

“One person had five different charges to his account,” Ekeland told BuzzFeed News. “Odd numbers, the kind you’ll glance over when looking at your bill.”

Ekeland said he's still determining what to do with a bitcoin wallet set up to take donations for Hutchins, which has received 96 small donations, worth a total of about $3,400.

Hutchins, 23, became famous in May when he stopped the global WannaCry ransomware attack by obtaining and examining its code, then registering a URL that functioned as a kill switch. Before Hutchins acted, the WannaCry ransomware had frozen tens of thousands of computers, including many in the United Kingdom's National Health Service.

In early August, after Hutchins traveled with friends to Las Vegas for a cybersecurity conference, the FBI arrested him on charges he created the Kronos Trojan that targeted bank accounts. A shocked community of cybersecurity researchers rallied behind him and tried to raise money for experienced attorneys. Ekeland stepped in to host the fundraiser after GoFundMe refused to host it, citing its terms of service.

Hutchins, who pleaded not guilty to all six charges against him on Aug. 14, has retained Brian Klein, a Los Angeles-based trial lawyer, and Marcia Hofmann, an acclaimed expert on US hacking laws, as his attorneys. A judge told him he could not return to the UK before his trial, scheduled for October, but that he could stay in LA in the meantime.

Tarah Wheeler, a friend of Hutchins, previously told BuzzFeed News that she planned to set up a new legal defense fund for him, but did not immediately respond to a request for comment for this story.

Ekeland said he believes he has refunded every donation to the fund, but that anyone who hasn’t heard from him can email info@torekeland.com for their refund.
https://www.buzzfeed.com/kevincollie...nd-refunds-all





711 Million Email Addresses Ensnared in "Largest" Spambot

The spambot has collected millions of email credentials and server login information in order to send spam through "legitimate" servers, defeating many spam filters.
Zack Whittaker

A huge spambot ensnaring 711 million email accounts has been uncovered.

A Paris-based security researcher, who goes by the pseudonymous handle Benkow, discovered an open and accessible web server hosted in the Netherlands, which stores dozens of text files containing a huge batch of email addresses, passwords, and email servers used to send spam.

Those credentials are crucial for the spammer's large-scale malware operation to bypass spam filters by sending email through legitimate email servers.

The spambot, dubbed "Onliner," is used to deliver the Ursnif banking malware into inboxes all over the world. To date, it's resulted in more than 100,000 unique infections across the world, Benkow told ZDNet.

Troy Hunt, who runs breach notification site Have I Been Pwned, said it was a "mind-boggling amount of data."

Hunt, who analyzed the data and details his findings in a blog post, called it the "largest" batch of data to enter the breach notification site in its history.

Benkow, who also wrote up his findings in a blog post, has spent months digging into the Ursnif malware, a data-stealing trojan used to grab personal information such as login details, passwords, and credit card data, researchers have said. Typically, a spammer would send a "dropper" file as a normal-looking email attachment. When the attachment is opened, the malware downloads from a server and infects the machine.

But while spamming is still an effective malware delivery method, email filters are getting smarter and many domains found to have sent spam have been blacklisted.

The spammer's Onliner campaign, however, uses a sophisticated setup to bypass those spam filters.

"To send spam, the attacker needs a huge list of SMTP credentials," said Benkow in his blog post. Those credentials authenticate the spammer in order to send what appears to be legitimate email.

"The more SMTP servers he can find, the more he can distribute the campaign," he said.

Those credentials, he explained, have been scraped and collated from other data breaches, such as the LinkedIn hack and the Badoo hack, as well as other unknown sources. The list has about 80 million accounts, he said, with each line containing the email address and password, along with the SMTP server and the port used to send the email. The spammer tests each entry by connecting to the server to ensure that the credentials are valid and that spam can be sent. The accounts that don't work are ignored.
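At the protocol level, each of those checks is just an SMTP login attempt. Here is a minimal sketch of a single check, the sort a mail administrator might run against their own server to see whether a leaked credential still authenticates; the host, port, and credentials are placeholders, not values from the dump.

# Sketch of one SMTP credential check using only the standard library.
# All values below are placeholders for illustration.
import smtplib

def smtp_login_works(host: str, port: int, user: str, password: str) -> bool:
    try:
        with smtplib.SMTP(host, port, timeout=10) as server:
            server.ehlo()
            if server.has_extn("starttls"):
                server.starttls()   # upgrade to an encrypted session if offered
                server.ehlo()
            server.login(user, password)
            return True
    except (smtplib.SMTPException, OSError):
        return False

print(smtp_login_works("mail.example.com", 587, "user@example.com", "correct-horse"))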

Those 80 million accounts and their mail servers are then used to send so-called "fingerprinting" emails, designed to scope out the victim, to the remaining 630 million targets.

These emails appear innocuous enough, but they contain a hidden pixel-sized image. When the email is opened, the pixel image sends back the recipient's IP address and user-agent information, which identifies the type of computer, operating system, and other device details. That helps the attacker know who to target with the Ursnif malware, specifically Windows computers, rather than sending malicious files to iPhone or Android users, which aren't affected by the malware.
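The server side of such a "fingerprinting" pixel can be very small. The sketch below is illustrative only and assumes Flask; it is not the Onliner campaign's actual code. The email body would embed something like <img src="https://tracker.example.com/p.gif?id=abc123" width="1" height="1">, and the server records who fetched it.

# Illustrative tracking-pixel endpoint (hypothetical; assumes Flask is installed).
from flask import Flask, request, send_file
import io, logging

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

# A minimal 1x1 transparent GIF, served as the "hidden" image.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
         b"\x02\x02D\x01\x00;")

@app.route("/p.gif")
def pixel():
    # Loading the image leaks the viewer's address and client details.
    logging.info("id=%s ip=%s ua=%s",
                 request.args.get("id"),
                 request.remote_addr,
                 request.headers.get("User-Agent"))
    return send_file(io.BytesIO(PIXEL), mimetype="image/gif")

if __name__ == "__main__":
    app.run(port=8080)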

Benkow said that narrowing down of would-be victims is key to ensuring the success of the malware campaign.

"There is a risk that the campaign can become too noisy, like Dridex, for example," he told ZDNet. "If your campaign is too noisy, law enforcement will look for you."

Benkow explained that the attacker can send out a million "fingerprinting" spam emails and get a fraction of emails back, but still have enough responses to send out a second batch of a few thousand targeted emails with malware.

Those emails often come days or even weeks later, masquerading as invoices from delivery services, hotels, or insurance companies, with a malicious JavaScript file attached.

"It's pretty smart," Benkow admitted.

According to Hunt, who processed the data, 27 percent of email addresses in the data are already in Have I Been Pwned. But he noted a caveat: Because the data has been scraped from the web, some of the data is malformed. He said that while the 711 million figure is "technically accurate," the number of humans involved will be somewhat less.

Hunt has now made the data searchable in Have I Been Pwned.
http://www.zdnet.com/article/onliner...aign-millions/





Tech Firms Team Up to Take Down ‘WireX’ Android DDoS Botnet
Brian Krebs

A half dozen technology and security companies — some of them competitors — issued the exact same press release today. This unusual level of cross-industry collaboration caps a successful effort to dismantle ‘WireX,’ an extraordinary new crime machine comprising tens of thousands of hacked Android mobile devices that was used this month to launch a series of massive cyber attacks.

Experts involved in the takedown warn that WireX marks the emergence of a new class of attack tools that are more challenging to defend against and thus require broader industry cooperation to defeat.

News of WireX’s emergence first surfaced August 2, 2017, when a modest collection of hacked Android devices was first spotted conducting some fairly small online attacks. Less than two weeks later, however, the number of infected Android devices enslaved by WireX had ballooned to the tens of thousands.

More worrisome was that those in control of the botnet were now wielding it to take down several large websites in the hospitality industry — pelting the targeted sites with so much junk traffic that the sites were no longer able to accommodate legitimate visitors.

Experts tracking the attacks soon zeroed in on the malware that powers WireX: Approximately 300 different mobile apps scattered across Google‘s Play store that were mimicking seemingly innocuous programs, including video players, ringtones or simple tools such as file managers.

“We identified approximately 300 apps associated with the issue, blocked them from the Play Store, and we’re in the process of removing them from all affected devices,” Google said in a written statement. “The researchers’ findings, combined with our own analysis, have enabled us to better protect Android users, everywhere.”

Perhaps to avoid raising suspicion, the tainted Play store applications all performed their basic stated functions. But those apps also bundled a small program that would launch quietly in the background and cause the infected mobile device to surreptitiously connect to an Internet server used by the malware’s creators to control the entire network of hacked devices. From there, the infected mobile device would await commands from the control server regarding which Websites to attack and how.

Experts involved in the takedown say it’s not clear exactly how many Android devices may have been infected with WireX, in part because only a fraction of the overall infected systems were able to attack a target at any given time. Devices that were powered off would not attack, but those that were turned on with the device’s screen locked could still carry on attacks in the background, they found.

“I know in the cases where we pulled data out of our platform for the people being targeted we saw 130,000 to 160,000 (unique Internet addresses) involved in the attack,” said Chad Seaman, a senior engineer at Akamai, a company that specializes in helping firms weather large DDoS attacks (Akamai protected KrebsOnSecurity from hundreds of attacks prior to the large Mirai assault last year).

The identical press release that Akamai and other firms involved in the WireX takedown agreed to publish says the botnet infected a minimum of 70,000 Android systems, but Seaman says that figure is conservative.

“Seventy thousand was a safe bet because this botnet makes it so that if you’re driving down the highway and your phone is busy attacking some website, there’s a chance your device could show up in the attack logs with three or four or even five different Internet addresses,” Seaman said in an interview with KrebsOnSecurity. “We saw attacks coming from infected devices in over 100 countries. It was coming from everywhere.”

BUILDING ON MIRAI

Security experts from Akamai and other companies that participated in the WireX takedown say the basis for their collaboration was forged in the monstrous and unprecedented distributed denial-of-service (DDoS) attacks launched last year by Mirai, a malware strain that seeks out poorly-secured “Internet of things” (IoT) devices such as security cameras, digital video recorders and Internet routers.

The first and largest of the Mirai botnets was used in a giant attack last September that knocked this Web site offline for several days. Just a few days after that — when the source code that powers Mirai was published online for all the world to see and use — dozens of copycat Mirai botnets emerged. Several of those botnets were used to conduct massive DDoS attacks against a variety of targets, leading to widespread Internet outages for many top Internet destinations.

Allison Nixon, director of security research at New York City-based security firm Flashpoint, said the Mirai attacks were a wake-up call for the security industry and a rallying cry for more collaboration.

“When those really large Mirai DDoS botnets started showing up and taking down massive pieces of Internet infrastructure, that caused massive interruptions in service for people that normally don’t deal with DDoS attacks,” Nixon said. “It sparked a lot of collaboration. Different players in the industry started to take notice, and a bunch of us realized that we needed to deal with this thing because if we didn’t it would just keep getting bigger and rampaging around.”

Mirai was notable not only for the unprecedented size of the attacks it could launch but also for its ability to spread rapidly to new machines. But for all its sheer firepower, Mirai is not a particularly sophisticated attack platform. Well, not in comparison to WireX, that is.

CLICK-FRAUD ORIGINS

According to the group’s research, the WireX botnet likely began its existence as a distributed method for conducting “click fraud,” a pernicious form of online advertising fraud that will cost publishers and businesses an estimated $16 billion this year, according to recent estimates. Multiple antivirus tools currently detect the WireX malware as a known click fraud malware variant.

The researchers believe that at some point the click-fraud botnet was repurposed to conduct DDoS attacks. While DDoS botnets powered by Android devices are extremely unusual (if not unprecedented at this scale), it is the botnet’s ability to generate what appears to be regular Internet traffic from mobile browsers that strikes fear in the heart of experts who specialize in defending companies from large-scale DDoS attacks.

DDoS defenders often rely on developing custom “filters” or “signatures” that can help them separate DDoS attack traffic from legitimate Web browser traffic destined for a targeted site. But experts say WireX has the capability to make that process much harder.

That’s because WireX includes its own so-called “headless” Web browser that can do everything a real, user-driven browser can do, except without actually displaying the browser to the user of the infected system.

Also, WireX can encrypt the attack traffic using SSL — the same technology that typically protects the security of a browser session when an Android user visits a Web site which requires the submission of sensitive data. This adds a layer of obfuscation to the attack traffic, because the defender needs to decrypt incoming data packets before being able to tell whether the traffic inside matches a malicious attack traffic signature.

Translation: It can be far more difficult and time-consuming than usual for defenders to tell WireX traffic apart from clicks generated by legitimate Internet users trying to browse to a targeted site.

“These are pretty miserable and painful attacks to mitigate, and it was these kinds of advanced functionalities that made this threat stick out like a sore thumb,” Akamai’s Seaman said.

NOWHERE TO HIDE

Traditionally, many companies that found themselves on the receiving end of a large DDoS attack sought to conceal this fact from the public — perhaps out of fear that customers or users might conclude the attack succeeded because of some security failure on the part of the victim.

But the stigma associated with being hit with a large DDoS is starting to fade, Flashpoint’s Nixon said, if for no other reason than it is becoming far more difficult for victims to conceal such attacks from public knowledge.

“Many companies, including Flashpoint, have built out different capabilities in order to see when a third party is being DDoS’d,” Nixon said. “Even though I work at a company that doesn’t do DDoS mitigation, we can still get visibility when a third-party is getting attacked. Also, network operators and ISPs have a strong interest in not having their networks abused for DDoS, and many of them have built capabilities to know when their networks are passing DDoS traffic.”

Just as multiple nation states now employ a variety of techniques and technologies to keep tabs on nation states that might conduct underground tests of highly destructive nuclear weapons, many more organizations are now actively looking for signs of large-scale DDoS attacks, Seaman added.

“The people operating those satellites and seismograph sensors to detect nuclear [detonations] can tell you how big it was and maybe what kind of bomb it was, but they probably won’t be able to tell you right away who launched it,” he said. “It’s only when we take many of these reports together in the aggregate that we can get a much better sense of what’s really going on. It’s a good example of none of us being as smart as all of us.”

According to the WireX industry consortium, the smartest step that organizations can take when under a DDoS attack is to talk to their security vendor(s) and make it clear that they are open to sharing detailed metrics related to the attack.

“With this information, those of us who are empowered to dismantle these schemes can learn much more about them than would otherwise be possible,” the report notes. “There is no shame in asking for help. Not only is there no shame, but in most cases it is impossible to hide the fact that you are under a DDoS attack. A number of research efforts have the ability to detect the existence of DDoS attacks happening globally against third parties no matter how much those parties want to keep the issue quiet. There are few benefits to being secretive and numerous benefits to being forthcoming.”
https://krebsonsecurity.com/2017/08/...d-ddos-botnet/





On Internet Privacy, Be Very Afraid

‘Surveillance is the business model of the internet,’ Berkman and Belfer fellow says
Liz Mineo

In the internet era, consumers seem increasingly resigned to giving up fundamental aspects of their privacy for convenience in using their phones and computers, and have grudgingly accepted that being monitored by corporations and even governments is just a fact of modern life.

In fact, internet users in the United States have fewer privacy protections than those in other countries. In April, Congress voted to allow internet service providers to collect and sell their customers’ browsing data. By contrast, the European Union hit Google this summer with a $2.7 billion antitrust fine.

To assess the internet landscape, the Gazette interviewed cybersecurity expert Bruce Schneier, a fellow with the Berkman Klein Center for Internet & Society and the Belfer Center for Science and International Affairs at Harvard Kennedy School. Schneier talked about government and corporate surveillance, and about what concerned users can do to protect their privacy.

GAZETTE: After whistleblower Edward Snowden’s revelations concerning the National Security Agency’s (NSA) mass surveillance operation in 2013, how much has the government landscape in this field changed?

SCHNEIER: Snowden’s revelations made people aware of what was happening, but little changed as a result. The USA Freedom Act resulted in some minor changes in one particular government data-collection program. The NSA’s data collection hasn’t changed; the laws limiting what the NSA can do haven’t changed; the technology that permits them to do it hasn’t changed. It’s pretty much the same.

GAZETTE: Should consumers be alarmed by this?

SCHNEIER: People should be alarmed, both as consumers and as citizens. But today, what we care about is very dependent on what is in the news at the moment, and right now surveillance is not in the news. It was not an issue in the 2016 election, and by and large isn’t something that legislators are willing to make a stand on. Snowden told his story, Congress passed a new law in response, and people moved on.

GAZETTE: What about corporate surveillance? How pervasive is it?

SCHNEIER: Surveillance is the business model of the internet. Everyone is under constant surveillance by many companies, ranging from social networks like Facebook to cellphone providers. This data is collected, compiled, analyzed, and used to try to sell us stuff. Personalized advertising is how these companies make money, and is why so much of the internet is free to users. We’re the product, not the customer.

GAZETTE: Should they be stopped?

SCHNEIER: That’s a philosophical question. Personally, I think that in many cases the answer is yes. It’s a question of how much manipulation we allow in our society. Right now, the answer is basically anything goes. It wasn’t always this way. In the 1970s, Congress passed a law to make a particular form of subliminal advertising illegal because it was believed to be morally wrong. That advertising technique is child’s play compared to the kind of personalized manipulation that companies do today. The legal question is whether this kind of cyber-manipulation is an unfair and deceptive business practice, and, if so, can the Federal Trade Commission step in and prohibit a lot of these practices.

GAZETTE: Why doesn’t the commission do that? Why is this intrusion happening, and why does nobody do anything about it?

SCHNEIER: We’re living in a world of low government effectiveness, and there the prevailing neo-liberal idea is that companies should be free to do what they want. Our system is optimized for companies that do everything that is legal to maximize profits, with little nod to morality. Shoshana Zuboff, professor at the Harvard Business School, invented the term “surveillance capitalism” to describe what’s happening. It’s very profitable, and it feeds off the natural property of computers to produce data about what they are doing. For example, cellphones need to know where everyone is so they can deliver phone calls. As a result, they are ubiquitous surveillance devices beyond the wildest dreams of Cold War East Germany.

GAZETTE: But Google and Facebook face more restrictions in Europe than in the United States. Why is that?

SCHNEIER: Europe has more stringent privacy regulations than the United States. In general, Americans tend to mistrust government and trust corporations. Europeans tend to trust government and mistrust corporations. The result is that there are more controls over government surveillance in the U.S. than in Europe. On the other hand, Europe constrains its corporations to a much greater degree than the U.S. does. U.S. law has a hands-off way of treating internet companies. Computerized systems, for example, are exempt from many normal product-liability laws. This was originally done out of the fear of stifling innovation.

GAZETTE: It seems that U.S. customers are resigned to the idea of giving up their privacy in exchange for using Google and Facebook for free. What’s your view on this?

SCHNEIER: The survey data is mixed. Consumers are concerned about their privacy and don’t like companies knowing their intimate secrets. But they feel powerless and are often resigned to the privacy invasions because they don’t have any real choice. People need to own credit cards, carry cellphones, and have email addresses and social media accounts. That’s what it takes to be a fully functioning human being in the early 21st century. This is why we need the government to step in.

GAZETTE: You’re one of the most well-known cybersecurity experts in the world. What do you do to protect your privacy online?

SCHNEIER: I don’t have any secret techniques. I do the same things everyone else does, and I make the same tradeoffs that everybody else does. I bank online. I shop online. I carry a cellphone, and it’s always turned on. I use credit cards and have airline frequent flier accounts. Perhaps the weirdest thing about my internet behavior is that I’m not on any social media platforms. That might make me a freak, but honestly it’s good for my productivity. In general, security experts aren’t paranoid; we just have a better understanding of the trade-offs we’re doing. Like everybody else, we regularly give up privacy for convenience. We just do it knowingly and consciously.

GAZETTE: What else do you do to protect your privacy online? Do you use encryption for your email?

SCHNEIER: I have come to the conclusion that email is fundamentally insecurable. If I want to have a secure online conversation, I use an encrypted chat application like Signal. By and large, email security is out of our control. For example, I don’t use Gmail because I don’t want Google having all my email. But last time I checked, Google has half of my email because you all use Gmail.

GAZETTE: What does Google know about you?

SCHNEIER: Google’s not saying because they know it would freak people out. But think about it, Google knows quite a lot about all of us. No one ever lies to a search engine. I used to say that Google knows more about me than my wife does, but that doesn’t go far enough. Google knows me even better, because Google has perfect memory in a way that people don’t.

GAZETTE: Is Google the “Big Brother?”

SCHNEIER: “Big Brother” in the Orwellian sense meant big government. That’s not Google, and that’s not even the NSA. What we have is many “Little Brothers”: Google, Facebook, Verizon, etc. They have enormous amounts of data on everybody, and they want to monetize it. They don’t want to respect your privacy.

GAZETTE: In your book “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World,” you recommend a few strategies for people to protect their privacy online. Which one is the most effective?

SCHNEIER: Unfortunately, we live in a world where most of our data is out of our control. It’s in the cloud, stored by companies that may not have our best interests at heart. So, while there are technical strategies people can employ to protect their privacy, they’re mostly around the edges. The best recommendation I have for people is to get involved in the political process. The best thing we can do as consumers and citizens is to make this a political issue. Force our legislators to change the rules.

Opting out doesn’t work. It’s nonsense to tell people not to carry a credit card or not to have an email address. And “buyer beware” is putting too much onus on the individual. People don’t test their food for pathogens or their airlines for safety. The government does it. But the government has failed in protecting consumers from internet companies and social media giants. But this will come around. The only effective way to control big corporations is through big government. My hope is that technologists also get involved in the political process — in government, in think-tanks, universities, and so on. That’s where the real change will happen. I tend to be short-term pessimistic and long-term optimistic. I don’t think this will do society in. This is not the first time we’ve seen technological changes that threaten to undermine society, and it won’t be the last.
https://news.harvard.edu/gazette/sto...lyst-suggests/





How to Become a Full-Blown Privacy Fanatic With Purism's Librem Laptop
Dell Cameron

Concerns over online privacy and security are increasingly changing the way consumers spend their money and behave online. According to a Pew Research study conducted one year ago, 86 percent of internet users have now taken at least some steps to conceal their digital footprints, though many say they would like to do more, if only they knew how.

Librem 13
Price $1400

What is it?
A laptop that puts privacy before convenience.

Like
It maintains my privacy whether I want it to or not.

No Like
Trackpad and keyboard could be much much better.

If you want to go beyond merely using browser extensions intended to block privacy-killing trackers and advertisements, a laptop manufacturer you’ve likely never heard of has built a business around a full-on defense of privacy. It employs its own custom operating system designed for one purpose: to prevent the laptop’s owner from inadvertently relinquishing control over their most sensitive and personal data.

The $1399 Librem 13 manufactured by California-based Purism is a surveillance paranoiac’s fantasy. While industry leaders—Dell, Lenovo, and HP, among others—construct their machines based on factors such as the price and availability of parts, with the software powering their operating systems geared toward usability, the Librem is instead built with the user’s security and privacy foremost in mind.

If you decide to use this computer, you’re basically saying that privacy is no longer an accessory, but rather a lifestyle that requires a prodigious shift in every facet of your online behavior. For most, the concept of switching up your routine so dramatically will be far too intimidating. Buyer’s remorse will come for those who, having taken the leap, suddenly decide the evolution is too painful. They’ll gladly surrender their privacy once again.

For a select few, however, there’s no price too steep to pay in the quest for privacy. Again, security demands sacrifice, and so, with the Librem, the first thing you’ll be asked to forfeit is the familiarity of your preferred operating system. In place of macOS or Windows, the Librem leverages a Debian GNU/Linux distribution to create PureOS, a simple and unique Linux-based system designed by Purism’s own team of specialized Debian developers.

Now, this is normally the part where tech reporters feel duty-bound to warn you about what a god-awful “chore” it is to pick up Linux. I’m going to pass. If you have any useful skills whatsoever beyond tying your own shoes, then I promise you already possess the faculties required to conquer Linux. The biggest challenge will be taking an interest in mastering something new, maybe reading the first 25-30 pages of a For Dummies book, and avoiding the urge to crawl back into your boring comfort zone. You can learn Linux, it will not take forever, and when you do you will be grateful you did.

There are significant advantages to using a Linux-based operating system—in this case, Debian—not the least of which is the enhanced privacy you’ll enjoy from a system devoid of rancid bloatware. Linux is infinitely more secure than Windows. Its codebase is maintained by the umpteen people who actually use it, and when a glitch does arise, it gets fixed fast. And while, yes, it’s not as snappy to configure as macOS, you’ll eventually come to enjoy not existing within the confines of Apple’s bullshit walled garden.

Though with Purism there’s still a wall. To protect your privacy, it won’t let you install some of your favorite (data-stealing) apps from its app store, which is simply called “Software.” PureOS includes, and only allows users to install and run, software that meets strict requirements with regard to privacy protection. All of the software it makes available is both free and open source (FOSS), meaning it can be easily audited by anyone to weed out nefarious code. There’s a tool for virtually every task you do, from word processing to image editing: LibreOffice, basically a free version of Microsoft Office; Kodi in place of Windows Media Player or iTunes; GNOME Mail and Thunderbird for email.

But you bought the Librem! Therefore you are free to do with it as you wish. So if you dislike the draconian nature of the app store, there are plenty of workarounds for installing apps offered by companies that are more than eager to compromise your privacy. Purism simply isn’t in the business of participating in or making convenient your self-destructive behavior.

PureOS does, however, offer some significant privacy protections even if, say, you decide to install Chrome instead of using PureBrowser, the Librem’s built-in Firefox-based browser, which comes packed with privacy add-ons such as HTTPS Everywhere and the anti-tracking extension uBlock Origin.

Various app-isolating features (such as Flatpak) ensure that any insecure applications can’t read other areas of the system. For example, nothing that pops up in Chrome can access your password manager. Still, to get the most bang for your privacy-buck, you should endeavor to use only the free and open-source apps downloadable via the Software storefront.

My personal favorite feature of the Librem—which absolutely should, but does not, come standard in all new laptops—is a physical kill switch above the keyboard that deactivates the webcam and microphone. (Say goodbye to that grody piece of masking tape you’ve been using.) Purism claims this mechanism will make your webcam virtually “unhackable.” The killswitch, which I did not personally probe with a power supply tester, severs all power to both the webcam and the internal mic. In other words, there is no battery backup for malware to take advantage of to activate the cam or mic when the switch is disengaged. Flip it and people shouldn’t be able to see or hear you. Period.

A second and equally useful kill switch deactivates both the Librem’s Bluetooth and wi-fi functions, though I admit it would be more useful if these were separate switches. Both the webcam/mic switch and the wi-fi/Bluetooth switch appeared to work as promised, and their utility is easy to appreciate in an age of effortless wireless intrusion.

Another cool feature, frequently touted by Purism, is that the Librem’s firmware cannot “phone home.” This means, for example, that the Qualcomm Atheros chipset built into the motherboard uses open-source wireless drivers, so you can be sure it isn’t running some mystery code that’s slipping your wi-fi passwords or other sensitive data into RAM storage—or, worse, transmitting them to some malicious third party.

The Librem also comes pre-loaded with the open-source Coreboot instead of proprietary closed-source BIOS firmware. This change comes about after a years-long controversy which led some critics to advise avoiding previous versions of the Librem altogether. Earlier models of the Librem shipped using an AMI UEFI BIOS, which relies on proprietary, closed-source code—a fact that seemed to fly in the face of Purism’s promise that all of its components would be “free according to the strictest of guidelines set forth by the Free Software Foundation’s Free Software definition.”

Guts-wise, the Librem 13 isn’t particularly special. It’s got a 6th-generation Intel i5 processor that’s about a year too old, and comes standard with 4GB of RAM and a 120GB SSD. The processor, RAM, and storage can all be upgraded, but the matte-finish 1080p display cannot. The only real downside to the Librem, aside from some outdated guts, is the usability of the hardware itself.

The trackpad sucks. By that I mean it’s fucking awful. I’d say I’m just spoiled because Apple’s Magic Trackpad 2, which I’ve been using for years, is too perfect, but the way the Elantech trackpad the Librem sports handles is downright offensive. (Sadly, the Elantech is the best GNU/Linux trackpad available.) The back-lit keyboard is also nothing to write home about, though I had no specific complaints save one: THERE’S NO INDICATOR LIGHT ON THE CAPS LOCK KEY.

With its high $1,400 price tag and privacy focus, the Librem is a device with a very thin market. Still, if you’re not a security expert yourself and don’t feel confident about securing your own device from the hackers, spies, and shady corporations tracking your every click, then the Librem is an option—albeit an extreme one—that’s worth consideration.

README

• From its integrated circuits to the application layer, the Librem is manufactured with user privacy as the utmost priority.
• You’re paying extra for privacy-centric components.
• The battery life is good enough. The trackpad is atrocious. The keyboard is nothing special.
• A physical kill switch severing all power to the webcam and microphone is pure genius.
• If you’re not an experienced Linux user, you should approach the adoption of PureOS as if it were a new hobby you’re hoping to master—there’s a bit of a learning curve.

SPEC DUMP

13.3-inch 1920x1080 matte display • roughly 7-9 hours battery life • dual-core Intel Core i5-6200U Skylake CPU (4 threads) • 2.8GHz max CPU frequency • up to 2TB storage • Intel HD Graphics 520 • 16GB max memory • DDR4 at 2133MHz • 720p 1.0-megapixel webcam • Atheros 802.11n wi-fi with two antennas • two internal speakers • 1 audio jack (mic/line out) • 1 HDMI port for external monitor (4K capable) • 2-in-1 SD/MMC card reader • 325 x 219 x 18mm • 1.4kg (3.3lbs)


https://gizmodo.com/how-to-become-a-full-blown-privacy-fanatic-with-purisms-1798505852






Man Who Refused to Decrypt Hard Drives Still in Prison After Two Years
Catalin Cimpanu

Francis Rawls, a former Philadelphia cop, will remain in jail for refusing to decrypt a hard drive federal investigators found in his home two years ago during a child abuse investigation.

A judge ordered the man to prison almost two years ago after the suspect claimed he forgot the password of an encrypted Apple FileVault system investigators found attached to his computer while performing a house search.

Investigators said content stored on the encrypted hard drive matched file hashes for known child pornography content.

Rawls sent to prison in 2015

Authorities tried to make Rawls hand over the hard drive's password, but he refused to comply. A federal judge found the man in contempt of court and ordered him jailed indefinitely, until he was willing to cooperate.

Rawls said later he forgot the password and even entered three incorrect passwords during previous meetings with investigators.

The suspect appealed the indefinite prison sentence twice, but both appeals failed. His lawyers tried to argue that holding him breaches his Fifth Amendment right to not incriminate himself, but appeal judges did not see it that way.

Judges pointed out that the Fifth Amendment only applies to witnesses and that the prosecutors didn't call him as a witness but only made a request for him to unlock his device, hence Fifth Amendment protections did not apply.

Rawls files appeal with the Supreme Court

Rawls' team has now filed an appeal with the US Supreme Court on the same grounds. His lawyers also filed a request to have Rawls released while that appeal is pending, as he has already been held for more than 18 months, the standard limit for confinement for contempt of court.

A judge declined the request, saying that Rawls was not being held under the standard contempt statute (28 U.S.C. § 1826) but under the All Writs Act (28 U.S.C. § 1651), and hence can be detained indefinitely.

This ancient piece of legislation dictates that US citizens must aid any law enforcement investigation. The prosecution used this legal trickery to avoid calling Rawls as a witness. It is also the same piece of legislation the FBI used against Apple when it tried to force the company to unlock the phone of the San Bernardino mass shooter.

The government also said that Rawls doesn't have to provide them with his password anymore, as they only need him to perform the act of unlocking the hard drive.
https://www.bleepingcomputer.com/new...ter-two-years/





Intel ME Controller Chip has Secret Kill Switch

Researchers find undocumented accommodation for government customers
Thomas Claburn

Security researchers at Moscow-based Positive Technologies have identified an undocumented configuration setting that disables Intel Management Engine 11, a CPU control mechanism that has been described as a security risk.

Intel's ME consists of a microcontroller that works with the Platform Controller Hub chip, in conjunction with integrated peripherals. It handles much of the data travelling between the processor and external devices, and thus has access to most of the data on the host computer.

If compromised, it becomes a backdoor, giving an attacker control over the affected device.

That possibility set off alarms in May, with the disclosure of a vulnerability in Intel's Active Management Technology, a firmware application that runs on the Intel ME.

The revelation prompted calls for a way to disable the poorly understood hardware. At the time, the Electronic Frontier Foundation called it a security hazard. The tech advocacy group demanded a way to disable "the undocumented master controller inside our Intel chips" and details about how the technology works.

An unofficial workaround called ME Cleaner can partially hobble the technology, but cannot fully eliminate it. "Intel ME is an irremovable environment with an obscure signed proprietary firmware, with full network and memory access, which poses a serious security threat," the project explains.

On Monday, Positive Technologies researchers Dmitry Sklyarov, Mark Ermolov, and Maxim Goryachy said they had found a way to turn off the Intel ME by setting the undocumented HAP bit to 1 in a configuration file.

HAP stands for High Assurance Platform. It's an IT security framework developed by the US National Security Agency, an organization that might want a way to disable a feature on Intel chips that presents a security risk.
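
For illustration only, here is a minimal sketch of my own (in Python, not from Positive Technologies) of the general mechanics: copy the dumped firmware image and set a single bit in the copy before reflashing. The OFFSET and BIT values below are placeholders, not the real HAP location, which is undocumented, varies between ME versions, and in practice is located and set by tools such as ME Cleaner rather than by hand.

# Hypothetical sketch: set one bit in a copy of a dumped firmware image.
# OFFSET and BIT are placeholders -- NOT the actual HAP bit location,
# which is undocumented and differs between ME versions.
OFFSET = 0x102   # placeholder byte offset
BIT = 0          # placeholder bit position within that byte

def set_bit(src_path, dst_path, offset, bit):
    """Write a copy of src_path to dst_path with one bit forced to 1."""
    with open(src_path, "rb") as f:
        image = bytearray(f.read())
    image[offset] |= 1 << bit        # turn on the target bit, leave everything else alone
    with open(dst_path, "wb") as f:
        f.write(image)

set_bit("firmware_dump.bin", "firmware_hap.bin", OFFSET, BIT)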

The Register asked Intel about this and received the same emailed statement that was provided to Positive Technologies.

"In response to requests from customers with specialized requirements we sometimes explore the modification or disabling of certain features," Intel's spokesperson said. "In this case, the modifications were made at the request of equipment manufacturers in support of their customer's evaluation of the US government's 'High Assurance Platform' program. These modifications underwent a limited validation cycle and are not an officially supported configuration."

Positive Technologies in its blog post acknowledged that it would be typical for government agencies to want to reduce the possibility of unauthorized access. It noted that HAP's effect on Boot Guard, Intel's boot process verification system, remains unknown, though it hopes to answer that question soon.
https://www.theregister.co.uk/2017/0...n_be_disabled/





Quebec Man Fights Back after Dealer Remotely Disables Car Over $200 Fee

Consumer Protection Bureau, advocate say dealership's actions may have broken the law
Stephen Smith, Claude Rivest

A car dealership in Sherbrooke, Que., may have broken the law when it used a GPS device to disable the car of a client who was refusing to pay an extra $200 fee, say consumer advocates consulted by CBC News.

Bury, Que., resident Daniel Lallier signed a four-year lease for a Kia Forte LX back in May from Kia Sherbrooke. Two months later, the 20-year-old's grandmother offered to buy the car outright when he lost his job and couldn't make his weekly payments.

After settling the balance and paying a $300 penalty, Lallier said, the dealership told him he would have to pay an additional $200 to remove a GPS tracker that had been installed on the car.

The device allows dealers to remotely immobilize a car in case lease payments are in arrears.

Lallier said there was no mention of the removal fee in the contract and he disputed having to pay it.

"I just find it absurd that over $13,000 was spent on this vehicle and we still have to pay $200 more to have their device removed," he told CBC.

Notified by text message

After Lallier refused to pay the fee, a mechanic notified him by text message that his car was being remotely disabled until the dealership recovered the device and $200 fee.

"I went outside and tested my car, and it wouldn't work at all. It wouldn't start period, and I got angry," Lallier said.

He said the text message was the only notice he received from the dealership that his car would be deactivated.

Lallier had just started a new job and needed the car to get to work.

"I let my mom deal with it because I would have blown a head gasket."

His mother was able to reach an agreement with the dealership. Lallier said a salesperson reactivated the car with his smartphone.

On Monday, the dealership contacted Lallier and removed the device without charge. CBC's calls to the dealer were not returned.

Disabling car you no longer own 'clearly illegal'

Quebec's Consumer Protection Bureau said it is illegal to charge fees not included in a signed contract.

The office also said a lender has to give a borrower 30 days' notice before taking such action. Immobilizing a car could amount to a form of intimidation, which is also prohibited under consumer protection laws, they said.

George Iny, president of the Automobile Protection Association, said Quebec has strict rules on how GPS immobilizers can be used.

In this case, he said, the dealership's actions were against the law because it no longer owned the car.

"To turn off somebody's vehicle after he had already paid off the loan is clearly illegal … it's not your car anymore," Iny said.

He said GPS immobilizers only benefit lenders, and that making consumers pay fees for their installation, maintenance and removal is unfair.

Iny said Lallier's case is a good example of the conditions faced by borrowers with bad credit.

"Immobilizers are most often seen in cases of sub-prime borrowers with questionable credit," he said. "The devices are very effective at keeping people on time with their payments."

Bad credit, needed car

Lallier admitted to having bad credit and said he was just happy to get approved for the lease.

As a result, he didn't ask too many questions about the contract or the GPS tracker.

"I knew I had to deal with whatever I could get," he said.

Residents of Bury, a small, rural town about 45 kilometres from Sherbrooke, need a car for their daily activities, Lallier said.

"It's very frustrating — anything that happens to my car, it's my lifeline. If I don't have a car, I can't go out with friends, I can't go anywhere, essentially."

"We only got one depanneur here and that's it… If I don't have my car, I'd be screwed."
http://www.cbc.ca/news/canada/montre...-fee-1.4265588





Will Supervolcanoes Help Power Our Future?

Vast new deposits of lithium could change the global politics of battery production—if we can get at them
Nathan Hurst

There’s no doubt that in coming years, we’re going to need a lot of lithium. The growing market of electric automobiles, plus new household energy storage and large-scale battery farms, and the current lack of any technology better for storage than lithium ion batteries, puts the future of energy storage in the hands of just a few places around the world where the alkali metal is extracted.

Earlier this decade, researchers from the University of Michigan projected the growth in demand for lithium up until the year 2100. It’s a lot—likely somewhere between 12 million and 20 million metric tons—but those same scientists, as well as others at the USGS and elsewhere, have estimated that global deposits well exceed those numbers. The issue is not the presence of lithium on Earth, then, but being able to get at it. Most of what we use currently comes from just a few sources, mostly in Chile and Australia, which together produce 75 percent of the lithium the world uses, followed by Argentina and China, according to USGS research from 2016.

Looking to solve this problem, Stanford geologists went in search of new sources of the metal. They knew it originates in volcanic rock, and so they went to the biggest volcanoes they could find: supervolcanoes, which appear not as a mountain with a hole in it but as a big, wide, cauldron-shaped caldera where a large-scale eruption happened millions of years ago. There, they saw high concentrations of lithium contained in a type of volcanic clay called hectorite. Geologists already knew generally that lithium came from volcanic rocks, but the team from Stanford was able to measure it in unexpected locations and quantities, opening up a wider range of potential sites.

“It turns out you don’t really need super high concentrations of lithium in the magma,” says Gail Mahood, a Stanford geology professor and author of the study, in Nature Communications, about the discovery. “Many of the volcanoes that erupted in the western U.S. would have enough lithium to produce an economic deposit, as long as the eruption is big enough … and as long as [it] created a situation where you could concentrate the lithium that was leached out of the rocks.”

Currently, most of the lithium we use comes from lithium brine—salty groundwater loaded with lithium. Volcanic rocks give up their lithium as rainwater or hot hydrothermal water leaches it out of them. It runs downhill to big, geologic basins where the crust of the Earth actually stretches and sags. When that happens in particularly arid regions, the water evaporates faster than it can accumulate, and you get denser and denser concentrations of lithium. This is why the best lithium deposits so far have been in places like Clayton Valley, Nevada, and Chile’s Atacama Desert. It consolidates in a liquid brine beneath the dry desert surface, which is pumped out of the ground, condensed further in evaporation pools, and extracted from the brine in chemical plants.

LeeAnn Munk, a geologist at the University of Alaska, has been working for years to develop a “geologic recipe” of the conditions under which lithium brine forms, and her team has been the first to describe this ore deposit model—the volcanic action, the tectonic structure, the arid climate, etc. Her work, which often pairs her with the USGS, has focused on brine.

But brine is just one of the ways lithium is found. It’s well known that the metal can be found in solid rock called pegmatite, and in hectorite. Hectorite is not clay like you would use to make a pot, but a dried out, layered, white ashy substance that formed due to hydrothermal action after the volcano erupted. The clay absorbs and affixes lithium that has leached out of the volcanic rock. Because these volcanoes are old—the most notable one, perhaps, is the 16 million-year-old McDermitt Volcanic Field in Kings Valley, Nevada—the land has shifted, and the clay is often found not in a basin but exposed, up on high desert mountain ranges.

“[Mahood and her team] have identified how lithium is held in these high silica volcanic rocks,” says Munk. “It helps further our understanding of where lithium occurs, within the Earth. If we don’t fully understand that then we have a hard time telling how much lithium we have, and how much lithium we can actually extract. They’ve helped advance the understanding of where lithium exists in the crust.”

Other locations identified by Mahood’s group include Sonora, Mexico, the Yellowstone caldera, and Pantelleria, an island in the Mediterranean. Each showed varying concentrations of lithium, which the researchers were able to correlate to the concentration of the more easily-detectable elements rubidium and zirconium, meaning in the future, those can be used as indicators in the search for further lithium.

But there’s more to it than just looking for lithium-rich supervolcano sites. “The issue right now is that there’s really no existing technology at a big enough scale to actually mine the lithium out of the clays that is economical,” says Munk. “It could be something that happens in the future.”

Mahood acknowledges this. “As far as I know, people have not worked out a commercial scale process for removing lithium from hectorite,” she says. “The irony of all of this is, the hectorite is being mined right now, but it’s not actually being mined for the lithium. What they’re mining it for is the hectorite as a clay, and hectorite clays have unusual properties in that they are stable to very high temperatures. So what the deposit at King’s Valley is being mined for now is to make specialty drilling muds that are used in the natural gas and oil industry.”

But extracting lithium from brine is also expensive, particularly given the amount of fresh water it requires in places where water is scarce. There’s probably plenty of lithium to go around, says Mahood, but you don’t want it all to come from one source. “You want it to come from diversified places in terms of both countries and companies,” she says, “so that you’re never held hostage to the pricing practices of one country.”
http://www.smithsonianmag.com/innova...ure-180964635/





SanDisk Breaks Storage Record With 400GB microSD Card
Joel Hruska

SanDisk is offering a new 400GB microSD card, a breakthrough that makes it the largest microSD card currently on the market. SanDisk, which is owned by Western Digital, hasn’t revealed details beyond stating that the capacity breakthrough was the result of WD “leveraging its proprietary memory technology and design and production processes that allow for more bits per die.” Western Digital set the previous record two years ago, when it launched a 200GB microSD card.

The extra capacity appears to come with a speed tradeoff. SanDisk trumpets its A1 speed rating, saying: “Rated A1, the SanDisk Ultra® microSD card is optimized for apps, delivering faster app launch and performance that provides a better smartphone experience.”

This is a generous reading of the A1’s target performance specification. Last year, the SD Association released a report discussing the App Performance Class memory card specification and why the spec was created: when Android added support for running applications from an SD card, there was a need to make certain the cards people bought would be quick enough to run apps. The A1 is rated for 1500 read and 500 write IOPS, with a sequential transfer speed of 10MB/s. The SD Association writes:

“The SD 5.1 Physical specification introduced the first and most basic App Performance level, which sets the absolute minimum requirement bar named A1 or App Performance Class 1. Higher App Performance Class levels will be introduced to meet market needs.” (Emphasis added.)

It’s not bad. It’s just not fast.
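
To put those IOPS figures in rough throughput terms, here is a quick back-of-the-envelope calculation of my own (assuming the random I/O tests use 4KB transfers, which is my assumption rather than something stated in the article):

# Rough throughput implied by the A1 rating, assuming 4KB random transfers.
IO_SIZE_KB = 4                       # assumed random I/O transfer size
read_iops, write_iops = 1500, 500    # the A1 minimums quoted above

print("random read:  ~%.1f MB/s" % (read_iops * IO_SIZE_KB / 1024))   # ~5.9 MB/s
print("random write: ~%.1f MB/s" % (write_iops * IO_SIZE_KB / 1024))  # ~2.0 MB/s

Set against the 100MB/s sequential reads SanDisk advertises, it is easy to see why the rating reads as adequate rather than fast.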

This SanDisk card should run applications just fine. SanDisk claims it can be used for recording video, not just storing it. But it’s not going to be fast enough for 4K footage; the Class 10 rating only guarantees 10MB/s of sequential write performance. Obviously not all phones support shooting in 4K anyway, so whether this is a limitation will depend on what device you plan to plug it into. The 100MB/s speed trumpeted by Western Digital refers to read speeds; write speeds are lower and likely closer to the 10MB/s sequential figure mentioned above.

The microSD card is expected to retail for $250, which honestly isn’t bad for a product that could fit on a thumbnail. From the product description, however, it looks like this drive will work best for moderate recording needs. It won’t be suitable for 4K video, but if you’re shooting a lot of 1080p it should work well. An updated SanDisk Memory Zone app for managing data storage on your Android device is also available for download.
https://www.extremetech.com/computin...b-microsd-card





Murdoch Pulls Fox News from Sky Platform as UK Mulls Takeover Deal

Rupert Murdoch has pulled his Fox News channel from the Sky platform in Britain, where the government is assessing a bid by the media mogul to buy the broader Sky (SKYB.L) pay-TV company for $15 billion.

In a statement, Murdoch’s Twenty-First Century Fox (FOXA.O) said it had decided it was no longer in its commercial interest to provide Fox News in Britain, where only a few thousand viewers watch it.

Critics of Murdoch and his company regularly cite the right-wing Fox News channel as a reason why Murdoch should not be allowed to buy the 61 percent of Sky it does not already own.

Fox agreed to buy full control of the European pay-TV group Sky in December, but the British government is still deciding whether to refer the deal for a full investigation which could add months to the approval process.

The government has not found any problems with regard to Twenty-First Century Fox’s commitment to broadcasting standards, but it is examining whether the deal would give the company too much influence over the news agenda in the country.

“Fox News is focussed on the U.S. market and designed for a U.S. audience and, accordingly, it averages only a few thousand viewers across the day in the UK”, the company said.

“We have concluded that it is not in our commercial interest to continue providing Fox News in the UK.”

The Fox News channel was no longer available on the Sky platform from 1600 local time on Tuesday.

Reporting by Kate Holton; Editing by Mark Potter
https://uk.reuters.com/article/uk-sk...-idUKKCN1B91YH





Traditional Radio Faces a Grim Future, New Study Says
Jem Aswad

A new study published today by the head of New York University’s Steinhardt Music Business Program paints a sobering picture of the future of terrestrial radio. (Not surprisingly, the National Association of Broadcasters and Nielsen have both responded to the report.)

In the 30-page report, Larry Miller argues that traditional radio has failed to engage with Generation Z — people born after 1995 — and that its influence and relevance will continue to be subsumed by digital services unless it upgrades. Key points made in the study include:

*Generation Z, which is projected to account for 40% of all consumers in the U.S. by 2020, shows little interest in traditional media, including radio, having grown up in an on-demand digital environment;

*AM/FM radio is in the midst of a massive drop-off as a music-discovery tool among younger generations, with self-reported listening to AM/FM radio among teens aged 13 and up declining by almost 50 percentage points between 2005 and 2016. Music discovery as a whole is moving away from AM/FM radio and toward YouTube, Spotify and Pandora, especially among younger listeners, with 19% of listeners surveyed in a 2017 study citing it as a source for keeping up to date with music — down from 28% the previous year. Among 12-24 year olds who find music discovery important, AM/FM radio (50%) becomes even less influential, trailing YouTube (80%), Spotify (59%), and Pandora (53%).

*By 2020, 75% of new cars are expected to be “connected” to digital services, breaking radio’s monopoly on the car dashboard and relegating AM/FM to just one of a series of audio options behind the wheel. According to the U.S. Department of Transportation, the typical car in the U.S. was 11.6 years old in 2016, which explains why radio has not yet faced its disruption event. However, drivers are buying new cars at a faster rate than ever, and new vehicles come with more installed options for digital music services.

*The rise of “smart speakers” such as the Amazon Echo, which have no AM/FM antenna, is rapidly reshaping home entertainment and sidelining broadcast radio stations that lack a digital option.

*Broadcast stations pay no royalties to record labels for the use of master recordings — the U.S. is the only country with developed intellectual-property laws where this is the case. Digital services do, which makes them more valuable to labels.

*The addition of streaming data to the Billboard Hot 100 chart, still the primary chart in the U.S., means that streaming is now playing an important part in determining which songs are played on radio rather than the other way around, reducing its status as a taste-making tool. In fact, streaming now accounts for 20-30% of the data that comprises the Hot 100, with sales at 35-45% and airplay at 30-40%.

The report makes a global-warming-level case for the terrestrial radio industry to upgrade or face obsolescence.

“AM/FM radio had been able to wait out the digital disruption that has already affected every other form of media. Now radio is the latest industry facing massive disruption from the digital age. To survive, radio must innovate, learn from other media and take control of its path to maintain its unique position with advertisers, audiences and other stakeholders into the third decade of this century and beyond.

“Unless the industry is set to make peace with a long and inevitable decline, radio needs to invest in strong and compelling digital services,” the report concludes. “If it does, radio can look forward to a robust future built on the strong foundation it already has in the marketplace leveraging the medium’s great reach, habitual listenership, local presence and brands. If it doesn’t, radio risks becoming a thing of the past, like the wax cylinder or 78 RPM record – fondly remembered but no longer relevant to an audience that has moved on.”
http://variety.com/2017/music/news/t...ys-1202542681/





24/192 Music Downloads...and Why They Make No Sense

Articles last month revealed that musician Neil Young and Apple's Steve Jobs discussed offering digital music downloads of 'uncompromised studio quality'. Much of the press and user commentary was particularly enthusiastic about the prospect of uncompressed 24 bit 192kHz downloads. 24/192 featured prominently in my own conversations with Mr. Young's group several months ago.

Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space.

There are a few real problems with the audio quality and 'experience' of digitally distributed music today. 24/192 solves none of them. While everyone fixates on 24/192 as a magic bullet, we're not going to see any actual improvement.

First, the bad news

In the past few weeks, I've had conversations with intelligent, scientifically minded individuals who believe in 24/192 downloads and want to know how anyone could possibly disagree. They asked good questions that deserve detailed answers.

I was also interested in what motivated high-rate digital audio advocacy. Responses indicate that few people understand basic signal theory or the sampling theorem, which is hardly surprising. Misunderstandings of the mathematics, technology, and physiology arose in most of the conversations, often asserted by professionals who otherwise possessed significant audio expertise. Some even argued that the sampling theorem doesn't really explain how digital audio actually works [1].

Misinformation and superstition only serve charlatans. So, let's cover some of the basics of why 24/192 distribution makes no sense before suggesting some improvements that actually do.

Gentlemen, meet your ears

The ear hears via hair cells that sit on the resonant basilar membrane in the cochlea. Each hair cell is effectively tuned to a narrow frequency band determined by its position on the membrane. Sensitivity peaks in the middle of the band and falls off to either side in a lopsided cone shape overlapping the bands of other nearby hair cells. A sound is inaudible if there are no hair cells tuned to hear it.

Above left: anatomical cutaway drawing of a human cochlea with the basilar membrane colored in beige. The membrane is tuned to resonate at different frequencies along its length, with higher frequencies near the base and lower frequencies at the apex. Approximate locations of several frequencies are marked.

Above right: schematic diagram representing hair cell response along the basilar membrane as a bank of overlapping filters.

This is similar to an analog radio that picks up the frequency of a strong station near where the tuner is actually set. The farther off the station's frequency is, the weaker and more distorted it gets until it disappears completely, no matter how strong. There is an upper (and lower) audible frequency limit, past which the sensitivity of the last hair cells drops to zero, and hearing ends.

Sampling rate and the audible spectrum

I'm sure you've heard this many, many times: The human hearing range spans 20Hz to 20kHz. It's important to know how researchers arrive at those specific numbers.

First, we measure the 'absolute threshold of hearing' across the entire audio range for a group of listeners. This gives us a curve representing the very quietest sound the human ear can perceive for any given frequency as measured in ideal circumstances on healthy ears. Anechoic surroundings, precision calibrated playback equipment, and rigorous statistical analysis are the easy part. Ears and auditory concentration both fatigue quickly, so testing must be done when a listener is fresh. That means lots of breaks and pauses. Testing takes anywhere from many hours to many days depending on the methodology.

Then we collect data for the opposite extreme, the 'threshold of pain'. This is the point where the audio amplitude is so high that the ear's physical and neural hardware is not only completely overwhelmed by the input, but experiences physical pain. Collecting this data is trickier. You don't want to permanently damage anyone's hearing in the process.

Above: Approximate equal loudness curves derived from Fletcher and Munson (1933) plus modern sources for frequencies > 16kHz. The absolute threshold of hearing and threshold of pain curves are marked in red. Subsequent researchers refined these readings, culminating in the Phon scale and the ISO 226 standard equal loudness curves. Modern data indicates that the ear is significantly less sensitive to low frequencies than Fletcher and Munson's results.

The upper limit of the human audio range is defined to be where the absolute threshold of hearing curve crosses the threshold of pain. To even faintly perceive the audio at that point (or beyond), it must simultaneously be unbearably loud.

At low frequencies, the cochlea works like a bass reflex cabinet. The helicotrema is an opening at the apex of the basilar membrane that acts as a port tuned to somewhere between 40Hz to 65Hz depending on the individual. Response rolls off steeply below this frequency.

Thus, 20Hz - 20kHz is a generous range. It thoroughly covers the audible spectrum, an assertion backed by nearly a century of experimental data.

Genetic gifts and golden ears

Based on my correspondences, many people believe in individuals with extraordinary gifts of hearing. Do such 'golden ears' really exist?

It depends on what you call a golden ear.

Young, healthy ears hear better than old or damaged ears. Some people are exceptionally well trained to hear nuances in sound and music most people don't even know exist. There was a time in the 1990s when I could identify every major mp3 encoder by sound (back when they were all pretty bad), and could demonstrate this reliably in double-blind testing [2].

When healthy ears combine with highly trained discrimination abilities, I would call that person a golden ear. Even so, below-average hearing can also be trained to notice details that escape untrained listeners. Golden ears are more about training than hearing beyond the physical ability of average mortals.

Auditory researchers would love to find, test, and document individuals with truly exceptional hearing, such as a greatly extended hearing range. Normal people are nice and all, but everyone wants to find a genetic freak for a really juicy paper. We haven't found any such people in the past 100 years of testing, so they probably don't exist. Sorry. We'll keep looking.

Spectrophiles

Perhaps you're skeptical about everything I've just written; it certainly goes against most marketing material. Instead, let's consider a hypothetical Wide Spectrum Video craze that doesn't carry preexisting audiophile baggage.

Above: The approximate log scale response of the human eye's rods and cones, superimposed on the visible spectrum. These sensory organs respond to light in overlapping spectral bands, just as the ear's hair cells are tuned to respond to overlapping bands of sound frequencies.

The human eye sees a limited range of frequencies of light, aka, the visible spectrum. This is directly analogous to the audible spectrum of sound waves. Like the ear, the eye has sensory cells (rods and cones) that detect light in different but overlapping frequency bands.

The visible spectrum extends from about 400THz (deep red) to 850THz (deep violet) [3]. Perception falls off steeply at the edges. Beyond these approximate limits, the light power needed for the slightest perception can fry your retinas. Thus, this is a generous span even for young, healthy, genetically gifted individuals, analogous to the generous limits of the audible spectrum.

In our hypothetical Wide Spectrum Video craze, consider a fervent group of Spectrophiles who believe these limits aren't generous enough. They propose that video represent not only the visible spectrum, but also infrared and ultraviolet. Continuing the comparison, there's an even more hardcore [and proud of it!] faction that insists this expanded range is yet insufficient, and that video feels so much more natural when it also includes microwaves and some of the X-ray spectrum. To a Golden Eye, they insist, the difference is night and day!

Of course this is ludicrous.

No one can see X-rays (or infrared, or ultraviolet, or microwaves). It doesn't matter how much a person believes he can. Retinas simply don't have the sensory hardware.

Here's an experiment anyone can do: Go get your Apple IR remote. The LED emits at 980nm, or about 306THz, in the near-IR spectrum. This is not far outside of the visible range. Take the remote into the basement, or the darkest room in your house, in the middle of the night, with the lights off. Let your eyes adjust to the blackness.

Above: Apple IR remote photographed using a digital camera. Though the emitter is quite bright and the frequency emitted is not far past the red portion of the visible spectrum, it's completely invisible to the eye.

Can you see the Apple Remote's LED flash when you press a button [4]? No? Not even the tiniest amount? Try a few other IR remotes; many use an IR wavelength a bit closer to the visible band, around 310-350THz. You won't be able to see them either. The rest emit right at the edge of visibility from 350-380 THz and may be just barely visible in complete blackness with dark-adjusted eyes [5]. All would be blindingly, painfully bright if they were well inside the visible spectrum.

These near-IR LEDs emit from the visible boundary to at most 20% beyond the visible frequency limit. 192kHz audio extends to 400% of the audible limit. Lest I be accused of comparing apples and oranges, auditory and visual perception drop off similarly toward the edges.

192kHz considered harmful

192kHz digital music files offer no benefits. They're not quite neutral either; practical fidelity is slightly worse. The ultrasonics are a liability during playback.

Neither audio transducers nor power amplifiers are free of distortion, and distortion tends to increase rapidly at the lowest and highest frequencies. If the same transducer reproduces ultrasonics along with audible content, any nonlinearity will shift some of the ultrasonic content down into the audible range as an uncontrolled spray of intermodulation distortion products covering the entire audible spectrum. Nonlinearity in a power amplifier will produce the same effect. The effect is very slight, but listening tests have confirmed that both effects can be audible.

Above: Illustration of distortion products resulting from intermodulation of a 30kHz and a 33kHz tone in a theoretical amplifier with a nonvarying total harmonic distortion (THD) of about .09%. Distortion products appear throughout the spectrum, including at frequencies lower than either tone.

Inaudible ultrasonics contribute to intermodulation distortion in the audible range (light blue area). Systems not designed to reproduce ultrasonics typically have much higher levels of distortion above 20kHz, further contributing to intermodulation. Widening a design's frequency range to account for ultrasonics requires compromises that decrease noise and distortion performance within the audible spectrum. Either way, unnecessary reproduction of ultrasonic content diminishes performance.
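
To make the mechanism concrete, here is a small numerical sketch of my own (Python with NumPy; not something from the original article): two ultrasonic tones at 30kHz and 33kHz are pushed through a slightly nonlinear "amplifier," and the output picks up a distortion product at the 3kHz difference frequency, squarely inside the audible band, even though neither input tone is audible.

import numpy as np

fs = 384000                      # simulate well above the tones so nothing aliases digitally
t = np.arange(fs) / fs           # one second of time
x = 0.4 * np.sin(2 * np.pi * 30000 * t) + 0.4 * np.sin(2 * np.pi * 33000 * t)

# A memoryless nonlinearity standing in for amplifier/transducer distortion.
# The quadratic term alone creates a difference tone at 33kHz - 30kHz = 3kHz.
y = x + 0.02 * x**2 + 0.02 * x**3

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)
audible = (freqs > 20) & (freqs < 20000)
peak = freqs[audible][np.argmax(spectrum[audible])]
print("strongest audible-band distortion product: %.0f Hz" % peak)   # 3000 Hz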

There are a few ways to avoid the extra distortion:

1) A dedicated ultrasonic-only speaker, amplifier, and crossover stage to separate and independently reproduce the ultrasonics you can't hear, just so they don't mess up the sounds you can.

2) Amplifiers and transducers designed for wider frequency reproduction, so ultrasonics don't cause audible intermodulation. Given equal expense and complexity, this additional frequency range must come at the cost of some performance reduction in the audible portion of the spectrum.

3) Speakers and amplifiers carefully designed not to reproduce ultrasonics anyway.

4) Not encoding such a wide frequency range to begin with. You can't and won't have ultrasonic intermodulation distortion in the audible band if there's no ultrasonic content.

They all amount to the same thing, but only 4) makes any sense.

If you're curious about the performance of your own system, the following samples contain a 30kHz and a 33kHz tone in a 24/96 WAV file, a longer version in a FLAC, some tri-tone warbles, and a normal song clip shifted up by 24kHz so that it's entirely in the ultrasonic range from 24kHz to 46kHz:

Intermod Tests:
30kHz tone + 33kHz tone (24 bit / 96kHz) [5 second WAV] [30 second FLAC]
26kHz - 48kHz warbling tones (24 bit / 96kHz) [10 second WAV]
26kHz - 96kHz warbling tones (24 bit / 192kHz) [10 second WAV]
Song clip shifted up by 24kHz (24 bit / 96kHz WAV) [10 second WAV]
(original version of above clip) (16 bit / 44.1kHz WAV)

Assuming your system is actually capable of full 96kHz playback [6], the above files should be completely silent with no audible noises, tones, whistles, clicks, or other sounds. If you hear anything, your system has a nonlinearity causing audible intermodulation of the ultrasonics. Be careful when increasing volume; running into digital or analog clipping, even soft clipping, will suddenly cause loud intermodulation tones.
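
If you would rather synthesize a comparable test signal locally than download the samples above, the following sketch of mine (assuming NumPy and SciPy are installed) writes a five-second 96kHz file containing only the 30kHz and 33kHz tones. SciPy's WAV writer has no 24-bit mode, so it writes 32-bit integer PCM instead; for this test the bit depth is beside the point.

import numpy as np
from scipy.io import wavfile

fs = 96000
t = np.arange(5 * fs) / fs                          # five seconds
tone = 0.25 * (np.sin(2 * np.pi * 30000 * t) + np.sin(2 * np.pi * 33000 * t))

# Fade in and out over 100 ms so starting and stopping playback doesn't click.
ramp = np.minimum(1.0, np.minimum(t, t[-1] - t) / 0.1)
tone = tone * ramp

# Write 32-bit integer PCM (peaks stay well below full scale).
wavfile.write("intermod_test_96k.wav", fs, (tone * 0.9 * 2**31).astype(np.int32))

As with the files above, a clean system should play this back as silence; any audible tones point to intermodulation somewhere in the playback chain.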

In summary, it's not certain that intermodulation from ultrasonics will be audible on a given system. The added distortion could be insignificant or it could be noticeable. Either way, ultrasonic content is never a benefit, and on plenty of systems it will audibly hurt fidelity. On the systems it doesn't hurt, the cost and complexity of handling ultrasonics could have been saved, or spent on improved audible range performance instead.

Sampling fallacies and misconceptions

Sampling theory is often unintuitive without a signal processing background. It's not surprising most people, even brilliant PhDs in other fields, routinely misunderstand it. It's also not surprising many people don't even realize they have it wrong.

Above: Sampled signals are often depicted as a rough stairstep (red) that seems a poor approximation of the original signal. However, the representation is mathematically exact and the signal recovers the exact smooth shape of the original (blue) when converted back to analog.

The most common misconception is that sampling is fundamentally rough and lossy. A sampled signal is often depicted as a jagged, hard-cornered stair-step facsimile of the original perfectly smooth waveform. If this is how you envision sampling working, you may believe that the faster the sampling rate (and more bits per sample), the finer the stair-step and the closer the approximation will be. The digital signal would sound closer and closer to the original analog signal as sampling rate approaches infinity.

Similarly, many non-DSP people would look at the following:

And say, "Ugh!" It might appear that a sampled signal represents higher frequency analog waveforms badly. Or, that as audio frequency increases, the sampled quality falls and frequency response falls off, or becomes sensitive to input phase.

Looks are deceiving. These beliefs are incorrect!

added 2013-04-04:
As a followup to all the mail I got about digital waveforms and stairsteps, I demonstrate actual digital behavior on real equipment in our video Digital Show & Tell so you need not simply take me at my word here!

All signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling; an infinite sampling rate is not required. Sampling doesn't affect frequency response or phase. The analog signal can be reconstructed losslessly, smoothly, and with the exact timing of the original analog signal.
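
To see this numerically rather than take it on faith, here's a short sketch (numpy/scipy assumed) that samples a 1kHz tone at 8kHz and then evaluates it on a 16x finer time grid. The tone frequency is chosen to fit the record exactly so the FFT-based resampler has no edge effects; the rebuilt waveform is a smooth sine matching the mathematical original, with no stairsteps anywhere.

import numpy as np
from scipy.signal import resample

fs = 8000                                    # sample rate, comfortably above 2 x 1kHz
n = 2048                                     # 1kHz fits this record exactly (256 cycles)
x = np.sin(2 * np.pi * 1000 * np.arange(n) / fs)     # the sampled signal

dense = resample(x, n * 16)                  # evaluate between the original samples
t_fine = np.arange(n * 16) / (fs * 16)
true = np.sin(2 * np.pi * 1000 * t_fine)     # the "analog" original at those instants

print("max difference:", np.max(np.abs(dense - true)))
# Prints a value down at double-precision rounding error: the smooth original
# is recovered between the samples, not approximated.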

So the math is ideal, but what of real world complications? The most notorious is the band-limiting requirement. Signals with content over the Nyquist frequency must be lowpassed before sampling to avoid aliasing distortion; this analog lowpass is the infamous antialiasing filter. Antialiasing can't be ideal in practice, but modern techniques bring it very close. ...and with that we come to oversampling.

Oversampling

Sampling rates over 48kHz are irrelevant to high fidelity audio data, but they are internally essential to several modern digital audio techniques.

Oversampling is the most relevant example [7].

Oversampling is simple and clever. You may recall from my A Digital Media Primer for Geeks that high sampling rates provide a great deal more space between the highest frequency audio we care about (20kHz) and the Nyquist frequency (half the sampling rate). This allows for simpler, smoother, more reliable analog anti-aliasing filters, and thus higher fidelity. This extra space between 20kHz and the Nyquist frequency is essentially just spectral padding for the analog filter.

Above: Whiteboard diagram from A Digital Media Primer for Geeks illustrating the transition band width available for a 48kHz ADC/DAC (left) and a 96kHz ADC/DAC (right).

That's only half the story. Because digital filters have few of the practical limitations of an analog filter, we can complete the anti-aliasing process with greater efficiency and precision digitally. The very high rate raw digital signal passes through a digital anti-aliasing filter, which has no trouble fitting a transition band into a tight space. After this further digital anti-aliasing, the extra padding samples are simply thrown away. Oversampled playback approximately works in reverse.

This means we can use low rate 44.1kHz or 48kHz audio with all the fidelity benefits of 192kHz or higher sampling (smooth frequency response, low aliasing) and none of the drawbacks (ultrasonics that cause intermodulation distortion, wasted space). Nearly all of today's analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) oversample at very high rates. Few people realize this is happening because it's completely automatic and hidden.
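
As a concrete (if simplified) illustration of the digital half of that process, the sketch below captures a 10kHz tone plus a 35kHz ultrasonic tone at 192kHz, then digitally low-passes and decimates to 48kHz. This is a toy example, not the inner workings of any particular converter: the audible tone survives untouched, while the ultrasonic content is strongly attenuated instead of aliasing down into the audio band (scipy's default filter manages tens of dB; production resamplers and converters do far better).

import numpy as np
from scipy.signal import resample_poly

hi_rate, lo_rate = 192000, 48000
t = np.arange(hi_rate) / hi_rate                  # one second at the high rate

captured = np.sin(2 * np.pi * 10000 * t) \
         + 0.5 * np.sin(2 * np.pi * 35000 * t)    # audible 10kHz + ultrasonic 35kHz

# resample_poly applies a digital anti-aliasing filter, then keeps every 4th sample.
decimated = resample_poly(captured, up=1, down=hi_rate // lo_rate)

spectrum = np.abs(np.fft.rfft(decimated * np.hanning(decimated.size)))
freqs = np.fft.rfftfreq(decimated.size, 1 / lo_rate)
print("10kHz tone bin:                      ", spectrum[np.argmin(np.abs(freqs - 10000))])
print("13kHz bin (where 35kHz would alias): ", spectrum[np.argmin(np.abs(freqs - 13000))])
# The alias residual is tens of dB below the 10kHz tone even with this
# general-purpose filter.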

ADCs and DACs didn't always transparently oversample. Thirty years ago, some recording consoles recorded at high sampling rates using only analog filters, and production and mastering simply used that high rate signal. The digital anti-aliasing and decimation steps (resampling to a lower rate for CDs or DAT) happened in the final stages of mastering. This may well be one of the early reasons 96kHz and 192kHz became associated with professional music production [8].

16 bit vs 24 bit

OK, so 192kHz music files make no sense. Covered, done. What about 16 bit vs. 24 bit audio?

It's true that 16 bit linear PCM audio does not quite cover the entire theoretical dynamic range of the human ear in ideal conditions. Also, there are (and always will be) reasons to use more than 16 bits in recording and production.

None of that is relevant to playback; here 24 bit audio is as useless as 192kHz sampling. The good news is that at least 24 bit depth doesn't harm fidelity. It just doesn't help, and also wastes space.

Revisiting your ears

We've discussed the frequency range of the ear, but what about the dynamic range from the softest possible sound to the loudest possible sound?

One way to define absolute dynamic range would be to look again at the absolute threshold of hearing and threshold of pain curves. The distance between the highest point on the threshold of pain curve and the lowest point on the absolute threshold of hearing curve is about 140 decibels for a young, healthy listener. That wouldn't last long though; +130dB is loud enough to damage hearing permanently in seconds to minutes. For reference purposes, a jackhammer at one meter is only about 100-110dB.

The absolute threshold of hearing increases with age and hearing loss. Interestingly, the threshold of pain decreases with age rather than increasing. The hair cells of the cochlea themselves possess only a fraction of the ear's 140dB range; musculature in the ear continuously adjusts the amount of sound reaching the cochlea by shifting the ossicles, much as the iris regulates the amount of light entering the eye [9]. This mechanism stiffens with age, limiting the ear's dynamic range and reducing the effectiveness of its protection mechanisms [10].

Environmental noise

Few people realize how quiet the absolute threshold of hearing really is.

The very quietest perceptible sound is about -8dBSPL [11]. Using an A-weighted scale, the hum from a 100 watt incandescent light bulb one meter away is about 10dBSPL, so about 18dB louder. The bulb will be much louder on a dimmer.

20dBSPL (or 28dB louder than the quietest audible sound) is often quoted for an empty broadcasting/recording studio or sound isolation room. This is the baseline for an exceptionally quiet environment, and one reason you've probably never noticed hearing a light bulb.

The dynamic range of 16 bits

16 bit linear PCM has a dynamic range of 96dB according to the most common definition, which calculates dynamic range as (6*bits)dB. Many believe that 16 bit audio cannot represent arbitrary sounds quieter than -96dB. This is incorrect.

I have linked to two 16 bit audio files here; one contains a 1kHz tone at 0 dB (where 0dB is the loudest possible tone) and the other a 1kHz tone at -105dB.

Sample 1: 1kHz tone at 0 dB (16 bit / 48kHz WAV)

Sample 2: 1kHz tone at -105 dB (16 bit / 48kHz WAV)

Above: Spectral analysis of a -105dB tone encoded as 16 bit / 48kHz PCM. 16 bit PCM is clearly deeper than 96dB, else a -105dB tone could not be represented, nor would it be audible.

How is it possible to encode this signal, encode it with no distortion, and encode it well above the noise floor, when its peak amplitude is one third of a bit?

Part of this puzzle is solved by proper dither, which renders quantization noise independent of the input signal. By implication, this means that dithered quantization introduces no distortion, just uncorrelated noise. That in turn implies that we can encode signals of arbitrary depth, even those with peak amplitudes much smaller than one bit [12]. However, dither doesn't change the fact that once a signal sinks below the noise floor, it should effectively disappear. How is the -105dB tone still clearly audible above a -96dB noise floor?

The answer: Our -96dB noise floor figure is effectively wrong; we're using an inappropriate definition of dynamic range. (6*bits)dB gives us the RMS noise of the entire broadband signal, but each hair cell in the ear is sensitive to only a narrow fraction of the total bandwidth. As each hair cell hears only a fraction of the total noise floor energy, the noise floor at that hair cell will be much lower than the broadband figure of -96dB.
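
A quick numerical sketch (not the linked sample files, just an equivalent experiment in numpy) makes both halves of this concrete: quantize a -105dB tone to 16 bits with flat TPDF dither, and the broadband error comes out around the familiar -96dB figure, yet a narrowband FFT view, closer to how a hair cell listens, shows the tone standing well clear of the per-bin noise.

import numpy as np

rate = 48000
t = np.arange(rate * 4) / rate                            # 4 seconds
tone = 10 ** (-105 / 20) * np.sin(2 * np.pi * 1000 * t)   # a -105dBFS sine

lsb = 1.0 / 32768.0                                       # one 16-bit quantization step
tpdf = (np.random.rand(t.size) - np.random.rand(t.size)) * lsb   # triangular dither, +/- 1 LSB
quantized = np.round((tone + tpdf) / lsb) * lsb           # dithered 16-bit quantization

noise = quantized - tone
print("broadband error:", 20 * np.log10(np.sqrt(np.mean(noise ** 2))), "dBFS")   # about -96

# Narrowband view: each FFT bin holds only a sliver of that noise, so the
# -105dB tone is clearly resolved above the per-bin floor.
window = np.hanning(t.size)
spectrum = 20 * np.log10(np.abs(np.fft.rfft(quantized * window)) * 2 / window.sum() + 1e-20)
freqs = np.fft.rfftfreq(t.size, 1 / rate)
print("1kHz bin:        ", spectrum[np.argmin(np.abs(freqs - 1000))], "dBFS")    # about -105
print("median bin level:", np.median(spectrum), "dBFS")                          # far below that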

Thus, 16 bit audio can go considerably deeper than 96dB. With use of shaped dither, which moves quantization noise energy into frequencies where it's harder to hear, the effective dynamic range of 16 bit audio reaches 120dB in practice [13], more than fifteen times deeper than the 96dB claim.

120dB is greater than the difference between a mosquito somewhere in the same room and a jackhammer a foot away.... or the difference between a deserted 'soundproof' room and a sound loud enough to cause hearing damage in seconds.

16 bits is enough to store all we can hear, and will be enough forever.

Signal-to-noise ratio

It's worth mentioning briefly that the ear's S/N ratio is smaller than its absolute dynamic range. Within a given critical band, typical S/N is estimated to only be about 30dB. Relative S/N does not reach the full dynamic range even when considering widely spaced bands. This assures that linear 16 bit PCM offers higher resolution than is actually required.

It is also worth mentioning that increasing the bit depth of the audio representation from 16 to 24 bits does not increase the perceptible resolution or 'fineness' of the audio. It only increases the dynamic range, the range between the softest possible and the loudest possible sound, by lowering the noise floor. However, a 16-bit noise floor is already below what we can hear.

When does 24 bit matter?

Professionals use 24 bit samples in recording and production [14] for headroom, noise floor, and convenience reasons.

16 bits is enough to span the real hearing range with room to spare. It does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording-- risking clipping if you guess too high and adding noise if you guess too low-- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.

An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits.
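
The effect is easy to reproduce. The toy sketch below (illustrative numbers, not any particular DAW) runs a clean tone through a thousand trivial 'effects', re-quantizing to 16 bits after each one versus staying in high precision and quantizing once at the end; the first path ends up roughly 30dB noisier (10*log10(1000)).

import numpy as np

rng = np.random.default_rng(0)
rate, steps, lsb = 48000, 1000, 1.0 / 32768.0
x = 0.5 * np.sin(2 * np.pi * 440 * np.arange(rate) / rate)   # clean reference tone

def dither_quantize(sig):
    # Non-subtractive TPDF dither followed by rounding to the 16-bit grid.
    d = (rng.random(sig.size) - rng.random(sig.size)) * lsb
    return np.round((sig + d) / lsb) * lsb

per_step, high_prec = x.copy(), x.copy()
for _ in range(steps):
    gain = 1.0003                          # stand-in for "some effect touched the samples"
    per_step = dither_quantize(per_step * gain) / gain    # back to 16 bits after every step
    high_prec = high_prec * gain / gain                   # stays in double precision

high_prec = dither_quantize(high_prec)     # quantize once, at the very end

def error_db(y):
    return 20 * np.log10(np.sqrt(np.mean((y - x) ** 2)))

print("re-quantized every step: ", error_db(per_step), "dBFS")    # roughly -66
print("quantized once at the end:", error_db(high_prec), "dBFS")  # roughly -96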

Listening tests

Understanding is where theory and reality meet. A matter is settled only when the two agree.

Empirical evidence from listening tests backs up the assertion that 44.1kHz/16 bit provides highest-possible fidelity playback. There are numerous controlled tests confirming this, but I'll plug a recent paper, Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback, done by local folks here at the Boston Audio Society.

Unfortunately, downloading the full paper requires an AES membership. However, it's been discussed widely in articles and on forums, with the authors joining in. Here are a few links:

The Emperor's New Sampling Rate
Hydrogen Audio forum discussion thread
Supplemental information page at the Boston Audio Society, including the equipment and sample lists

This paper presented listeners with a choice between high-rate DVD-A/SACD content, chosen by high-definition audio advocates to show off high-def's superiority, and that same content resampled on the spot down to 16-bit / 44.1kHz Compact Disc rate. The listeners were challenged to identify any difference whatsoever between the two using an ABX methodology. BAS conducted the test using high-end professional equipment in noise-isolated studio listening environments with both amateur and trained professional listeners.

In 554 trials, listeners chose correctly 49.8% of the time. In other words, they were guessing. Not one listener throughout the entire test was able to identify which was 16/44.1 and which was high rate [15], and the 16-bit signal wasn't even dithered!
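
For anyone wondering how strong a result that is, the arithmetic is easy to check with a two-sided binomial test against pure guessing (scipy assumed):

from scipy.stats import binomtest

correct = round(0.498 * 554)               # about 276 correct answers out of 554
print(binomtest(correct, n=554, p=0.5).pvalue)
# A p-value near 1.0: the results are indistinguishable from coin flipping.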

Another recent study [16] investigated the possibility that ultrasonics were audible, as earlier studies had suggested. The test was constructed to maximize the possibility of detection by placing the intermodulation products where they'd be most audible. It found that the ultrasonic tones were not audible... but the intermodulation distortion products introduced by the loudspeakers could be.

This paper inspired a great deal of further research, much of it with mixed results. Some of the ambiguity is explained by finding that ultrasonics can induce more intermodulation distortion than expected in power amplifiers as well. For example, David Griesinger reproduced this experiment [17] and found that his loudspeaker setup did not introduce audible intermodulation distortion from ultrasonics, but his stereo amplifier did.

Caveat Lector

It's important not to cherry-pick individual papers or 'expert commentary' out of context or from self-interested sources. Not all papers agree completely with these results (and a few disagree in large part), so it's easy to find minority opinions that appear to vindicate every imaginable conclusion. Regardless, the papers and links above are representative of the vast weight and breadth of the experimental record. No peer-reviewed paper that has stood the test of time disagrees substantially with these results. Controversy exists only within the consumer and enthusiast audiophile communities.

If anything, the number of ambiguous, inconclusive, and outright invalid experimental results available through Google highlights how tricky it is to construct an accurate, objective test. The differences researchers look for are minute; they require rigorous statistical analysis to spot subconscious choices that escape test subjects' awareness. That we're likely trying to 'prove' something that doesn't exist makes it even more difficult. Proving a null hypothesis is akin to proving the halting problem; you can't. You can only collect evidence that lends overwhelming weight.

Despite this, papers that confirm the null hypothesis are especially strong evidence; confirming inaudibility is far more experimentally difficult than disputing it. Undiscovered mistakes in test methodologies and equipment nearly always produce false positive results (by accidentally introducing audible differences) rather than false negatives.

If professional researchers have such a hard time properly testing for minute, isolated audible differences, you can imagine how hard it is for amateurs.

How to [inadvertently] screw up a listening comparison

The number one comment I heard from believers in super high rate audio was [paraphrasing]: "I've listened to high rate audio myself and the improvement is obvious. Are you seriously telling me not to trust my own ears?"

Of course you can trust your ears. It's brains that are gullible. I don't mean that flippantly; as human beings, we're all wired that way.

Confirmation bias, the placebo effect, and double-blind

In any test where a listener can tell two choices apart via any means apart from listening, the results will usually be what the listener expected in advance; this is called confirmation bias and it's similar to the placebo effect. It means people 'hear' differences because of subconscious cues and preferences that have nothing to do with the audio, like preferring a more expensive (or more attractive) amplifier over a cheaper option.

The human brain is designed to notice patterns and differences, even where none exist. This tendency can't just be turned off when a person is asked to make objective decisions; it's completely subconscious. Nor can a bias be defeated by mere skepticism. Controlled experimentation shows that awareness of confirmation bias can increase rather than decrease the effect! A test that doesn't carefully eliminate confirmation bias is worthless [18].

In single-blind testing, a listener knows nothing in advance about the test choices, and receives no feedback during the course of the test. Single-blind testing is better than casual comparison, but it does not eliminate the experimenter's bias. The test administrator can easily influence the test or transfer his own subconscious bias to the listener through inadvertent cues (e.g., "Are you sure that's what you're hearing?", body language indicating a 'wrong' choice, a telltale hesitation, etc). An experimenter's bias has also been experimentally proven to influence a test subject's results.

Double-blind listening tests are the gold standard; in these tests neither the test administrator nor the testee has any knowledge of the test contents or ongoing results. Computer-run ABX tests are the most famous example, and there are freely available tools for performing ABX tests on your own computer [19]. ABX is considered a minimum bar for a listening test to be meaningful; reputable audio forums such as Hydrogen Audio often do not even allow discussion of listening results unless they meet this minimum objectivity requirement [20].

Above: Squishyball, a simple command-line ABX tool, running in an xterm.

I personally don't do any quality comparison tests during development, no matter how casual, without an ABX tool. Science is science, no slacking.
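
For the curious, the protocol itself is almost trivially simple; the hard parts are bit-exact, level-matched, cue-free playback. The following bare-bones sketch (not Squishyball or the foobar2000 plug-in; its play() helper is a placeholder you would wire to real playback) shows the essential structure: X is secretly A or B, the listener answers, and only the final tally is revealed.

import random

def play(name):
    # Placeholder: a real tool must play the clip with identical levels and
    # no audible switching cues, and should allow unlimited re-listening.
    print(f"(now playing {name})")

def abx_session(trials=16):
    correct = 0
    for i in range(trials):
        x_is_a = random.random() < 0.5        # hidden assignment for this trial
        for name in ("A", "B", "X"):
            play(name)
        answer = input(f"Trial {i + 1}: is X the same as A or B? ").strip().upper()
        if (answer == "A") == x_is_a:
            correct += 1                      # tallied silently; no per-trial feedback
    print(f"{correct}/{trials} correct")
    # With 16 trials, 12 or more correct corresponds to roughly p < 0.05.

if __name__ == "__main__":
    abx_session()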

Loudness tricks

The human ear can consciously discriminate amplitude differences of about 1dB, and experiments show subconscious awareness of amplitude differences under .2dB. Humans almost universally consider louder audio to sound better, and .2dB is enough to establish this preference. Any comparison that fails to carefully amplitude-match the choices will see the louder choice preferred, even if the amplitude difference is too small to consciously notice. Stereo salesmen have known this trick for a long time.

The professional testing standard is to match sources to within .1dB or better. This often requires use of an oscilloscope or signal analyzer. Guessing by turning the knobs until two sources sound about the same is not good enough.
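
As a sanity check before any comparison, the level difference is easy to measure in software. Here's a small sketch (matching simple RMS levels; professional practice may match with meters or instruments instead) that reports the difference and scales one clip onto the other when it exceeds the .1dB tolerance:

import numpy as np

def rms_db(x):
    x = np.asarray(x, dtype=np.float64)
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def match_level(reference, candidate, tolerance_db=0.1):
    diff = rms_db(candidate) - rms_db(reference)
    print(f"level difference: {diff:+.3f} dB")
    if abs(diff) > tolerance_db:
        candidate = candidate * 10 ** (-diff / 20)    # scale candidate onto the reference
    return candidate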

Clipping

Clipping is another easy mistake, sometimes obvious only in retrospect. Even a few clipped samples or their aftereffects are easy to hear compared to an unclipped signal.

The danger of clipping is especially pernicious in tests that create, resample, or otherwise manipulate digital signals on the fly. Suppose we want to compare the fidelity of 48kHz sampling to a 192kHz source sample. A typical way is to downsample from 192kHz to 48kHz, upsample it back to 192kHz, and then compare it to the original 192kHz sample in an ABX test [21]. This arrangement allows us to eliminate any possibility of equipment variation or sample switching influencing the results; we can use the same DAC to play both samples and switch between without any hardware mode changes.

Unfortunately, most samples are mastered to use the full digital range. Naive resampling can and often will clip occasionally. It is necessary to either monitor for clipping (and discard clipped audio) or avoid clipping via some other means such as attenuation.
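
In code, the round trip described above needs exactly that guard. A sketch (resample_poly stands in for whatever resampler is actually used) that pre-attenuates both versions and refuses to continue if anything still clips:

import numpy as np
from scipy.signal import resample_poly

def round_trip_192k_48k(x_192k, headroom_db=3.0):
    # Pre-attenuate: filter ringing can push peaks past full scale even when
    # the source itself never clipped.
    x = np.asarray(x_192k, dtype=np.float64) * 10 ** (-headroom_db / 20)

    down = resample_poly(x, up=1, down=4)     # 192kHz -> 48kHz
    back = resample_poly(down, up=4, down=1)  # 48kHz -> 192kHz

    for name, sig in (("downsampled", down), ("round trip", back)):
        peak = np.max(np.abs(sig))
        if peak >= 1.0:
            raise ValueError(f"{name} version clipped (peak {peak:.3f}); discard this sample")

    return x, back   # ABX these two: same level, same DAC, only the 48kHz bottleneck differs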

Different media, different master

I've run across a few articles and blog posts that declare the virtues of 24 bit or 96/192kHz by comparing a CD to an audio DVD (or SACD) of the 'same' recording. This comparison is invalid; the masters are usually different.

Inadvertent cues

Inadvertent audible cues are almost inescapable in older analog and hybrid digital/analog testing setups. Purely digital testing setups can completely eliminate the problem in some forms of testing, but also multiply the potential of complex software bugs. Such limitations and bugs have a long history of causing false-positive results in testing [22].

The Digital Challenge - More on ABX Testing tells a fascinating story of a specific listening test conducted in 1984 to rebut audiophile authorities of the time who asserted that CDs were inherently inferior to vinyl. The article is not concerned so much with the results of the test (which I suspect you'll be able to guess), but the processes and real-world messiness involved in conducting such a test. For example, an error on the part of the testers inadvertently revealed that an invited audiophile expert had not been making choices based on audio fidelity, but rather by listening to the slightly different clicks produced by the ABX switch's analog relays!

Anecdotes do not replace data, but this story is instructive of the ease with which undiscovered flaws can bias listening tests. Some of the audiophile beliefs discussed within are also highly entertaining; one hopes that some modern examples are considered just as silly 20 years from now.

Finally, the good news

What actually works to improve the quality of the digital audio to which we're listening?

Better headphones

The easiest fix isn't digital. The most dramatic possible fidelity improvement for the cost comes from a good pair of headphones. Over-ear, in ear, open or closed, it doesn't much matter. They don't even need to be expensive, though expensive headphones can be worth the money.

Keep in mind that some headphones are expensive because they're well made, durable and sound great. Others are expensive because they're $20 headphones under a several hundred dollar layer of styling, brand name, and marketing. I won't make specific recommendations here, but I will say you're not likely to find good headphones in a big box store, even if it specializes in electronics or music. As in all other aspects of consumer hi-fi, do your research (and caveat emptor).

Lossless formats

It's true enough that a properly encoded Ogg file (or MP3, or AAC file) will be indistinguishable from the original at a moderate bitrate.

But what of badly encoded files?

Twenty years ago, all mp3 encoders were really bad by today's standards. Plenty of these old, bad encoders are still in use, presumably because the licenses are cheaper and most people can't tell or don't care about the difference anyway. Why would any company spend money to fix what it's completely unaware is broken?

Moving to a newer format like Vorbis or AAC doesn't necessarily help. For example, many companies and individuals used (and still use) FFmpeg's very-low-quality built-in Vorbis encoder because it was the default in FFmpeg and they were unaware how bad it was. AAC has an even longer history of widely-deployed, low-quality encoders; all mainstream lossy formats do.

Lossless formats like FLAC avoid any possibility of damaging audio fidelity [23] with a poor quality lossy encoder, or even by a good lossy encoder used incorrectly.

A second reason to distribute lossless formats is to avoid generational loss. Each reencode or transcode loses more data; even if the first encoding is transparent, it's very possible the second will have audible artifacts. This matters to anyone who might want to remix or sample from downloads. It especially matters to us codec researchers; we need clean audio to work with.

Better masters

The BAS test I linked earlier mentions as an aside that the SACD version of a recording can sound substantially better than the CD release. It's not because of increased sample rate or depth but because the SACD used a higher-quality master. When bounced to a CD-R, the SACD version still sounds as good as the original SACD and better than the CD release because the original audio used to make the SACD was better. Good production and mastering obviously contribute to the final quality of the music [24].

The recent coverage of 'Mastered for iTunes' and similar initiatives from other industry labels is somewhat encouraging. What remains to be seen is whether or not Apple and the others actually 'get it' or if this is merely a hook for selling consumers yet another, more expensive copy of music they already own.

Surround

Another possible 'sales hook', one I'd enthusiastically buy into myself, is surround recordings. Unfortunately, there's some technical peril here.

Old-style discrete surround with many channels (5.1, 7.1, etc) is a technical relic dating back to the theaters of the 1960s. It is inefficient, using more channels than competing systems. The surround image is limited, and tends to collapse toward the nearer speakers when a listener sits or shifts out of position.

We can represent and encode excellent and robust localization with systems like Ambisonics. The problems are the cost of equipment for reproduction and the fact that something encoded for a natural soundfield both sounds bad when mixed down to stereo, and can't be created artificially in a convincing way. It's hard to fake ambisonics or holographic audio, sort of like how 3D video always seems to degenerate into a gaudy gimmick that reliably makes 5% of the population motion sick.

Binaural audio is similarly difficult. You can't simulate it because it works slightly differently in every person. It's a learned skill tuned to the self-assembling system of the pinnae, ear canals, and neural processing, and it never assembles exactly the same way in any two individuals. People also subconsciously shift their heads to enhance localization, and can't localize well unless they do. That's something that can't be captured in a binaural recording, though it can to an extent in fixed surround.

These are hardly impossible technical hurdles. Discrete surround has a proven following in the marketplace, and I'm personally especially excited by the possibilities offered by Ambisonics.

Outro

"I never did care for music much.
It's the high fidelity!"
—Flanders & Swann, A Song of Reproduction

The point is enjoying the music, right? Modern playback fidelity is incomprehensibly better than the already excellent analog systems available a generation ago. Is the logical extreme any more than just another first world problem? Perhaps, but bad mixes and encodings do bother me; they distract me from the music, and I'm probably not alone.

Why push back against 24/192? Because it's a solution to a problem that doesn't exist, a business model based on willful ignorance and scamming people. The more that pseudoscience goes unchecked in the world at large, the harder it is for truth to overcome truthiness... even if this is a small and relatively insignificant example.

"For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring."
—Carl Sagan

Further reading

Readers have alerted me to a pair of excellent papers of which I wasn't aware before beginning my own article. They tackle many of the same points I do in greater detail.

Coding High Quality Digital Audio by Bob Stuart of Meridian Audio is beautifully concise despite its greater length. Our conclusions differ somewhat (he takes as given the need for a slightly wider frequency range and bit depth without much justification), but the presentation is clear and easy to follow. [Edit: I may not agree with many of Mr. Stuart's other articles, but I like this one a lot.]

Sampling Theory For Digital Audio [Updated link 2012-10-04] by Dan Lavry of Lavry Engineering is another article that several readers pointed out. It expands my two pages or so about sampling, oversampling, and filtering into a more detailed 27 page treatment. Worry not, there are plenty of graphs, examples and references.

Stephane Pigeon of audiocheck.net wrote to plug the browser-based listening tests featured on his web site. The set of tests is relatively small as yet, but several were directly relevant in the context of this article. They worked well and I found the quality to be quite good.

Footnotes

1. As one frustrated poster wrote,

"[The Sampling Theorem] hasn't been invented to explain how digital audio works, it's the other way around. Digital Audio was invented from the theorem, if you don't believe the theorem then you can't believe in digital audio either!!"

http://www.head-fi.org/t/415361/24bi...-myth-exploded

2. If it wasn't the most boring party trick ever, it was pretty close.

3. It's more typical to speak of visible light as wavelengths measured in nanometers or angstroms. I'm using frequency to be consistent with sound. They're equivalent, as frequency is just the inverse of wavelength.

4. The LED experiment doesn't work with 'ultraviolet' LEDs, mainly because they're not really ultraviolet. They're deep enough violet to cause a little bit of fluorescence, but they're still well within the visible range. Real ultraviolet LEDs cost anywhere from $100-$1000 apiece and would cause eye damage if used for this test. Consumer grade not-really-UV LEDs also emit some faint white light in order to appear brighter, so you'd be able to see them even if the emission peak really was in the ultraviolet.

5. The original version of this article stated that IR LEDs operate from 300-325THz (about 920-980nm), wavelengths that are invisible. Quite a few readers wrote to say that they could in fact just barely see the LEDs in some (or all) of their remotes. Several were kind enough to let me know which remotes these were, and I was able to test several on a spectrometer. Lo and behold, these remotes were using higher-frequency LEDs operating from 350-380THz (800-850nm), just overlapping the extreme edge of the visible range.

6. Many systems that cannot play back 96kHz samples will silently downsample to 48kHz, rather than refuse to play the file. In this case, the tones will not be played at all and playback would be silent no matter how nonlinear the system is.

7. Oversampling is not the only application for high sampling rates in signal processing. There are a few theoretical advantages to producing band-limited audio at a high sampling rate eschewing decimation, even if it is to be downsampled for distribution. It's not clear what if any are used in practice, as the workings of most professional consoles are trade secrets.

8. Historical reasoning or not, there's no question that many professionals today use high rates because they mistakenly assume that retaining content beyond 20kHz sounds better, just as consumers do.

9. The sensation of eardrums 'uncringing' after turning off loud music is quite real!

10. Some nice diagrams can be found at the HyperPhysics site:
http://hyperphysics.phy-astr.gsu.edu...rotect.html#c1

11. 20µPa is commonly defined to be 0dB for auditory measurement purposes; it is approximately equal to the threshold of hearing at 1kHz. The ear is as much as 8dB more sensitive between 2 and 4kHz however.

12. The following paper has the best explanation of dither that I've run across. Although it's about image dither, the first half covers the theory and practice of dither in audio before extending its use into images:

Cameron Nicklaus Christou, Optimal Dither and Noise Shaping in Image Processing

13. DSP engineers may point out, as one of my own smart-alec compatriots did, that 16 bit audio has a theoretically infinite dynamic range for a pure tone if you're allowed to use an infinite Fourier transform to extract it; this concept is very important to radio astronomy.

Although the ear works not entirely unlike a Fourier transform, its resolution is relatively limited. This places a limit on the maximum practical dynamic depth of 16 bit audio signals.

14. Production increasingly uses 32 bit float, both because it's very convenient on modern processors, and because it completely eliminates the possibility of accidental clipping at any point going undiscovered and ruining a mix.

15. Several readers have wanted to know how, if ultrasonics can cause audible intermodulation distortion, the Meyer and Moran 2007 test could have produced a null result.

It should be obvious that 'can' and 'sometimes' are not the same as 'will' and 'always'. Intermodulation distortion from ultrasonics is a possibility, not a certainty, in any given system for a given set of material. The Meyer and Moran null result indicates that intermodulation distortion was inaudible on the systems used during the course of their testing.

Readers are invited to try the simple ultrasonic intermodulation distortion test above for a quick check of the intermodulation potential of their own equipment.

16. Kaoru and Shogo, Detection of Threshold for tones above 22kHz (2001). Convention paper 5401 presented at the 110th Convention, May 12-15 2001, Amsterdam.

17. Griesinger, Perception of mid-frequency and high-frequency intermodulation distortion in loudspeakers, and its relationship to high definition audio

18. Since publication, several commentators wrote to me with similar versions of the same anecdote [paraphrased]: "I once listened to some headphones / amps / recordings expecting result [A] but was totally surprised to find [B] instead! Confirmation bias is hooey!"

I offer two thoughts.

First, confirmation bias does not replace all correct results with incorrect results. It skews the results in some uncontrolled direction by an unknown amount. How can you tell right or wrong for sure if the test is rigged by your own subconscious? Let's say you expected to hear a large difference but were shocked to hear a small difference. What if there was actually no difference at all? Or, maybe there was a difference and, being aware of a potential bias, your well meaning skepticism overcompensated? Or maybe you were completely right? Objective testing, such as ABX, eliminates all this uncertainty.

Second, "So you think you're not biased? Great! Prove it!" The value of an objective test lies not only in its ability to inform one's own understanding, but also to convince others. Claims require proof. Extraordinary claims require extraordinary proof.

19. The easiest tools to use for ABX testing are probably:

Foobar2000 with the ABX plug-in

Squishyball, a Linux command-line tool we use within Xiph

20. At Hydrogen Audio, the objective testing requirement is abbreviated TOS8 as it's the eighth item in the Terms Of Service.

21. It is commonly assumed that resampling irreparably damages a signal; this isn't the case. Unless one makes an obvious mistake, such as causing clipping, the downsampled and then upsampled signal will be audibly indistinguishable from the original. This is the usual test used to establish that higher sampling rates are unnecessary.

22. It may not be strictly audio related, but... faster-than-light neutrinos, anyone?

23. Wired magazine implies that lossless formats like FLAC are not always completely lossless:

"Some purists will tell you to skip FLACs altogether and just buy WAVs. [...] By buying WAVs, you can avoid the potential data loss incurred when the file is compressed into a FLAC. This data loss is rare, but it happens."

This is false. A lossless compression process never alters the original data in any way, and FLAC is no exception.

In the event that Wired was referring to hardware corruption of data files (disk failure, memory failure, sunspots), FLAC and WAV would both be affected. A FLAC file, however, is checksummed and would detect the corruption. The FLAC file is also smaller than the WAV, and so a random corruption would be less likely because there's less data that could be affected.

24. The 'Loudness War' is a commonly cited example of bad mastering practices in the industry today, though it's not the only one. Loudness is also an older phenomenon than the Wikipedia article leads the reader to believe; as early as the 1950s, artists and producers pushed for the loudest possible recordings. Equipment vendors increasingly researched and marketed new technology to allow hotter and hotter masters. Advanced vinyl mastering equipment in the 1970s and 1980s, for example, tracked and nested groove envelopes when possible in order to allow higher amplitudes than the groove spacing would normally permit.

Today's digital technology has allowed loudness to be pumped up to an absurd level. It's also provided a plethora of automatic, highly complex, proprietary DAW plugins that are deployed en-masse without a wide understanding of how they work or what they're really doing.
https://xiph.org/~xiphmont/demo/neil-young.html

Until next week,

- js.

Current Week In Review

Recent WiRs -

August 26th, August 19th, August 12th, August 5th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing