P2P-Zone  

Old 10-01-18, 09:04 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - January 13th, ’18

Since 2002

"No president should have this power." – Neema Singh Guliani


"We’re happy to see the Ninth Circuit clarify, again, that violating a website’s terms of service is not a crime." – EFF


"A lack of competition in countless US broadband markets consistently contributes to not only high prices and slower speeds, but some of the worst customer service ratings in any industry in America." – Karl Bode

January 13th, 2018




California Introduces Its Own Bill to Protect Net Neutrality
Katharine Trendacosta

2018 has barely begun, and so has the fight to preserve net neutrality. January 3 was the first day of business in the California state legislature, and state Sen. Scott Wiener used it to introduce legislation to protect net neutrality for Californians.

As the FCC has sought to abandon its role as the protector of a free and open Internet at the federal level, states are seeking ways to step into the void. Prior to December, the FCC’s rules prevented Internet service providers (ISPs) from blocking or slowing down traffic to websites. The rules also kept ISPs from charging users higher rates for faster access to certain websites or charging websites to be automatically included in any sort of “fast lane.” On December 14th, the FCC voted to remove these restrictions and even tried to make it harder for anyone else to regulate ISPs in a similar way.

Wiener’s proposed legislation, co-authored by ten state Assembly and Senate Democrats, has a number of ways to ensure that telecom companies operating in California adhere to the principles of net neutrality. Washington and New York have similar bills in progress, and Wiener isn’t even the only California legislator proposing legislation: state Sen. Kevin de León has introduced a net neutrality bill as well.

The substance of the legislation is still in the works, but the intent is to leverage the state's assets as a means to require networks to operate neutrally. In essence, the California bill would require net neutrality of businesses that operate within the state of California if they are relying on state infrastructure or state funding to provide the service.

EFF supports this bill, as the FCC’s actions in December mean states must provide whatever protections they can to safeguard the Internet as we know it. However, state laws can only restore network neutrality for some Americans, and only a federal rule can ensure that everyone in the country has access to a neutral net.

Even as state legislatures craft bills, state attorneys general are joining public interest groups and members of Congress to challenge the FCC in federal court. Congress has the ability to reverse a change in federal regulation—which is technically what the FCC’s rule change is—with a simple majority within 60 legislative days of the order being published in the Federal Register. That means you can ask your member of Congress to save net neutrality now, since the rule is expected to be published, and the vote therefore required, this year.
https://www.eff.org/deeplinks/2018/0...net-neutrality





Nebraska Introduces Law to Re-Instate Net Neutrality

Statehouses across the country: the next battlefield for a free internet?
Eileen Guo

A state legislator, Adam Morfeld, introduced a bill Friday to restore net neutrality rules in the state of Nebraska.

The “Internet Neutrality Act” (LB856) would restore the former federal rules and prohibit broadband internet service providers from “limiting or restricting access to web sites, applications, or content.”

As Morfeld told a local newspaper, “For me, this is an economic development and consumer protection bill. The internet drives the economy now and it’s critical people have open and fair access to the internet.”


Morfeld said that he’s received widespread support for the bill, across the political spectrum. “I was passionate about it, but I was shocked at the support I received from Republicans, from Democrats and Libertarians,” he said.

Nebraska is not the only state to be using state law to fight the deeply unpopular repeal. In Washington state, lawmakers hope to force broadband companies to disclose accurate information about the price and speed of their services and prevent them from creating “fast lanes” of Internet access for consumers that pay more.

Meanwhile, in California last week, lawmakers introduced a bill that attacks the net neutrality repeal at several different levels: it would treat internet service providers as public utilities, block companies that are not following net neutrality rules from using utility poles, and prohibit government agencies from contracting with internet service providers that do not follow those rules.

Additionally, 16 state attorneys general, led by New York Attorney General Eric Schneiderman, have pledged to sue the FCC to stop the repeal.

The state laws being proposed or considered in Nebraska, Washington, and California, meanwhile, are likely to result in another, separate lawsuit, since in the FCC’s new rules, published Jan. 4, the commission specifically tried to preempt actions to enforce net neutrality at a local level.
https://www.inverse.com/article/3999...net-neutrality





AT&T and Comcast Finalize Court Victory Over Nashville and Google Fiber

Nashville won't appeal as Google Fiber-backed utility pole rule is invalidated.
Jon Brodkin

AT&T and Comcast have solidified a court victory over the metro government in Nashville, Tennessee, nullifying a rule that was meant to help Google Fiber compete against the incumbent broadband providers.

The case involved Nashville's "One Touch Make Ready" ordinance that was supposed to give Google Fiber and other new ISPs faster access to utility poles. The ordinance let a single company make all of the necessary wire adjustments on utility poles itself instead of having to wait for incumbent providers like AT&T and Comcast to send work crews to move their own wires.

But AT&T and Comcast sued the metro government to eliminate the rule and won a preliminary victory in November when a US District Court judge in Tennessee nullified the rule as it applies to poles owned by AT&T and other private parties.

The next step for AT&T and Comcast was overturning the rule as it applies to poles owned by the municipal Nashville Electric Service (NES), which owns around 80 percent of the Nashville poles. AT&T and Comcast achieved that on Friday with a new ruling from US District Court Judge Aleta Trauger.

Nashville's One Touch Make Ready ordinance "is ultra vires and void or voidable as to utility poles owned by Nashville Electric Service because adoption of the Ordinance exceeded Metro Nashville’s authority and violated the Metro Charter," the ruling said. Nashville is "permanently enjoined from applying the Ordinance to utility poles owned by Nashville Electric Service."

The Nashville Electric Service declined to take a position on the validity of the ordinance, and it said in a court filing last month that it "has no objection to the Court entering the declaration and injunction sought by Plaintiffs [AT&T and Comcast]." The utility's neutral stance helped AT&T and Comcast win the case.

The court previously ruled that the Nashville ordinance is preempted by federal law when it comes to poles owned by AT&T and other private parties.

Nashville won't appeal

The Nashville government isn't planning to appeal the decision, a spokesperson for Nashville Mayor Megan Barry told Ars today.

"While Metro is disappointed in the court's decision, at this point we don't anticipate pursuing an appeal," the spokesperson said. "As a result, the One Touch Make Ready ordinance will not be enforced in Nashville. We hope that advancements in technology and construction methods for high-speed Internet will address the problems intended to be solved through this legislation."

Google Fiber has been using "microtrenching" to install fiber underground in parts of Nashville and other cities, but it is still frustrated by long waits in getting access to utility poles.

"The company has launched service in several Nashville neighborhoods and in apartment and condo buildings," The Tennessean wrote this week.

When contacted by Ars today, a spokesperson for Google Fiber said the ISP "has made progress with innovative deployment techniques in some areas of the city, but access to poles remains an important issue where underground deployment presents challenges, especially for new providers working hard to enhance broadband access and competition. We continue to support the city of Nashville in its efforts to expand access to super-fast Internet to residents."

Google Fiber was not a party to the case, which was disputed between the incumbent ISPs and local government.

Despite Nashville's loss, it is not impossible for local governments to enforce One Touch Make Ready rules. The federal preemption of local rules does not apply in states that have opted out of the Federal Communications Commission's pole attachment rules. Louisville was able to beat an AT&T lawsuit against its own One Touch Make Ready ordinance in part because Kentucky had opted out of the federal pole attachment regime and imposed its own rules.
https://arstechnica.com/tech-policy/...-google-fiber/





Harvard Study Shows Why Big Telecom Is Terrified of Community-Run Broadband

Community-owned internet service providers are cheaper and better.
Karl Bode

A new study out of Harvard once again makes it clear why incumbent ISPs like Comcast, Verizon and AT&T are so terrified by the idea of communities building their own broadband networks.

According to the new study by the Berkman Klein Center for Internet and Society at Harvard University, community-owned broadband networks provide consumers with significantly lower rates than their private-sector counterparts.

The study examined data collected from 40 municipal broadband providers and private ISPs throughout 2015 and 2016. Pricing data was collected predominantly by visiting carrier websites, where pricing is (quite intentionally) often hidden behind prequalification walls, since pricing varies dramatically based on regional competition.

In many markets, analysts couldn’t make direct comparisons with a private ISP, either because the ISP failed to meet the FCC’s 25 Mbps down, 3 Mbps up standard definition of broadband (a problem for countless telcos who refuse to upgrade aging DSL lines), or because the ISP prequalification website terms of service “deterred or prohibited” data collection.

But out of the 27 markets where they could make direct comparisons, researchers found that in 23 cases, the community-owned ISPs’ pricing was lower when the service costs and fees were averaged over four years.

“When considering entry-level broadband service—the least-expensive plan that provides at least 25/3 Mbps service—23 out of 27 community-owned [fiber to the home] providers we studied charged the lowest prices in their community when considering the annual average cost of service over a four-year period, taking into account installation and equipment costs and averaging any initial teaser rates with later, higher, rates,” they noted.

In these 23 communities, prices for the lowest-cost service meeting the FCC’s definition of broadband were between 2.9 percent and 50 percent less than the lowest-cost such service offered by a private ISP in that market.
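
The averaging method the researchers describe is easy to reproduce. Below is a minimal sketch of that four-year calculation in Python, using made-up plan prices purely for illustration (none of these figures come from the study):

```python
# Sketch of the study's four-year average-cost comparison, using invented
# plan figures purely for illustration (not the study's actual data).

def four_year_avg_monthly_cost(teaser_rate, teaser_months, regular_rate,
                               installation_fee, monthly_equipment_fee):
    """Average monthly cost over 48 months, folding in one-time and
    recurring fees and any introductory 'teaser' pricing."""
    months = 48
    teaser_months = min(teaser_months, months)
    recurring = (teaser_rate * teaser_months
                 + regular_rate * (months - teaser_months)
                 + monthly_equipment_fee * months)
    return (recurring + installation_fee) / months

# Hypothetical community-owned plan: flat price, no teaser period.
municipal = four_year_avg_monthly_cost(60.0, 0, 60.0,
                                       installation_fee=100.0,
                                       monthly_equipment_fee=0.0)

# Hypothetical private ISP plan: 12-month teaser rate, then a higher rate,
# plus a monthly modem rental fee.
private = four_year_avg_monthly_cost(45.0, 12, 80.0,
                                     installation_fee=50.0,
                                     monthly_equipment_fee=10.0)

print(f"community-owned: ${municipal:.2f}/mo, private: ${private:.2f}/mo")
print(f"difference: {100 * (private - municipal) / private:.1f}% less")
```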

Running an open access network (where multiple ISPs can come in and compete) usually dramatically ramps up this competition. In fact, a 2009 FCC-sponsored Harvard study found that open access networks routinely result in lower prices and better service. The more competition, the better the service, the faster the speeds, and the lower the rates.

That’s not particularly surprising. A lack of competition in countless US broadband markets consistently contributes to not only high prices and slower speeds, but some of the worst customer service ratings in any industry in America. This lack of competition is another reason why ISPs can get away with implementing punitive and arbitrary usage caps and overage fees.

Harvard’s latest study found that community-owned broadband networks are not only consistently cheaper than traditional private networks, but pricing for broadband service also tends to be notably more transparent, more consistent, and less confusing.

“We also found that almost all community-owned [fiber to the home] networks offered prices that were clear and unchanging, whereas private ISPs typically charged initial low promotional or 'teaser' rates that later sharply rose, usually after 12 months,” the researchers said.

ISPs like Comcast and Charter are currently facing numerous lawsuits for using sneaky fees to covertly jack up advertised prices post sale. Even when there is competition, incumbent ISPs often try to lock customers down in long-term contracts before said competition (most commonly Google Fiber or a municipal broadband provider) comes to town.

Again, the impact of competition on rates can be dramatic. For example, AT&T charges $70 per month for gigabit broadband in markets where they face competition from a municipal broadband network or alternative ISP like Google Fiber, but can charge up to $40 to $60 more for the same service in a less competitive market.

ISPs also have a nasty habit of trying to make direct price comparisons impossible, lest the public realize what a profound impact the lack of competition has on broadband pricing. It’s a major reason why the FCC spent $300 million in taxpayer dollars on a national broadband map that, at incumbent ISPs’ request, completely omits pricing data.

“Language in the website ‘terms of service’ (TOS) of some private ISPs strongly inhibits research on pricing,” noted the Harvard study. “The TOS for AT&T, Verizon, and Time Warner Cable (now owned by Charter) were particularly strong in deterring such efforts; as a result, we did not record data from these three companies.”

All told, the study found that direct price comparisons among broadband ISPs “is extraordinarily difficult because the U.S. Federal Communications Commission (FCC) does not collect any pricing data and does not track broadband availability by address.” Efforts to shore up this problem are consistently blocked by incumbent ISP lobbyists.

The data the FCC does collect still manages to indicate that two-thirds of American households lack access to the 25 Mbps service from more than one ISP. And while gigabit networks receive a lot of hype, the reality is this lack of competition is actually getting worse in many markets as telcos like AT&T and Verizon give up on upgrading DSL networks they no longer want, giving cable operators like Charter (Spectrum) and Comcast a stronger regional monopoly than ever.

To retain this status quo, ISPs have spent decades writing and buying state laws that prohibit towns and cities from exploring community owned and operated broadband networks. More than twenty-one states have passed such laws, which not only hamstring municipal broadband providers, but often ban towns and cities from striking public/private partnerships.

It’s also why ISPs like Comcast pay countless think tankers, academics, consultants, and other policy voices to endlessly demonize community-run broadband networks as automatic taxpayer boondoggles, ignoring the many areas where such networks (like in Chattanooga) have dramatically benefited the local community.

ISPs making this argument tend to ignore the fact that these communities wouldn’t be building their own networks if they were happy with the services and pricing offered by entrenched incumbents. They also like to ignore the fact that the decision to build such a network should rest with the communities themselves, not with a duopoly ISP executive half a world away whose only real motivation is to keep the broken status quo intact.

ISPs like Comcast could nip this movement in the bud by simply offering cheaper, better service. Instead, they’ve decided to buy protectionist laws, spread disinformation about how these networks operate, and sue local communities for simply trying to find creative solutions to the broadband monopoly logjam.

As we’ve noted previously, community owned and operated broadband networks are a fantastic alternative to the broken status quo. For those outraged by the Trump administration’s attempt to kill net neutrality (and soon all remaining oversight of the nation’s entrenched monopolies) building or supporting local broadband networks is one practical avenue for retaliation.
https://motherboard.vice.com/en_us/a...-run-broadband





At the Behest of T-Mobile, the FCC Is Undoing Rules That Make it Easier for Small ISPs to Compete With Big Telecom

The rules around the Citizens Broadband Radio Service spectrum were changed in 2015 to make it easier for wireless ISPs to license space, but might be changed right before they are useful.
Kaleigh Rogers

Even as President Donald Trump spends his time promising rural Americans that closing the digital divide is a top priority, his agencies are taking steps that will only make that goal harder to achieve.

The Federal Communications Commission is currently considering a rule change that would alter how it doles out licenses for wireless spectrum. These changes would make it easier and more affordable for Big Telecom to scoop up licenses, while making it almost impossible for small, local wireless ISPs to compete.

The Citizens Broadband Radio Service (CBRS) spectrum is the rather earnest name for a chunk of spectrum that the federal government licenses out to businesses. It covers 3550-3700 MHz, which is considered a “midband” spectrum. It can get complicated, but it helps to think of it how radio channels work: There are specific channels that can be used to broadcast, and companies buy the license to broadcast over that particular channel.
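
To make the channel analogy concrete, here is a small sketch that splits the band into equal channels. The 10 MHz channel width is an assumption chosen for illustration (CBRS priority licenses are commonly described in 10 MHz blocks), not a detail taken from this article:

```python
# Illustrative only: divide the 3550-3700 MHz CBRS band into fixed-width
# channels. The 10 MHz channel width is an assumption for this sketch.

BAND_START_MHZ = 3550
BAND_END_MHZ = 3700
CHANNEL_WIDTH_MHZ = 10  # assumed width, like a "channel" on the radio dial

channels = [
    (low, low + CHANNEL_WIDTH_MHZ)
    for low in range(BAND_START_MHZ, BAND_END_MHZ, CHANNEL_WIDTH_MHZ)
]

print(f"{len(channels)} channels of {CHANNEL_WIDTH_MHZ} MHz each")
for number, (low, high) in enumerate(channels, start=1):
    print(f"  channel {number:2d}: {low}-{high} MHz")
```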

The FCC will be auctioning off licenses for the CBRS, and many local wireless ISPs—internet service providers that use wireless signal, rather than cables, to connect customers to the internet—have been hoping to buy licenses to make it easier to reach their most remote customers.

“The vast majority of wireless ISPs are using unlicensed 5Ghz spectrum to connect the customer to their tower,” said Jimmy Carr, the policy committee chair for the Wireless Internet Service Provider Association (WISPA), a trade group representing wireless ISP companies. “5Ghz spectrum is great, you can pack a lot of data on it, but the problem is that it requires a true line of sight between the customer’s home and the tower. Any trees, any hills in the way and you can’t connect the customer.”

With midband spectrum, like CBRS, however, line of sight isn’t a problem. And because it’s licensed spectrum, wireless ISPs would be able to broadcast at a higher power.

The CBRS spectrum was designed for Navy radar, and when it was opened up for auction, the traditional model favored Big Telecom cell phone service providers. That’s because the spectrum would be auctioned off in pieces that were too big for smaller companies to afford—and covered more area than they needed to serve their customers.

“Say you’re a community college and you want to set up a secure LTE network on a licensed spectrum,” Carr told me. “In the past, you couldn’t do that, because you’d have to buy a third of the state’s spectrum in a license, when you only planned to use a small portion of that.”

But in 2015, under the Obama administration, the FCC changed the rules for how the CBRS spectrum would be divvied up, allowing companies to bid on the spectrum for a much smaller area of land.

Just as these changes were being finalized this past fall, Trump’s FCC proposed going back to the old method. This would work out well for Big Telecom, which would want larger swaths of coverage anyway, and would have the added bonus of being able to price out smaller competitors (because the larger areas of coverage will inherently cost more).

So why is the FCC even considering this? According to the agency’s proposal, because T-Mobile and CTIA, a trade group that represents all major cell phone providers, “ask[ed] the Commission to reexamine several of the […] licensing rules.” Oh, also, it seems like doing smaller sized lots would be too much work.

“Licensing on a census tract-basis—which could result in over 500,000 [licenses]—will be challenging for Administrators, the Commission, and licensees to manage, and will create unnecessary interference risks due to the large number of border areas that will need to be managed and maintained,” the proposal reads.

Motherboard reached out to the FCC for comment but has not yet received a response.

The FCC is also considering other rule changes that would make it even more difficult for small ISPs to participate in the auction, such as extending the minimum license term from three years to 10. It’s a bit like a landlord requiring a business to sign a 25-year lease, which is obviously risky and expensive for mom-and-pop shops, but standard for chain stores and restaurants.

Wireless internet is far from a panacea for the digital divide. It’s not as strong or reliable as fiber to the home, and it’s worth noting that WISPA was in favor of the FCC’s decision to repeal net neutrality. But many community internet efforts have had success using wireless technology to bridge some of the gaps, and the CBRS spectrum promised to be a great opportunity for more groups to find space to send a signal. Wireless is one tool that can be used to help bridge the digital divide.

But if these rule changes go through (they’re still open for public comment until the end of January), it will be one less path to expanding rural internet, and one more win for Big Telecom.
https://motherboard.vice.com/en_us/a...th-big-telecom





Broadband Companies, Public Officials Set Sights on Super-Fast Internet
Paul Schott

The race is on to expand high-speed internet service across the country.

Stamford-based Charter Communications has emerged as a leader in the broadband industry, as it has connected millions of customers across the country in recent months to super-fast “gigabit” service. In Connecticut, public officials are also pushing ahead with a number of rapid-connection initiatives, which they argue are engines of economic growth. But these programs must tackle significant challenges — including recent regulatory changes — to fulfill their potential.

“Gigabit is the future,” said Sudip Bhattacharjee, a professor in the University of Connecticut’s business school and chief of the U.S. Census Bureau’s Center for Big Data Research. “Any business that needs extremely fast internet connections will benefit. And it will be a huge asset for any cities or towns here in Connecticut.”

Embracing gigabit

Last month, Charter announced it had added 1 gigabit per second connections with its Spectrum Internet Gig service to seven markets covering about 8.8 million people.

The term “gig” refers to internet speeds of 1 gigabit per second. One gigabit equals 1,000 megabits. Nationally, internet speeds average about 9 megabits per second, by some measures.
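
To put that gap in perspective, here is a quick back-of-the-envelope comparison; the 5 GB file size is an arbitrary example, not a figure from the article:

```python
# Rough, idealized download-time comparison at the speeds mentioned above.
# Ignores protocol overhead and congestion; the file size is arbitrary.

def download_minutes(file_gigabytes, speed_megabits_per_sec):
    file_megabits = file_gigabytes * 8 * 1000  # 1 gigabyte = 8,000 megabits
    return file_megabits / speed_megabits_per_sec / 60

FILE_GB = 5.0
for label, mbps in [("~9 Mbps (national average)", 9), ("1 Gbps (gigabit)", 1000)]:
    minutes = download_minutes(FILE_GB, mbps)
    print(f"{label}: {minutes:.1f} minutes for a {FILE_GB:.0f} GB file")
```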

Charter officials said the service would help customers to quickly stream video, play online games, download music and do other activities on multiple devices without sacrificing broadband quality.

The company declined to comment for this article.

The new markets — Austin, Texas; Charlotte, North Carolina; Cincinnati; Kansas City, Missouri; New York City; Raleigh-Durham, North Carolina; and San Antonio — join Oahu, Hawaii, its first gigabit market, which launched in late November.

In 2018, Internet Gig is set to launch in additional cities across Charter’s 41-state footprint. The company has not disclosed those planned locations.

“I think that they want to basically provide this service in high-concentration cities, where they may consider this to be economically viable,” said Ramesh Subramanian, a professor of information systems at Quinnipiac University.

Going high speed in the Nutmeg State

In Connecticut, some 72 percent of internet connections ran at speeds of more than 10 Mbps in the first quarter of 2017, ranking 10th among the states, according to a report by technology firm Akamai. The rate represented a 10-point improvement over one year earlier.

But many want to further develop fiber-optic systems that support high-speed connections, to encourage more private-sector competition and fill in gaps in communities underserved by broadband providers.

Launched in 2014, the CT Gig Project comprises a coalition of local and state public officials and other interested parties who want to bring high-speed, low-cost internet to residents and businesses throughout the state.

CT Gig does not provide funding for municipalities to develop their internet infrastructure, but it represents an important advocate and source of expertise and contact for communities looking to upgrade their broadband systems. The state’s Office of Consumer Counsel oversees the initiative.

“We want to have the best infrastructure and best available technology,” said Elin Swanson Katz, the state’s consumer counsel. “Right now, we don’t have ubiquitous fiber-optic-to-the-home access in any community. But we have the intellectual capital, and we have demand for it.”

More than 100 towns and cities have expressed interest in creating “open-access” gigabit service, according to Katz’s office.

Stamford has emerged as one of the state’s leading cities for high-speed internet access. Companies based in the city have cumulatively invested millions of dollars to connect their buildings to gigabit fiber lines.

City officials, meanwhile, have been working with several internet service providers — including Frontier Communications and Altice — to expand the area’s gigabit infrastructure. Among key initiatives, the city plans to install public gigabit Wi-Fi connections within a half-mile of the downtown Metro-North station by the end of June.

Some $360,000 in “Innovation Places” funds that Stamford has received from the CTNext economic development agency will support the project. The nonprofit Stamford Partnership is also supporting the endeavor.

“Stamford’s gigabit broadband infrastructure, with its faster speeds, reliable service and low latency is our central platform to help spur innovation,” said Thomas Madden, the city’s economic development director. “With the ability for talent and capital to flow throughout the world, it is important that we continue to look toward the future and invest in our innovation infrastructure.”

Equal access for all?

Amid the push by the likes of Charter to expand gigabit access, high-speed internet service still languishes as a faint hope for many residents and businesses in low-income urban neighborhoods and rural areas in Connecticut and across the U.S.

The U.S. ranked 10th worldwide in average internet speed, at 18.7 Mbps, compared with No. 1 South Korea, which averaged 28.6 Mbps, according to the Akamai report.

“The large and populous cities will definitely benefit in the short run,” Subramanian said of gigabit expansion. “But it would still likely leave small and less populous rural areas out in the cold. ... They may not even get the basic minimum speeds to qualify as broadband. Unless the companies are able to show a long-term strategy or proposal for expanding gigabit networks, these localized gigabit networks will not help large segments of the country.”

The Federal Communications Commission’s repeal last month of “net neutrality” rules that had regulated how broadband providers deliver and charge for content has further heightened concerns about access.

Net neutrality supporters argue the regulations were needed to ensure large internet service providers would treat all web traffic equally and protect free speech. Among other arguments, they said many firms could risk facing content slowdowns or additional charges for unobstructed broadband access.

“If companies like Charter create a ‘road tax’ to use the internet highway, then that is going to cause an issue with innovation,” said UConn’s Bhattacharjee. “If it costs too much for me to hop onto the high-speed internet, a lot of people will be negatively affected by it.”

Charter and Norwalk-based Frontier backed the elimination of net neutrality. Officials at both companies said they would not hit customers with new charges or restrict access.
http://www.newstimes.com/business/ar...s-12475924.php





Starry Internet and Marvell Want to Bust Open the ISP Industry
Jordan Crook

Expanding and upgrading wireless networks requires an astounding amount of investment, both in terms of time and resources. As we head into the era of 5G connectivity, that investment only increases.

But Starry Internet, founded by Chet Kanojia, is looking to lower the cost for the entire industry through a new partnership with Marvell.

Partnering with Marvell, the maker of the 802.11ac and new 802.11ax chipsets, Starry plans to release the reference designs for their fixed wireless technologies. This will incorporate elements of Starry’s millimeter wave fixed wireless IP for pre-standard 5G connectivity, letting any operator across the globe manufacture their own Starry Point devices and sell/distribute their own 5G network.

But let’s back up.

Starry Internet launched back in January of 2016 with a brand new way to deliver internet to urban areas. Using a phased array transmitter atop a building in a city center, users could connect to ultrafast internet through a device called the Starry Point. The Starry Point would sit outside the user’s window or on their roof and receive connectivity, via millimeter wave technology (the same stuff used in the TSA scanners at the airport), to their home.

The company has raised $63 million thus far, but revolutionizing an industry can be expensive nonetheless, especially when it’s dominated by a small number of incumbents.

To lower costs for both Starry and the wireless industry as a whole, Starry is getting the help of Marvell to build the actual radio chipset for the Starry Point system in the 802.11ax chip. Moreover, Starry and Marvell are now licensing their reference designs so that anyone can get into the wireless game. You can imagine electric companies, home security companies, smaller operators and contract manufacturers themselves getting into the game and creating more competition, all on the back of Starry’s technology.

Kanojia likens the move to Tesla’s battery business. Tesla is investing long-term in Gigafactories and opening up access to Supercharging stations and Tesla patents in hopes that the whole industry will move forward with electric. This will open up the market to electric vehicles, lowering overall cost and putting Tesla at an advantage through the Gigafactories.

This isn’t Kanojia’s first project, and it may not even be his most ambitious.

Aereo was founded back in 2010, and used large collections of micro antennae to let users watch broadcast television through their computer, phone or set top box. In essence, the antenna functioned as rabbit ears which users would rent on a monthly basis for access to a small variety of channels, complete with DVR.

The broadcast industry hated this, as Aereo and those users paid nothing to access these broadcast channels (which are technically free), and sued Aereo to high hell. Eventually, the Supreme Court ruled in favor of the broadcast networks and sent Aereo into bankruptcy.

This time, Kanojia is targeting ISPs with a brand new technology. And it comes at an interesting time. With the recent ruling on repealing Net Neutrality rules, Starry’s decision to release reference designs should theoretically create a greater level of competition within the industry, which will put more power in the hands of consumers.
https://techcrunch.com/2018/01/08/st...-isp-industry/





Trump Pushes to Expand High-Speed Internet in Rural America

U.S. President Donald Trump was expected on Monday to sign an executive order to make it easier for the private sector to locate broadband infrastructure on federal land and buildings, part of a push to expand high-speed internet in rural America.

Faster internet speeds in rural areas have long been seen as key to addressing the economic divide between rural and urban America, but the costs have so far been prohibitive.

About 39 percent of rural Americans lack access to high-speed internet service, the Federal Communications Commission said in a 2016 report.

“We need to get rural America more connected. We need it for our tractors, we need it for our schools, we need it for our home-based businesses,” a White House official told reporters ahead of Trump’s speech at the annual convention of the American Farm Bureau Federation.

“We’re not moving mountains but we’re certainly getting started,” the official said, speaking on condition of anonymity to preview Trump’s actions.

The White House described the moves as an incremental step to help spur private development while the administration figures out what it can do to help with funding, something that could become part of Trump’s plan to invest in infrastructure.

“We know that funding is really the key thing to actually changing rural broadband,” a second White House official said. (Reporting by Jeff Mason in Nashville and Roberta Rampton in Washington; Editing by Lisa Shumaker)
https://www.reuters.com/article/usa-...-idUSL1N1P30ZT





Hauling the Internet to an Ex-Soviet Outpost High in the Caucasus Mountains
Nyani Quarmyne and Kevin Granville

Several months ago, a team of men ascended the Greater Caucasus Mountains in Georgia. They led horses loaded with electrical wire, solar panels, batteries, toolboxes and drills powerful enough to grind through rock.

With jagged ridgelines above and shadowed valleys below, the men were on a quest to bring the internet to one of the world’s most remote places: Tusheti, a rural province on the Russian border.

Tusheti’s clean air, crisp blue skies and mountain-studded landscape already attract some tourists, but government officials think there is potential for many more. Access to the internet will make it easier for travelers to book reservations online, but it will also stir e-commerce and local business development, and give a lift to health care and education services in the area.

For now, though, Tusheti has little electricity, and maybe more sheep than people.

For centuries, Tusheti’s rugged terrain has encouraged a nomadic lifestyle that continues to this day. Shepherds roam the mountainsides with their flocks in the summer, and then move down to more temperate pastures for the rest of the year.

Many people who aren’t shepherds do the same. For much of the year, they live in lowland communities near Tusheti that have schools, hospitals and, yes, internet service. Come summer, they return to the rocky slopes.

Tusheti spans about 370 square miles – an area a bit larger than Berlin – but only about 50 people stay through the winter. Temperatures can drop to near zero Fahrenheit, and snow covers the main roads for up to six months.

In Shenako, one of several dozen villages in the region, only one couple stays through the winter. As the weeks pass, other people arrive by helicopter, hitching a ride on the Georgian Border Police’s monthly trips to change crews at border outposts.

For the workers on the internet mission, time could feel short.

Dusk was coming on fast when the crew reached Bochorna, a tiny hamlet whose 7,700-foot elevation makes it the highest continuously inhabited village in Europe, according to the Georgian government. The workers’ chief focus: establishing an internet connection for the lone year-round resident, Irakli Khvedaguridze, a 76-year-old doctor.

When winter arrives, the government pays Dr. Khvedaguridze to buy medical supplies; if he runs out of anything, a border police helicopter delivers what he needs.

He spends the winter treating the sick in nearby communities, reading medical journals and listening to the radio. When he needs company, he goes into Omalo, one of Tusheti’s larger villages, on his homemade skis.

Although he had no computer or smartphone, he was curious about how it all worked. His first thought: people without cell coverage would now be able to call him in the winter if they were sick or injured.

“It will be easier to get in touch now,” he said.

Financed largely through a $40,000 grant from the Internet Society, a global nonprofit, the workers came from local organizations: the Tusheti Development Fund, the Small and Medium Telecommunications Operators’ Association and Freenet, a local internet service provider. They offered materials or services for free, or at little cost. They all felt a connection of some kind to the Tusheti mountains.

“I am originally from the mountains,” said Ucha Seturi, a lawyer and engineer who is coordinating the project. “The mountain people are a little bit separate from the rest of society. They are not developing. So for me it’s really important to create real points of connection with the rest of society so they can develop.”

Many people in nearby communities are quite familiar with the allure of the internet. Temuri Babulaidze, whose father helped start the Tusheti Development Fund, enjoys playing online video games at the family’s home in Kvemo Alvani, a town where many Tushetians live in the winter.

In addition to the residents of 26 villages in Tusheti who will now be able to log on to the internet, dozens of hostels and small hotels will benefit from the service.

“Tourism is a beacon of hope for us,” said Ia Buchaidze, who owns a local bakery, “and the internet is very important for that.”

Not everyone is happy with the change.

“It's not good,” said Yochanan Herman, an Israeli traveler who was hiking the mountains, “because there are not many places in the world without Internet.”

The long days in the mountains can be hazardous for the entire crew, including the horses. The men watched helplessly one day as a packhorse stumbled and plunged off the trail, rolling down like a barrel and scattering its load as it tumbled.

The horse eventually righted itself, but it was bleeding beneath its saddle blanket. It would be O.K., the workers learned. They combed the hillside for its load. Two toolboxes were smashed; among the other casualties was a power inverter. That was a major problem: Without it, the heavy drill wouldn’t work.

The masts holding internet antennas were not the only towers visible on the mountainsides. There were also skeletal memorials to the Communist era: pylons that carried electricity across the border from the Soviet Union into Tusheti and the rest of Georgia.

The towers remain, but the power lines are gone, pulled down after the end of the Soviet Union in 1991.

There were other reminders of that era: villages abandoned after, some residents say, the Soviets forced people off the mountains.

The internet network is helping stir a turnaround for Tusheti. In the past year, a new, 100-bed hotel was proposed for Omalo. There is talk of a major ski resort and a new roadway.

The internet will undoubtedly bring the world closer to Tusheti. The question is whether its rustic otherworldliness will disappear.
https://www.nytimes.com/interactive/...-internet.html





Why Mickey Mouse’s 1998 Copyright Extension Probably Won’t Happen Again

Copyrights from the 1920s will start expiring next year if Congress doesn’t act.
Timothy B. Lee

On January 1, 2019, every book, film, and song published in 1923 will fall out of copyright protection—something that hasn't happened in 40 years. At least, that's what will happen if Congress doesn't retrospectively change copyright law to prevent it—as Congress has done two previous times.

Until the 1970s, copyright terms only lasted for 56 years. But Congress retroactively extended the term of older works to 75 years in 1976. Then on October 27, 1998—just weeks before works from 1923 were scheduled to fall into the public domain—President Bill Clinton signed legislation retroactively extending the term of older works to 95 years, locking up works published in 1923 or later for another 20 years.

Will Congress do the same thing again this year? To find out, we talked to groups on both sides of the nation's copyright debate—to digital rights advocates at the Electronic Frontier Foundation and Public Knowledge and to industry groups like the Motion Picture Association of America and the Recording Industry Association of America. To our surprise, there seemed to be universal agreement that another copyright extension was unlikely to be on the agenda this year.

"We are not aware of any such efforts, and it's not something we are pursuing," an RIAA spokesman told us when we asked about legislation to retroactively extend copyright terms.

"While copyright term has been a longstanding topic of conversation in policy circles, we are not aware of any legislative proposals to address the issue," the MPAA told us.

Presumably, many of the MPAA's members would gladly take a longer copyright term if they could get it. For example, Disney's copyright for the first Mickey Mouse film, Steamboat Willie, is scheduled to expire in 2024. But the political environment has shifted so much since 1998 that major copyright holders may not even try to extend copyright terms before they start to expire again.

The politics of copyright have changed dramatically

In 2013, on the 15th anniversary of the 1998 Copyright Term Extension Act, I wrote an in-depth look at the legislative fight over that bill. I talked to Dennis Karjala, a law professor who was part of the lonely opposition to longer copyright terms in the 1990s. He died last year.

"There was not a single argument that actually can stand up to any kind of reasonable analysis," Karjala told me. But that didn't matter very much because the lobbying muscle was entirely on one side. Major movie studios joined forces with the estates of famous authors and musicians to push for a copyright extension.

Most of the public considered copyright to be a boring subject with little relevance to their daily lives, so there was little grassroots interest in the issue. Karjala hoped that professional associations of librarians and historians—which had traditionally been important advocates for the public interest on copyright issues—would help stop the bill. But the legislation had so much momentum that these groups decided to settle for minor changes to the legislation. So the bill wound up passing without a significant fight.

The rise of the Internet has totally changed the political landscape on copyright issues. The Electronic Frontier Foundation is much larger than it was in 1998. Other groups, including Public Knowledge, didn't even exist 20 years ago. Internet companies—especially Google—have become powerful opponents of expanding copyright protections.

Most importantly, there's now a broad grassroots engagement on copyright issues—something that became evident with the massive online protests against the infamous Stop Online Piracy Act in 2012. SOPA would have forced ISPs to enforce DNS-based blacklists of sites accused of promoting piracy. It was such a bad idea that Wikipedia, Google, and other major sites blacked themselves out in protest. The digital rights activist group Demand Progress emerged from the SOPA fight and has gone on to play a key role organizing protests over network neutrality and other issues.

The protest against SOPA "was a big show of force," says Meredith Rose, a lawyer at Public Knowledge. The protest showed that "the public really cares about this stuff."

The defeat of SOPA was so complete that it has essentially ended efforts by copyright interests to expand copyright protection via legislation. Prior to SOPA, Congress would regularly pass bills ratcheting up copyright protections (like the 2008 PRO-IP Act, which beefed up anti-piracy efforts). Since 2012, copyright has been a legislative stalemate, with neither side passing significant legislation.

“The public would fight back”

And that means that advocates of a new copyright term extension bill wouldn't be able to steamroll opponents the way they did 20 years ago. Any term extension proposal would face a well-organized and well-funded opposition with significant grassroots support.

"After the SOPA fight, Hollywood likely knows that the public would fight back," wrote Daniel Nazer, an attorney at the Electronic Frontier Foundation, in an email to Ars. "I suspect that Big Content knows it would lose the battle and is smart enough not to fight."

"I haven't seen any evidence that Big Content companies plan to push for another term extension," Nazer added. "This is an election year, so if they wanted to get a big ticket like that through Congress, you would expect to see them laying the groundwork with lobbying and op-eds."

Of course, copyright interests might try to slip a copyright term extension into a must-pass bill in hopes opponents wouldn't notice until it was too late. But Rose doesn't think that would work.

Not only are there many more copyright reform advocates in Washington now than there were 20 years ago, but they're also well-networked with other public interest groups, she told Ars in a phone interview. As a result, there are "a lot of different eyes on different bills."

"The likelihood of it slipping by unnoticed" is low, Rose said.

And even some content creators aren't keen on ever-longer copyright terms. The Authors Guild, for example, "does not support extending the copyright term, especially since many of our members benefit from having access to a thriving and substantial public domain of older works," a Guild spokeswoman told Ars in an email. "If anything, we would likely support a rollback to a term of life-plus-50 if it were politically feasible."

In my 2013 article, I wrote that "the question for the coming legislative battle on copyright is who will prevail." But now it looks like there probably won't be a legislative battle at all because hardly anyone is pushing for another extension. And that means we might actually see works start to fall into the public domain next year.
https://arstechnica.com/tech-policy/...xtension-push/





The Fight For Patent-Unencumbered Media Codecs Is Nearly Won
Robert O'Callahan

Apple joining the Alliance for Open Media is a really big deal. Now all the most powerful tech companies — Google, Microsoft, Apple, Mozilla, Facebook, Amazon, Intel, AMD, ARM, Nvidia — plus content providers like Netflix and Hulu are on board. I guess there's still no guarantee Apple products will support AV1, but it would seem pointless for Apple to join AOM if they're not going to use it: apparently AOM membership obliges Apple to provide a royalty-free license to any "essential patents" it holds for AV1 usage.

It seems that the only thing that can stop AOM and AV1 eclipsing patent-encumbered codecs like HEVC is patent-infringement lawsuits (probably from HEVC-associated entities). However, the AOM Patent License makes that difficult. Under that license, the AOM members and contributors grant rights to use their patents royalty-free to anyone using an AV1 implementation — but your rights terminate if you sue anyone else for patent infringement for using AV1. (It's a little more complicated than that — read the license — but that's the idea.) It's safe to assume AOM members do hold some essential patents covering AV1, so every company has to choose between being able to use AV1, and suing AV1 users. They won't be able to do both. Assuming AV1 is broadly adopted, in practice that will mean choosing between making products that work with video, or being a patent troll. No doubt some companies will try the latter path, but the AOM members have deep pockets and every incentive to crush the trolls.
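
The termination mechanism described above is, at its core, a simple piece of conditional logic. The toy sketch below illustrates the mechanism as summarized in this post; it is not a reading of the actual AOM license text, and the company names are invented:

```python
# Toy model of the reciprocal-termination idea described above: the
# royalty-free AV1 patent grant stays in force only as long as the
# licensee does not sue AV1 users over AV1. Illustration, not legal advice.

class Av1PatentGrant:
    def __init__(self, company):
        self.company = company
        self.active = True  # royalty-free rights from AOM members/contributors

    def sue_over_av1_use(self, defendant):
        """Suing someone for patent infringement over AV1 use ends your own grant."""
        print(f"{self.company} sues {defendant} over its use of AV1")
        self.active = False

grant = Av1PatentGrant("ExampleCodecCo")
grant.sue_over_av1_use("ExampleStreamingCo")
print(f"ExampleCodecCo can still ship AV1 royalty-free: {grant.active}")  # False
```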

Opus (audio) has been around for a while now, uses a similar license, and AFAIK no patent attacks are hanging over it.

Xiph, Mozilla, Google and others have been fighting against patent-encumbered media for a long time. Mozilla joined the fight about 11 years ago, and lately it has not been a cause célèbre, being eclipsed by other issues. Regardless, this is still an important victory. Thanks to everyone who worked so hard for it for so long, and special thanks to the HEVC patent holders, whose greed gave free-codec proponents a huge boost.
http://robert.ocallahan.org/2018/01/...red-media.html





TiVo Sues Comcast Again, Alleging Operator’s X1 Infringes Eight Patents
Todd Spangler

TiVo has launched a new legal attack on Comcast aimed at pushing the cable giant to reach a settlement to license TiVo-owned patents.

TiVo’s Rovi subsidiary on Wednesday filed two lawsuits in federal district courts, alleging Comcast’s X1 platform infringes eight TiVo-owned patents. That includes technology covering pausing and resuming shows on different devices; restarting live programming in progress; certain advanced DVR recording features; and advanced search and voice functionality.

A Comcast spokeswoman said the company will “aggressively defend” itself.

“Comcast engineers independently created our X1 products and services, and through its litigation campaign against Comcast, Rovi seeks to charge Comcast and its customers for technology Rovi didn’t create,” the Comcast rep said in a statement. “Rovi’s attempt to extract these unfounded payments for its aging and increasingly obsolete patent portfolio has failed to date.”

TiVo’s legal action comes after entertainment-tech vendor Rovi (which acquired the DVR company in 2016 and adopted the TiVo name) sued Comcast and its set-top suppliers in April 2016, alleging infringement of 14 patents. In November 2017, the U.S. International Trade Commission ruled that Comcast infringed two Rovi patents — with the cable operator prevailing on most of the patents at issue. However, because one of the TiVo patents Comcast was found to have violated covered cloud-based DVR functions, the cable operator disabled that feature for X1 customers. Comcast is appealing the ITC ruling.

Both TiVo and Rovi have long histories of aggressive patent litigation. In addition to the pair of federal lawsuits against Comcast, TiVo said it will file a complaint with the ITC regarding the same patents, seeking an exclusion order preventing X1 set-top boxes from being imported into the United States.

“Our goal is for Comcast to renew its long-standing license so it can continue providing its customers the many popular features Rovi invented,” TiVo president and CEO Enrique Rodriguez said in a statement.

In a research note, B. Riley FBR analyst Eric Wold said he views the latest litigation by TiVo positively. “The company is now putting increased pressure on Comcast” to reach a settlement, according to Wold. An initial analysis of the suit indicates the patents cover features that “would be difficult for Comcast to remove from its X1 platform without significantly degrading the offering to its subscribers,” he added.

However, Wold noted there’s uncertainty in TiVo’s legal case: there’s no guarantee the federal courts or the ITC will find the patents valid or that Comcast infringes them. The analyst maintains a “neutral” rating on TiVo with a price target of $18 per share.

The new TiVo lawsuit alleges Comcast’s X1 infringes U.S. Patent Nos. 9,294,799; 9,369,741; 7,827,585; 9,578,363; 9,668,014; 9,621,956; 7,779,011; and 7,937,394. TiVo filed the lawsuits on Jan. 10 in the U.S. District Court for the Central District of California and the U.S. District Court for the District of Massachusetts.
http://variety.com/2018/digital/news...nt-1202661161/





Studios Sue Dragon Box in Latest Crackdown on Streaming Devices
Gene Maddaus

Netflix and Amazon joined with the major studios on Wednesday in a lawsuit against Dragon Box, as the studios continue their crackdown on streaming devices.

The suit accuses Dragon Box of facilitating piracy by making it easy for customers to access illegal streams of movies and TV shows. Some of the films available are still in theaters, including Disney’s “Coco,” the suit alleges.

Dragon Box has advertised the product as a means to avoid paying for authorized subscription services, the complaint alleges, quoting marketing material that encourages users to “Get rid of your premium channels … [and] Stop paying for Netflix and Hulu.”

The same studios filed a similar complaint in October against TickBox, another device that enables users to watch streaming content. Both TickBox and Dragon Box make use of Kodi add-ons, a third-party software application.

Dragon Box, which is based in Carlsbad, Calif., did not immediately respond to a request for comment.

“The commercial value of Defendants’ Dragon Box business depends on high-volume use of unauthorized content through the Dragon Box devices,” the suit alleges. “Defendants promise their customers reliable and convenient access to all the content they can stream and customers purchase Dragon Box devices based on Defendants’ apparent success in delivering infringing content to their customers.”

TickBox, which is based in Georgia, has argued that it merely offers a hardware device, akin to a laptop or a tablet, and is not responsible for any copyright infringement that may occur on that device. TickBox recently removed marketing language that seemed to promise viewers could use the device to watch subscription channels for free.

Dragon Box CEO Paul Christoforo is named as a defendant in the suit. On his LinkedIn page, Christoforo advertises that the Dragon Box device “opens up a whole new world of possibilities, where free movies and TV channels online are endless.”

Christoforo goes on to state that the device is legal.

“It is legal to stream content on the internet,” he writes, in all caps. “We can’t be held liable for the movies and TV channels online that people are watching, because all the software is doing is accessing content that is readily available online.”

The Dragon Box device lists for $350 on the company’s website. According to a recent call for resellers, the company has 250,000 customers in all 50 states.

Christoforo was formerly president of Ocean Distribution, where he became notorious online for a hostile customer service exchange, in which he advised a customer to “put on your big boy hat and wait it out like everyone else.”
http://variety.com/2018/digital/news...nt-1202660358/





On-Demand Streaming Now Accounts for the Majority of Audio Consumption, Says Nielsen
Sarah Perez

U.S. album sales declined in 2017 as streaming continues to grow, according to Nielsen’s year-end music report released this week. The report found that album sales, including both digital and physical, fell 17.7 percent last year to 169.15 million copies, down from 205.5 million in 2016. Meanwhile, streaming once again soared, leading the overall music industry to growth, largely due to the significant 58.7 percent increase in on-demand audio streams over last year.

In total, on-demand audio streams surpassed 400 billion streams in 2017, compared to 252 billion in 2016, and overall on-demand streams, including video, exceeded 618 billion. This led to the music industry’s growth of 12.5 percent in total volume, over 2016.
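
Those two totals are consistent with the roughly 59 percent growth figure cited above; a quick check using the rounded numbers from the report:

```python
# Quick arithmetic check of the audio-stream growth rate, using the
# rounded totals quoted above (in billions of on-demand audio streams).

streams_2016 = 252
streams_2017 = 400  # "surpassed 400 billion"

growth = (streams_2017 - streams_2016) / streams_2016
print(f"year-over-year growth: {growth:.1%}")  # ~58.7%, matching the report
```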

On-demand audio streaming now accounts for 54 percent of total audio consumption, Nielsen also said, up from 38 percent of the total in 2016 and 22 percent in 2015.

Notably, that makes 2017 the first time on-demand audio has accounted for the majority of audio consumption.

On-demand audio streaming has also now passed all other ownership formats, including physical and digital album sales and other digital track equivalents, for the first time in history in 2017.

The growth in on-demand streaming even contributed to a 20.5 percent rise in total digital volume in 2017, even while digital album and track purchases were down. With video and audio combined, on-demand streaming grew 43 percent this past year.

Not as surprising, Nielsen found that audio streaming is more popular on weekdays, while video streaming is tops on weekends.

Streaming’s growth is shaping the music market in other ways, too.

For example, R&B/Hip-Hop – a favorite of streaming consumers, like those on soon-to-IPO Spotify, where RapCaviar has become the most influential playlist in music – has now passed Rock as the largest genre in terms of total consumption.

R&B/Hip-Hop artists again led total volume this year, scoring 8 out of the top 10 spots for highest volume artists. Drake led with 4.8 million total track equivalents, followed by Kendrick Lamar (3.7M).

Ed Sheeran (3.6M) and Taylor Swift (3.4M), from the Pop genre, came in at number 3 and number 4 respectively, said Nielsen.

Streaming’s growth contributed to individual tracks reaching notable milestones, too. For example, in 2017 there were 19 songs that reached 500 million in on-demand streams, compared with only 6 in 2016. (And 17 of those 19 tracks were from the R&B/Hip-Hop genre.)

In addition, 10 songs surpassed 400 million on-demand audio streams, compared with just one in 2016.

The top streaming track, in terms of both video and audio, was “Despacito” by Luis Fonsi & Daddy Yankee featuring Justin Bieber. It saw a whopping 1.3 billion+ streams last year.

But even while streaming gained at the expense of album sales, there was one bright spot: vinyl.

Digital album sales were down 19.6 percent to 66.2 million in 2017; physical albums were down 16.5 percent to 102.9 million; but vinyl grew by 9 percent to 14.3 million, up from 13.1 million in 2016.

That means vinyl, which has seen 12 straight years of year-over-year increases, now accounts for 14 percent of total physical album sales. Physical albums, meanwhile, accounted for 61 percent of total albums sold.
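For readers who want to check the arithmetic, the quoted shares follow directly from the unit figures above. A minimal Python sketch, using the numbers as Nielsen reported them:

# Quick check of the album-sales shares quoted above (figures as reported).
total_albums = 169.15   # million albums sold in 2017, digital + physical
physical     = 102.9    # million physical albums
vinyl        = 14.3     # million vinyl albums

print(f"Vinyl share of physical sales: {vinyl / physical:.1%}")        # ~13.9%, i.e. "14 percent"
print(f"Physical share of all albums:  {physical / total_albums:.1%}") # ~60.8%, i.e. "61 percent"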

Nielsen’s full report, which delves into individual artist successes and other music events, is available for download here.
https://techcrunch.com/2018/01/04/on...-says-nielsen/





Facebook, Sony/ATV Sign Music Licensing Deal

Sony/ATV said on Monday it signed a licensing agreement with Facebook Inc that will allow the social media platform’s users to upload and share videos featuring music from the music publishing company’s catalogue on Facebook and Instagram.

The multi-year deal will allow artists associated with Sony/ATV — whose catalogue includes Bob Dylan, Taylor Swift and Ed Sheeran — to earn royalties from the use of their music on the social media platforms.

Facebook’s deal with Sony/ATV follows a similar agreement with Universal Music Group in December, as the social network looks to retain users and attract advertisers.

Reporting by Sonam Rai in Bengaluru; Editing by Shounak Dasgupta
https://uk.reuters.com/article/uk-fa...-idUKKBN1EX1Y9





Inside the Amish Town that Builds U2, Lady Gaga, and Taylor Swift's Live Shows

Deep in Amish country, Tait Towers designs live sets for the world's biggest music acts. Its aim? To make rock stars’ visions come alive
Stephen Armstrong

In December 2016, designer Ric Lipson was in New York on a conference call with Bono, The Edge, Adam Clayton and Larry Mullen Jr. Lipson is a senior associate at London-based design firm Stufish, the company that, along with U2's set designer Willie Williams, has created all of the band's tours since 1992's Zoo TV. In October 2016, U2 had played software giant Salesforce's annual conference on the site of the old Geneva Drive-In Theatre in Daly City, California. In homage to the Geneva, the stage had a movie screen and little else.

Now, the band wanted something similar for The Joshua Tree anniversary tour in 2017. The four musicians were leafing through proposed designs from Stufish and Williams when Bono grabbed a Sharpie and drew a rough outline of a Joshua tree breaking out through the top of the screen. That's what should be on the stage, he told Lipson.

It's always a difficult moment for designers such as Lipson and Williams when rock stars doodle their concepts for stage shows. To get a stadium tour from notion to opening night costs tens of millions. Thousands of people are needed to design, build, assemble, market and sell the show. The technology involved often doesn't exist yet.

In this case, at first, the set design looked simple - a 61-metre-wide, 14-metre-high 8K LED video screen painted gold with a silhouette of a Joshua tree picked out in silver. During the second half of the show, the screen would show epic high-definition American landscapes shot by photographer and director Anton Corbijn. There would also be a tree-shaped catwalk and satellite stage extending into the audience, plus steel trusses that dangled lights and speakers high above the stage.

To deliver that concept, however, required at least three world-first equipment prototypes: a video-controlled follow-spotlight that tracked performers using a CCTV system; a state-of-the-art carbon-fibre video screen (the largest and highest resolution ever used for a concert tour, with pixels just 8.5mm apart); and prototype speakers from audio specialists Clair Brothers that are so powerful, only 16 speakers are needed to flood even the largest stadium with sound. Furthermore, the various technical and safety standards involved meant that the stage would take three days to put up and take down, so there would need to be two sets of steel supports moving around the world at the same time, with, for instance, one under construction in Berlin as the band walked on stage in London.

"At that point, we didn't know what the kit would be, beyond the hope that technology just on the cusp of being possible would be invented in time for the start of the tour in May," Lipson says. "But rock stars don't want to hear problems and our job is not to say, 'That's impossible' - our job is to say, 'Yes, of course.'"

To get Bono's tree from sketch to stadium, Stufish and the band decamped to Lititz, a rural town in Pennsylvania. Lititz is home to Tait Towers, the architectural engineering and software company that has built the sets for every one of the ten highest-grossing tours in history using a blend of rock'n'roll engineering, technology - and a little help from the Amish community.

In 1968, a young Australian backpacker called Michael Tait took a job behind the bar at The Speakeasy Club, a late-night music industry haunt just off Oxford Street in London run by a friend of the infamous Kray twins. If anyone wanted a career in music, getting into - or best of all, getting to play at - The Speakeasy was the fastest route to stardom until it closed in 1978. The Beatles, David Bowie, Bob Marley, Pink Floyd, the Rolling Stones, Elton John and Jimi Hendrix all graced its dingy stage.

When the manager of a bunch of prog-rock newbies called Yes spent the evening touting for a van driver to get his boys to a gig in Leeds, Tait volunteered. He was stunned at the shoddiness of the band's equipment and lighting - guitarist Peter Banks kept stamping on his effects pedals, breaking them almost every time. "I realised that I could make all this stuff work," he explains. Tait became Yes's tour manager, sound engineer and lighting designer for the next 15 years.

Out on the road he leveraged his childhood love of electrical circuit kits, batteries and bulbs to devise edged boards that kept wah-wah pedals and fuzzboxes safe from stomping, create the first revolving stage in rock and design one of the first self-contained lighting towers. Other acts loved his ideas. Soon, he was working with Barry Manilow and Neil Diamond.

"Before I knew it, I was in the set business," Tait explains. He founded Tait Towers in 1978, naming the company after his industry-famous lighting tower, and located its headquarters out in Lititz, to be near his close collaborators, the Clair Brothers.

The Clair brothers - Roy and Gene - built their first speakers in 1966 when Frankie Valli and the Four Seasons played Franklin & Marshall College in Lancaster, near Lititz. Roy and Gene's PA so impressed the band that Valli took them on the road with him. In 1970, the brothers designed and built the first stage monitor, and two years later the first hanging sound system for indoor arenas. By 1978, the brothers were the first port of call for any band heading out on the road. They saw no reason to leave Lititz, so Tait set up nearby.

In the 80s, Tait built the stage that Michael Jackson moonwalked on, as well as sets for Bruce Springsteen and U2. The company built the stage for the Rolling Stones' record-breaking Voodoo Lounge tour in 1994 and the video screen for Janet Jackson's Velvet Rope tour in 1998. "Even then, it was more like a hobby," explains James Fairorth, Tait's president and CEO - a well-built, genial man with a loose ponytail who everyone knows as "Winky". "Michael Tait was Willy Wonka and we were working in a dream factory - building stage sets because nobody else was."

Then 1999 arrived, the file-sharing site Napster launched - and Tait's world changed overnight.

Alan Krueger, the Princeton economist and co-author of the 2005 paper Rockonomics: The Economics of Popular Music, describes the post-Napster music industry using what he calls the "Bowie theory". Back in the 80s and 90s, Krueger explains, most artists made most of their money from music sales, using tours as promotional vehicles for their latest album. U2 sold 14 million copies of The Joshua Tree in its year of release, earning the band around $37 million (£28m) in the US. The original 111-date Joshua Tree tour grossed roughly the same, at $40 million.

Post-Napster, the link between recorded and live revenues has been severed, a trend spotted by David Bowie in 2002 when he told The New York Times, "Music itself is going to become like running water or electricity. Artists better be prepared for doing a lot of touring, because that's really the only unique situation that's going to be left."

Crispin Hunt agrees. He experienced a brief flash of fame in the 90s as the singer in Britpop band Longpigs, best known for their indie anthem "She Said". He became a successful songwriter after the band broke up, writing hits for the likes of Lana Del Rey, Ellie Goulding, Florence + the Machine, Jake Bugg and Rod Stewart. It's a living, he explains, but the post-Napster world of streaming services and online video hasn't rewarded the songwriter.

"If I'd written songs that reached the same chart position in the 80s or 90s, I wouldn't be talking to you now," he grins wryly. "I'd be by the pool in LA. But as long as Spotify pays, on average, between $0.006 and $0.008 per stream, and while YouTube's royalties are cloaked in secrecy, that's impossible to imagine. I recently had a song on BBC Radio 1's C-list - that's six plays a week. In the same week, a Jake Bugg track I wrote had 12 million views on YouTube. I earned £75 for six plays on Radio 1 and £65 from 12 million YouTube plays. The only way to make money is to be able to sell out 2,000-seat or larger venues. Any tour, any gig, for any size of band has basic running costs - transport, crew, PA. Unless you sell over 2,000 tickets you're losing money."

In 1999, recorded music in the US - the world's biggest music market - earned an inflation-adjusted $20.6 billion, according to the Recording Industry Association of America. In 2015, auditors PwC estimated global music-industry revenues from recorded music, whether sold or streamed, totalled around $15 billion. Across that same period, the live touring industry saw the kind of expansion rarely seen outside Silicon Valley, with US concert ticket sales tripling in value between 1999 and 2009. In 2016, live music took more than $25 billion per year in ticket sales and another $5 billion in sponsorship - around double the global revenues for recorded music and larger than the GDP of Iceland.

For artists, the difference is stark. U2's album sales have been in decline since The Joshua Tree, from Achtung Baby's eight million in 1991 to, in 2009, No Line on the Horizon's 3.4 million copies sold. Ticket sales, meanwhile, have been rising: 1992-1993's Zoo TV tour, supporting Achtung Baby and Zooropa, saw box-office revenue top $151 million; 2009-2011's 360° tour took a record-breaking $736 million. The Joshua Tree's 2017 tour has fewer than half the dates of the 360° tour, but it took $62 million in its first month.

"Live music is competing for the same entertainment dollar as movies, box sets, restaurants, nightclubs and theme parks," Winky explains. "Shows have had to become spectacles to compete but the relationship between fan and star is incredibly intimate. Our challenge is, how do we wow tens of thousands of people? If you're sitting at the back of the hall, how do we deliver the artist to you in a way that feels intimate and personal? Otherwise, you're not coming back."

With a population of around 10,000, Lititz is a small market town perched in the middle of rolling wheat fields and dairy pastureland. Most of the town was built before the 20th century and comprises a mix of wooden colonial houses, Regency-era classical stone buildings, gothic Victorian red-brick shops and converted warehouses.

The surrounding area, Lancaster County, has the highest concentration of Amish - the Anabaptist sect that rejects modern technology and conveniences - in the US. Driving to Lititz from Philadelphia, you see a road dotted with small, boxy, four-wheel horse-drawn buggies. The black buggies belong to the Amish and the grey buggies belong to the more tech-savvy Mennonites.

Both communities are crucial parts of the tech-focused ecosystem spreading out from Tait's headquarters, an industrial estate at the edge of town called Rock Lititz. It's a sprawling campus of buildings built by Tait and Clair Brothers in 2014 to host companies looking to join them. It's what University of Toronto professor Richard Florida calls a place-based ecosystem. Besides Tait and Clair, businesses on site include lighting and design company Atomic; video experts Control Freak; barrier company Mojo; Stageco, which creates large steel structures such as the Claw used in U2's 360° tour; engineering firm Pyrotek; Yamaha instruments; and Tour Supply, an instrument-rental company.

It's cluster innovation in the purest sense. Artists and companies can experiment at a lower cost, test ideas and quickly change their minds. The cost of making mistakes reduces, allowing people to take greater risks. The close proximity also brings people together. "Success in this business - just like any other - is about relationships," explains Troy Clair, president and CEO of Clair Global. "You get to know people and you work with them and they trust you."

Tait, a company built on technological innovation, is not only situated in the heart of Amish country, it's entirely symbiotic with the community's back-to-basics ethos and economy. The agricultural supply chain and network of small metalwork forges allows Tait's designers and architects to build anything. A Mennonite company that makes steel cattle grids, for instance, also cuts the metal supports for Tait's rock shows.

"All my neighbours are Amish," explains Adam Davis, Tait's chief creative officer, an enthusiastic tousled man in his late 40s. "When you're a farmer and you break something, you have to fix it, especially if you're still using traditional tools and not computer-driven combine harvesters. So, when it comes to creative problem-solving, the Amish are the masters - they just get on with it. All of these farms are enterprises, with this incredible culture of innovation and making that doesn't exist in most places. If a show designer needs something made, we'll prototype custom shapes and sizes in our steel shop within 15 minutes. Then we go to an Amish forge and they'll turn out 10,000 of them almost overnight."

Rock Lititz feels like Nasa's Cape Canaveral, with outlying buildings surrounding an enormous warehouse that resembles outsized rocket assembly rooms. Walking in, you get a brief sense of what it must be like entering the TARDIS - the space feels even bigger on the inside. It's large enough to hold one stadium stage or two arena stages, with room to build and change things.

Tait's main building is a short drive from the assembly and rehearsal room. It covers 232,000 square metres and hosts a design space, project management, a metal shop, electrical-control shop, hoist and winch department, LED-video-screen team, scenic department, print shop and a complex loading dock. It's like an old Victorian family company: everyone, down to the packers and loaders, is on the payroll and the only outsourcing is to Amish craftsmen. "Everything we do is a prototype," Davis explains, as he drives across the sprawling space between rehearsal room and head office. "U2, Katy Perry, Taylor Swift… they're the CEOs of their brand. They don't want the same stuff Justin Bieber or the Rolling Stones had last year. They want something brand new. So we're in a spectacular arms race. It's probably fun to look at from the outside but it's a fairly horrible place to be because every day we have to reinvent ourselves, create something new to get to the next level with the knowledge that we can't fail, especially with the bigger flying-through-the-air stuff. That can't go wrong as people may get hurt."

Taylor Swift, Usher, Mumford & Sons, U2 and Lady Gaga have built and rehearsed shows there since it opened, "and the beauty of it is that when they go into town after rehearsals, the Amish don't know who they are," Davis grins. "We wanted a perfect techie space because I was tired of showing up in front of our clients and testing something for the first time. The problem is, there were no spaces large enough to do it. So, we built it for ourselves, for techies. But what's happening is the artists are coming - with the band, the choreographers, lighting, pyro, sound, automation, staging, content… and the creative process happens here."

Lititz offers a curious case study, fusing creativity, construction, craft, community and computing in a global billion-dollar, boutique, artisanal tech firm. So that if you were, say, Lady Gaga, you could walk through the door and follow your concept from design to build to rehearsal to load out across this one site. Which is exactly what she did for Joanne, her 2017 tour.

Lady Gaga's shows are known for their spectacle. In 2012, she had Tait build a five-storey castle on stage for her Born This Way tour. The final design for her current show featured a 26-metre-wide stage built around three lifts and five performer wave lifts surrounded by LED panels. The wave lifts are moving platforms, often compared to Tetris blocks because they can be configured in so many ways; they move almost constantly, forming staircases, zigzags and other shapes. That made for a great show, but it lacked the element of dive-bar intimacy Gaga had requested. The answer was a stripped-down, dive bar-style B-stage at the opposite end of the arena.

Jim Shumway, a project manager and integrator at Tait, who started out as a rigger for Cirque du Soleil, walked me through the process a month before the Joanne tour began. Stage designers were noodling with animation software on three-screen monitors, changing parts of the stage once the lighting and sound had been incorporated. One was manipulating a strange oval disc that seemed to be flying in the air.

"They're bridges," Shumway explained. "The B-stage has this heart-shaped acrylic piano that's got 44 lasers shooting beams through the arena whenever she hits a key. She needs to get there via a bridge. It turns out there needs to be five people dancing on that bridge, but it must be somewhere else during the rest of the show. There's the impossible, which we do all the time, and the unachievable. For a while, I thought this was unachievable."

The solution was three custom-built inflatable lighting pods that hang 18 metres above the audience, housing billboard-style video screens. Each can fly down and convert into a bridge. The bridges can then reach one of three satellite stages dotted around the main stage. When combined to form a catwalk, they stretch all the way to the B-stage. The bridges fly out over the audience while carrying Lady Gaga and her dancers, and sync with lights, lifts and music. It looks impossible, but Tait's proprietary software Navigator, says Shumway, "turns maths into art".

Navigator is a flexible piece of automation software designed to control any interface, system or device, from industrial-factory robots to light and sound desks to the winches and pulleys that move Gaga through the air. Automation software, such as that used to operate factory robots, is reliable through simplicity and repetition. Navigator, Shumway explains, has to be infinitely flexible and utterly reliable, because if it fails, someone could die. At the same time, Navigator is often controlled by people with little or no technical training.

"Most of the time, the people who make the decisions about what Navigator should do aren't engineers or developers, they're people working for directors or artists," explains Jim Love, Tait's vice president of engineering. "They're interpreting a creative person's wishes on the fly. So it needs to be as intuitive and simple as possible to do some basic programming, but the system needs to stop you from doing anything stupid."

In 2013, Navigator synced two industrial robot arms designed to build cars on factory floors and had them dance at deadmau5's Las Vegas residency. In 2015, Navigator lifted the catwalk at the front of Taylor Swift's stage and flew it, her and her team of dancers over the heads of the crowd. In 2016, Navigator rippled waves and oscillating patterns through a vast kinetic-light installation above the Red Hot Chili Peppers on their Getaway tour.

In creating the wave patterns for the lights on the Getaway tour, designers exported a video file of an animated wave to Navigator, which the software used as cues to operate Tait's Nano Winches and change the colour and position of every light. All the operator had to do was press "go" at the start; Navigator did the rest.
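Navigator itself is proprietary, so the sketch below is purely illustrative: it shows the general idea described here – sampling an animation and turning it into per-fixture cues that run from a single "go" – with hypothetical fixture counts, drop heights and colours.

import math

# Illustrative only: convert a travelling wave into per-fixture cues
# (winch drop in metres and a colour hue), the kind of cue list an
# automation system could step through once an operator presses "go".
NUM_FIXTURES = 16     # lights hung on individual winches (hypothetical)
FRAMES = 8            # animation frames sampled from the designer's video
MAX_DROP_M = 3.0      # how far a winch may lower its light, in metres

def wave_cues(num_fixtures=NUM_FIXTURES, frames=FRAMES, max_drop=MAX_DROP_M):
    """Yield (frame, fixture, drop_metres, hue_degrees) cues approximating a travelling wave."""
    for frame in range(frames):
        phase = 2 * math.pi * frame / frames
        for fixture in range(num_fixtures):
            offset = 2 * math.pi * fixture / num_fixtures
            level = (math.sin(phase + offset) + 1) / 2   # 0..1 wave height at this fixture
            yield frame, fixture, round(level * max_drop, 2), round(level * 360)

if __name__ == "__main__":
    for cue in wave_cues(num_fixtures=4, frames=2):      # tiny demo run
        print(cue)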

The roots of Navigator lie with 80s synthesizers and the technical demands of Broadway and Las Vegas shows. In 1983, synthesizer manufacturers agreed a simplified common language - MIDI - which allowed drum machines to kickstart basslines or a single keyboard to control an orchestra. Theatre picked up its principles, sending cues to trigger a task such as setting off a pyrotechnic.

Navigator uses similar principles. The building blocks of the system were put in place 15 years ago using hardware built with Intel's x86 desktop CPU and a real-time operating system. This is a similar set-up to the fly-by-wireless systems used in autonomous-vehicle design. Navigator can talk to any device such as a factory robot arm, no matter what its original coding. It can then get it to sync with a lighting rig and simplify the interface into something that any roadie could operate.

"The core principles of the architecture have stayed the same but it's a modular platform so we can build all sorts of things on top of it," Love explains. "There's machine learning in it, bits of autonomous-vehicle control and weather-measurement modules. All we've been doing for the past 15 years is writing new modules that keep giving it more power. It remembers everything we've ever asked it to do."

In a recently built theme park in China, for instance, Navigator controls a fountain that flings drops of water from post to post to give the illusion of bouncing. It has a module that understands where to point a fountain; combining that with the weather-measurement modules so the jets stay on target whatever the weather proved relatively simple. Setting up Navigator for Lady Gaga was equally straightforward, involving only a handful of modules. It was building two 36,000kg main wave lifts and three smaller lifts, then tying them in with the show's choreography, that took longer.

Crucial to Navigator's success, argues Love, is where its coder is based: Boulder, Colorado. "When you're writing code, the last thing you need is a project designer looking over your shoulder asking you to solve their problem," he explains. "That means you're always reacting to short-term issues rather than building a long-term solution."

If you were to watch the life of a large touring stage as a time-lapse film, you'd see almost every piece was in a constant state of motion, broken up by short periods of stability. "What you see at the gig is the one moment where the set stands stationary in one piece," explains Stufish CEO Ray Winkler at Twickenham Stadium as the audience files in on a sweaty July Sunday. "For most of its life it's in a box on a truck, in a plane, on a ship, being handled by stagehands in South America or Europe. This is the one break it has and this is what everyone sees."

And that breaks down further. The biggest question artist managers have for Stufish and Tait is, "What's the Instagram moment?" As tours were once tools to sell albums, Instagram is the tool that sells tours and, ultimately, the artist's brand. Research by Nielsen in 2016 found that of those in the audience who used social media during gigs, 83 per cent used Instagram. Everything boils down to a handful of frozen images to be sent, shared, copied and liked.

On the Joshua Tree tour, U2's performance was divided. For the first part of the gig, as the sun beat down on the thousands of middle-aged men packing the stadium, the band ran through early hits such as "Sunday Bloody Sunday" on the low catwalk B-stage. At sunset, the four musicians walked back to the main stage to begin playing songs from The Joshua Tree and stopped briefly centre stage to wave at the crowd. Behind them, the screen glowed blood red and they were shown in silhouette, under the pitch-black shape of the tree.

"We posed the band," Lipson says. "Tait built a platform and played with it for a day or so until we had the perfect position. Then we told them to wait there for 30 seconds." It worked. The audience yelled like teenage girls at a Justin Bieber gig and held their phones aloft to take photo after photo to be shared millions of times, pushing the tour out to billions of people on social media. It's the ecstatic pause, the live-album cover shot that no longer needs the album cover.

Slowly, this is influencing the way theatres and other buildings are designed. Tait is pitching the kinetic architecture from the Red Hot Chili Peppers' Getaway tour as an installation for airports and parlaying its understanding of live shows into building London's newest theatre, the Bridge Theatre - for former National Theatre artistic director Nick Hytner and executive director Nick Starr's London Theatre Company - in a first-of-its-kind modular concept.

"This technology that we deal with has to be scaleable and transferable," Winkler says. "Popular culture and pop imagery is the currency of our generation. It doesn't matter if you're dealing with a rock'n'roll stage or a railway station, people take pictures in the same format, whether it's of an airport terminal or a video screen. That's what they trade in. If something doesn't look good on Instagram, no one's going to give a shit."

If Winkler's right, and the trend for connection with the perfect picture continues to be central to offline art, architecture, food and friendship, we'll soon be living in Tait's world of Instagram moments in every kind of design. In that world, when Bono draws a tree, it could be shared around the world by millions.
http://www.wired.co.uk/article/tait-...concerts-stage





As Low-Power Local Radio Rises, Tiny Voices Become a Collective Shout
Kirk Johnson

A knowledge of geography is essential if you are running a tiny, 100-watt radio station. Hills are bad, for example, as are tall buildings. Salt water, though, which lies at this city’s doorstep, can boost a radio signal for miles, like a skipped rock.

For a low-power FM radio station, anything measurable in miles is good.

But on a recent Thursday night, one station, KBFG, was struggling to even get on the air. The station’s signal, audible since November in an area measurable in square blocks, had flatlined. The Ballard High School basketball team was about to take the court and the live play-by-play was in doubt.

“We’re bootstrapping it,” said Eric Muhs, a physics and astronomy teacher. Headphones were slung around his neck, and a mop of unruly gray hair came further undone as he leaned into his laptop trying to fix a software glitch. But Mr. Muhs, 60, one of KBFG’s founders, admitted that the stakes for failure were relatively low. “Almost nobody knows that we exist,” he said.

Low-power nonprofit FM stations are the still, small voices of media. They whisper out from basements and attics, and from minuscule studios and on-the-fly live broadcasts like KBFG’s. They have traditionally been rural and often run by churches; many date to the early 2000s, when the first surge of federal licenses was issued.

But in the last year, a diverse new wave of stations has arrived in urban America, cranking up in cities from Miami to the Twin Cities in Minnesota, and especially here in the Northwest, where six community stations began to broadcast in Seattle. At least four more have started in Portland. Some are trying to become neighborhood bulletin boards, or voices of the counterculture or social justice. “Alternative” is the word that unites them.

“It’s an unprecedented time in our radio history when we have so many stations getting on the air at the same time,” said Jennifer Waits, the social media director at Radio Survivor, a group in San Francisco that tracks and advocates for noncommercial radio.

Weird Is Good

Low-power FM stations can typically be heard for about three and a half miles if a bigger station or obstacle does not block the signal. Of the nearly 2,500 such stations in some stage of licensing, construction or active broadcast across the nation, more than 850 have a license holder with a religious affiliation.

Many bigger stations, by contrast, are being programmed far from the cities they serve, with corporate budgets to buy transmitters that can then boost a signal beyond its home base. The low-power licenses are exclusively local, restricted to nonprofit groups that might have a civic cause — the South Philadelphia Rainbow Committee, for example — or were formed solely for the sake of a station and the dreams that fuel its existence.

Washington has the second-highest concentration of them among the nation’s 15 most populous states, with 68 stations for 7.4 million people, according to the Federal Communications Commission, second only to Florida. New York, by contrast, has 54 stations, but nearly three times Washington’s population. Oregon — while not among the 15 most populous states, with 4.1 million people — is even more saturated than Washington and Florida; it has 80 low-power stations, most in rural areas.

You want weird? Just turn the dial. One station in Seattle invites listeners to phone their dreams and fantasies into a recorded line, then puts them on the air, at least the ones that don’t raise concerns about F.C.C. indecency rules.

Russian-speaking residents in Portland, Ore., have their own tiny station.

And if you want to be charmed by a 5-year-old boy chatting with his father at bedtime about dinosaurs, music and his sometimes bothersome sisters, you can find that at Tristan’s Bedtime Radio Hour, broadcast on Sunday nights on KBFG in Northwest Seattle, where Tristan lives. It also streams on the web.

Help From Community Groups

What low-power urban radio creates, believers say, is a sense of community, a defined physical stamp of existence that goes only as far as it can be heard. So new licensees and programmers are knocking on doors near their antennas and holding fund-raisers at the local brewpub. That’s a stark contrast to the amorphous everywhere-but-nowhere world of the web, and the web-streaming radio or podcasts that a few years ago seemed most likely to take center stage in low-budget community communications.

“When you start broadcasting, it’s like you have a storefront,” said Rebecca Webb, founder of the Portland Radio Project, KSFL 99.1, which broadcasts from two rooms above a closed silent-movie-era theater built around 1915. The station promises to play a Portland-area music group every 15 minutes, and in a time of media consolidation, Ms. Webb said, that’s a political act.

“The fact that we have gathered ourselves up by our bootstraps and created a community radio station is in direct response to the ownership concentration of large media companies,” she said.

Many community groups with no money and often no experience in radio got help in starting their stations. A Seattle-based event ticketing company with a social mission in working with nonprofits allowed a staff organizer, Sabrina Roach, to help people manage the F.C.C. process with seminars, training and advice.

In Oregon and California, a group called Common Frequency jumped in, especially in rural regions, helping people get licenses as they came available. In Philadelphia, the Prometheus Radio Project led a fight to get the F.C.C. to relax rules to allow more low-power FM stations, especially in urban markets, which big broadcasters had opposed.

The recent F.C.C. vote to end so-called net neutrality, under which internet users were guaranteed equal speed and access, might not directly affect small radio broadcasters that do not livestream. But advocates said the decision amplified the importance of small voices, however they are expressed.

“If it gets harder for independent media to stream online, the low-power FM stations will become even more important,” said Todd Urick, a radio engineer who helped lead Common Frequency.

Voices From the Trenches

Clara Pluton, a stand-up comedian in Seattle who pays the bills by waiting tables, hosts a radio show with Val Nigro, who also does comedy, every other Tuesday night on Hollow Earth Radio, KHUH 104.9. The station began broadcasting in the Central District of Seattle in September, and Ms. Pluton and Ms. Nigro are now in their second month of “queer talk,” as they describe their show. There’s no salary, no fame, no certainty of an audience of any kind, the women said in an interview at the station, but there are deep rewards nonetheless in knowing that they are speaking out about their lives and their city.

When things go wrong, she said, and they do — a curse word slipping out, or a bad, skipping CD — it’s part of the experience for volunteers and listeners alike. Lack of polish is part of the authenticity.

“It’s like members of the community broadcasting to members of their community,” Ms. Pluton said.

Some volunteer D.J.s, like Bob Knowles in Portland, found a place in local radio after 25 years fishing for halibut in Alaska. He tuned in one day to KSFL, the Portland Radio Project, and liked it so much that he went in and got his own show, “Throwin’ it Back Thursdays,” playing obscure or forgotten musical tracks as one of the station’s 40-odd volunteers.

Gary Dunn, a 17-year-old junior at Ballard High in Seattle who was helping to broadcast the varsity basketball game, said he liked the surprise that KBFG offered of not knowing what might come next, a song he has never heard, a perspective in politics or in life that is unfamiliar. He also likes the fact that radio is an old technology, one his great-grandparents would have known. “Old devices still help us,” he said.

Mr. Muhs, the physics teacher, ultimately got his software back up and running in time for the Ballard Beavers’ tip-off against the Franklin High Quakers. But it wasn’t the Beavers’ night, and they lost, 70-39.
https://www.nytimes.com/2018/01/06/u...wer-radio.html





Samsung Smartphones will have their FM Chip Enabled in the US and Canada, in Partnership with NextRadio
Rita El Khoury

Some things baffle me about the US. The blind love of SMS is one, and the fact that the FM chip in smartphones isn't activated on many devices in the country for some reason (read: operator greed) is another. But things have been moving in the right direction: LG announced a partnership with NextRadio to unlock the FM chip in its smartphones a few months ago and now the same is happening with Samsung.

NextRadio made the announcement, rightly explaining that FM radio is essential in areas with low connectivity and in emergency and disaster situations where a connection might be difficult to obtain or maintain and where access to information could be a matter of life and death. With the chip unlocked, users will be able to listen to local radio on their phone using the NextRadio Android app.

The press release mentions that "upcoming [Samsung] smartphone models in the U.S. and Canada" will have the FM chip unlocked; however, I did find several existing Samsung devices with their FM chip enabled on NextRadio's site. Huh. Maybe those of you living in the US can shed some light on those contradictory details.
http://www.androidpolice.com/2018/01...hip-nextradio/





Two Major Apple Shareholders Push for Study of iPhone Addiction in Children
Luke Kawa

• Investors push tech giant to give parents more access control
• Studies needed to determine effect of usage on mental health


Two big shareholders of Apple Inc. are concerned that the entrancing qualities of the iPhone have fostered a public health crisis that could hurt children -- and the company as well.

In a letter to the smartphone maker dated Jan. 6, activist investor Jana Partners LLC and the California State Teachers’ Retirement System urged Apple to create ways for parents to restrict children’s access to their mobile phones. They also want the company to study the effects of heavy usage on mental health.

“There is a growing body of evidence that, for at least some of the most frequent young users, this may be having unintentional negative consequences,” according to the letter from the investors, who together own about $2 billion in Apple shares. The “growing societal unease,” they wrote, “at some point is likely to impact even Apple.”

“Addressing this issue now will enhance long-term value for all shareholders,” the letter said.

An Apple spokesman declined to comment on the letter, which was reported earlier by the Wall Street Journal.

Parental Controls

It’s a problem most companies would kill to have: Young people liking a product too much. But as smartphones become ubiquitous, government leaders and Silicon Valley alike have wrestled for ways to limit their inherent intrusiveness.

France, for instance, has moved to ban the use of smartphones in its primary and middle schools. Meanwhile, Android co-founder Andy Rubin is seeking to apply artificial intelligence to phones so that they perform relatively routine tasks without needing to be physically handled.

Apple already offers some parental controls, such as the Ask to Buy feature, which requires parental approval to buy goods and services. Restrictions can also be placed on access to some apps, content and data usage.

The activist pressure is the latest in a series of challenges for the tech giant. Last week, Cupertino, California-based Apple said that all of its Mac computers and iOS devices, which include both the iPhones and iPads, were affected by the Meltdown and Spectre security flaws found in modern processors. At the tail end of 2017, the company apologized to customers for software changes that resulted in older versions of its iPhones running slower than newly introduced editions.

— With assistance by Scott Deveau, and Alex Webb
https://www.bloomberg.com/news/artic...on-in-children





Ninth Circuit Doubles Down: Violating a Website’s Terms of Service Is Not a Crime
Jamie Williams

Good news out of the Ninth Circuit: the federal court of appeals heeded EFF’s advice and rejected an attempt by Oracle to hold a company criminally liable for accessing Oracle’s website in a manner it didn’t like. The court ruled back in 2012 that merely violating a website’s terms of use is not a crime under the federal computer crime statute, the Computer Fraud and Abuse Act. But some companies, like Oracle, turned to state computer crime statutes—in this case, California and Nevada—to enforce their computer use preferences.

This decision shores up the good precedent from 2012 and makes clear—if it wasn’t clear already—that violating a corporate computer use policy is not a crime.

Oracle v. Rimini involves Oracle’s terms of use prohibition on the use of automated methods to download support materials from the company’s website. Rimini, which provides Oracle clients with software support that competes with Oracle’s own services, violated that provision by using automated scripts instead of downloading each file individually. Oracle sent Rimini a cease and desist letter demanding that it stop using automated scripts, but Oracle didn’t rescind Rimini’s authorization to access the files outright. Rimini still had authorization from Oracle to access the files, but Oracle wanted the company to download them manually—which would have seriously slowed down Rimini’s ability to service customers.
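For readers unfamiliar with the technical side, an "automated script" here is nothing more exotic than a short loop that fetches a list of files instead of a person clicking each link by hand. The sketch below is purely illustrative; the URL and filenames are hypothetical placeholders, not Oracle's site or Rimini's actual tooling.

import time
import urllib.request

# Illustrative only: download a known list of support files in a loop
# rather than clicking each one manually. The base URL and filenames
# are hypothetical placeholders.
BASE_URL = "https://support.example.com/downloads/"
FILES = ["patch_001.zip", "patch_002.zip", "patch_003.zip"]

for name in FILES:
    with urllib.request.urlopen(BASE_URL + name) as resp:
        data = resp.read()
    with open(name, "wb") as out:
        out.write(data)
    time.sleep(1)   # brief pause between requests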

Rimini stopped using automatic downloading tools for about a year but then resumed using automated scripts to download support documents and files, since downloading all of the materials manually would have been burdensome, and Oracle sued. The jury found Rimini liable under both the California and Nevada computer crime statutes, and the judge upheld that verdict—concluding that, under both statutes, violating a website’s terms of service counts as using a computer without authorization or permission.

Rimini Street appealed, and we filed an amicus brief last year urging the court to reject Oracle’s position. As we told the court, the district court’s reasoning turns millions of Internet users into criminals on the basis of innocuous and routine online conduct. By making it completely unclear what conduct is criminal at any given time on any given website, the district court’s holding violates the long-held Rule of Lenity—which requires that criminal statutes be interpreted to give clear notice of what conduct is criminal. Not only do people rarely (if ever) read terms of use agreements, but the bounds of criminal law should not be defined by the preferences of website operators. And private companies shouldn’t be using criminal laws meant to target malicious actors as a tool to enforce their computer use preferences or to interfere with competitors.

At oral argument in July 2017, Judge Susan Graber pushed back [at around 33:40] on Oracle’s argument that automated scraping was a violation of the computer crime law. And Monday, the 3-judge panel issued a unanimous decision rejecting Oracle’s position. As the court held:

“[T]aking data using a method prohibited by the applicable terms of use” — i.e., scraping — “when the taking itself generally is permitted, does not violate” the state computer crime laws.

The court even refers to our brief:

“As EFF puts it, ‘[n]either statute . . . applies to bare violations of a website’s terms of use—such as when a computer user has permission and authorization to access and use the computer or data at issue, but simply accesses or uses the information in a manner the website owner does not like.’”

We’re happy to see the Ninth Circuit clarify, again, that violating a website’s terms of service is not a crime. And we hope this decision influences another case pending before the court involving an attempt to use a computer crime statute to enforce terms of service and stifle competition, hiQ v. LinkedIn. That case addresses whether using automated tools to access publicly available information on the Internet—information that we are all authorized to access under the Web’s open access norms—is a crime. It’s not, and we hope the court agrees. It will hear oral argument in March in San Francisco.
https://www.eff.org/deeplinks/2018/0...vice-not-crime





'All Happening Very Quickly': Tesla Battery Sends a Jolt Through Energy Markets
Peter Hannam

When it comes to hype, there is probably nobody as outlandish as US-based billionaire Elon Musk and his Tesla corporation.

Who else would plan to blast one of his new electric vehicles into space aboard his company's SpaceX rocket bound for Mars?

"Payload will be my midnight cherry Tesla Roadster playing Space Oddity," Musk tweeted last month. "Destination is Mars orbit. Will be in deep space for a billion years or so if it doesn't blow up on ascent."

A mini version of the hyperbole has been on show in Australia, following the installation late last year of a 100-megawatt lithium ion battery – the world's largest. Musk famously offered to supply it for free if his firm couldn't build it within 100 days.

It arrived in time for this summer's strains on the electricity system, and may come in handy as the mercury soars this weekend over a region of Australia in an arc from Adelaide to Tasmania and up to Brisbane and beyond.

The big battery, located next to the Hornsdale wind farm in the mid-north region of South Australia, has already been active, drawing interest from well beyond these shores.

The Los Angeles Times and Washington Post were among international publications to cover the battery's early success in shoring up Australia's electricity grid.

Interest was sparked in part by the battery's quickfire response – just 0.14 seconds – to inject electricity into the network following the failure of a 559-MW unit of Loy Yang A in Victoria's Latrobe Valley.

"It appears to be far exceeding expectations," the LA Times trumpeted. "In the last three weeks alone, the Hornsdale Power Reserve [as the battery is known] has smoothed out at least two major energy outages, responding even more quickly than the coal-fired back-ups that were supposed to provide emergency power."

'Lumbering coal'

The concept of the battery beating out coal-fired power was a key part of the story, prompted by an article on the RenewEconomy website, headlined: "Tesla big battery outsmarts lumbering coal units after Loy Yang trips".

"By the time that the contracted Gladstone coal unit had gotten out of bed and put its socks on so it can inject more into the grid – it is paid to respond in six seconds – the fall in frequency had already been arrested and was being reversed," the report said.

As impressive as it seemed, the reality, though, was probably more prosaic.

Analysis by Dylan McConnell, a researcher at Melbourne University's Climate & Energy College, found each of Gladstone's six units increased output and had supplied 12.7 MW of the shortfall before the battery "had done anything".

It also appears that at least for the Loy Yang unit tripping on December 14, the battery was not "enabled" in the back-up market for Frequency Control Ancillary Services (FCAS). In other words, the battery operators probably weren't paid for that intervention.

'Outstanding'

While the hype rings a bit hollow in that instance, there's no doubt the battery has been making a difference, responding to four coal generator trips in December alone.

Franck Woitiez, managing director at Neoen – the French operator of the battery – told Fairfax Media its performance had been "outstanding". (Tesla, as is its wont, declined to comment.)

"We are very proud of the battery performance throughout December and the start of January," Mr Woitiez said, adding the company had received "quite a few inquiries" about its operations.

Critics have quibbled over the battery's size, highlighting that on its own it could only supply perhaps 30,000 homes for an hour or so, at a cost guessed at $US50 million ($64 million).

But such figures ignore the many benefits – including supporting the security of the grid – that are only beginning to be understood.

"The battery has been dispatched on multiple occasions for both energy and FCAS," a spokesman for the the Australian Energy Market Operator tells Fairfax.

For December, "the battery was dispatched for energy on over 380 separate five-minute dispatch intervals, and enabled on over 4600 separate dispatch intervals in one or more FCAS markets", he said.

'Significant' savings

The SA government also spruiks the benefits.

"It is difficult to determine price trends at this early stage, however the battery has been active in the Raise and Lower Regulation Frequency Control and Ancillary Services [R-FCAS] markets since commissioning," a SA government spokesman tells Fairfax.

"The cost of Raise and Lower R-FCAS in SA in December 2016 was $502,320, compared with just $39,661 in December 2017, following the operation of the battery," he said.

"In recent times FCAS services have cost South Australians about $50 million each year," he says. "The battery is expected to significantly reduce the cost."

According to Mr McConnell, the battery dispatched about 2.5 gigawatt-hours of electricity while consuming about 3 gigawatt-hours, representing a round-trip efficiency of about 80 per cent.
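Those figures are easy to check against the numbers quoted in this piece (the December R-FCAS costs appear in the SA government's comments above); a quick Python sanity check:

# Quick checks using the figures quoted in the article.
dispatched_gwh, consumed_gwh = 2.5, 3.0          # GWh sent back to the grid vs GWh consumed
fcas_dec_2016, fcas_dec_2017 = 502_320, 39_661   # SA Raise/Lower R-FCAS costs, A$

print(f"Round-trip efficiency: {dispatched_gwh / consumed_gwh:.0%}")                 # ~83%, "about 80 per cent"
print(f"December R-FCAS cost reduction: {1 - fcas_dec_2017 / fcas_dec_2016:.0%}")    # ~92% year on year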

"The performance to date has been very impressive. It's ramp-up from zero output to maximum in seconds (or less) is something that we haven't seen in the electricity market before," Mr McConnell said, noting the current fleet of "fast start" units take five to 10 minutes to synchronise to the grid and start providing power.

Cases of the Tesla battery responding without being "enabled" could also be part of its testing, and there may also be arrangements with the SA government separate from the FCAS market, he said.

Victoria moves too

A smaller 20-MW battery deal signed last week between Neoen and the Victorian government – again using Tesla – will provide similar benefits to Victoria when it comes online in mid-2019.

The site, next to a wind farm near the western Victorian town of Stawell, could be the first of perhaps a dozen or more battery and storage ventures in the pipeline, according to the Smart Energy Council.

"What we're seeing in South Australia and in Victoria is really the tip of the iceberg for projects that will be coming along," John Grimes, the head of the council, said.

Bruce Mountain, director of Carbon and Energy Markets, a consultancy, said the battery is already proving its worth with the full implications still to come.

"The biggest single source of insecurity to the power system is a trip of a major coal thermal generator unit simply because they are so large – [it's] not the wind or the sun, or people switching on their airconditioners," he says.

Batteries are also useful in taking up excessive supply should demand suddenly drop, affecting the frequency of the grid on the upside.

"In the olden days, this was simply sent out to large heatloads, which would just heat up, and waste all the energy into the air," he said.

Slowing down progress

The arrival of batteries and other storage that can be released immediately has exposed flaws in the existing market. One issue is that generators are dispatched in five-minute intervals but paid on the average price over half an hour, and an alignment of the two is not due to kick in for years.

"Bringing the settlement period in line with the trading period, which will come from 2021, will be a major step in allowing batteries to compete effectively and get their full value," Mountain said.

That delayed implementation is "symptomatic" of how the industry, including regulators, continues to be dominated by major, centralised operators, he said. (AGL, Energy Australia and Origin Energy are the three biggest so-called gentailers, combining generation and energy retailing.)

"They do all they can to slow down progress and ensure the market compensation mechanisms don't suit them," Mountain said. "And the energy market authorities have generally been in their pockets."

The power industry has been struggling for years as ageing coal-fired power plants close and shifting federal and state policies have created busts and booms in renewable energy. Troubles included South Australia and its 1.7 million residents being hit by a blackout following a storm in September 2016 and NSW narrowly dodging major forced outages during a heatwave in February 2017.

Snow job

Mountain is scathing of the federal government's response, not least its promotion of the Snowy 2.0 pumped hydro scheme as a way to support the grid.

By his estimates, the proposed scheme – which Prime Minister Malcolm Turnbull has touted as one of his government's major responses to the nation's energy crunch – will need 1.8 megawatt-hours of energy for each megawatt-hour of storage it delivers.

"There is no doubt at all that the revenues it will produce won't compensate the capital costs," Mountain says, estimating it would take as much as a decade to build and balloon out to $8 billion – or four times Turnbull's initial estimate.

"It took them a couple of months to build Musk's battery, which is essentially a white good that you plug and play," he said.

"I've got absolutely no doubt that batteries will win hand over fist."

Households and businesses are also seeing batteries emerge as a viable option to add to solar panels, reducing exposure to higher power prices.

Mountain estimates batteries and solar PV with grid back-up are "now competitive on any grid offer" in South Australia, and the same is true for about a third of residents in Victoria and NSW.

"It will be true for two-thirds in a couple of years' time – if not in a year's time," Mountain predicts. "It's all happening very quickly."
http://www.smh.com.au/environment/al...03-h0cxr7.html





A Laptop with Three Days of Battery Life: It's Coming
Marc Saltzman

Is this the dawn of the multi-day laptop battery?

“Always connected personal computers” — or ACPCs — refer to a new breed of Windows laptops with three key features: a battery that can last multiple days; instant-on access when you open the lid or touch a key; and an optional high-speed cellular connection, to avoid hunting for a Wi-Fi hotspot to get online.

In other words, your laptop is going to behave a lot more like your smartphone.

Qualcomm — the world’s largest smartphone chip maker — is largely spearheading this emerging category. This marks the San Diego-based company’s second foray into the computer space, after the Windows RT mobile operating system, which debuted in 2012, failed to catch on.

Intel is also a major player in this space, having worked on the first cellular-supported PC back in 2005 (with Sony). It's been heavily involved in battery improvements over the past few years.

But if you believe the hype, what we’ll see debut in 2018 will be nothing like we’ve witnessed in the past.

“With computers we have today, you’re lucky if you can get 15 hours of battery performance — and in most cases, it’s 8 to 10 hours, if that – so where I see the breakthrough here is a new benchmark of 22 hours, and standby of at least a week,” says technology analyst Tim Bajarin, who also serves as president at Creative Strategies, one of the first market research firms in Silicon Valley.

In fact, with the Qualcomm Snapdragon 835 processor, ASUS is claiming battery life of up to 22 hours of continuous video playback, and up to 30 days on standby.

At $799, the ASUS NovaGo (model # TP370) will also be the first always-connected PC with a 360-degree flip hinge – making it a “2-in-1” that can convert from laptop mode to a tablet by bending back the 13.3-inch screen – and the first with Gigabit LTE speeds, for an always on, always connected experience.

“I’ve been using these devices for many months, and the one thing that often gets overlooked is the ‘always on’ feature,” adds Miguel Nunes, senior director of product management at Qualcomm Technologies, Inc. “Like your smartphone, even when the screen is off, it’s still connected, so when I open the lid, it does facial recognition, and I’m in.”

Speed boon, too

Along with multi-day battery performance, these always-on PCs take advantage of ubiquitous cellular connectivity.

“With the NovaGo, you don’t have to find a Wi-Fi hotspot, and you get fast 1 gigabit-per-second wireless Internet speeds that are between 3 to 7 times faster than the average broadband speed,” says Randall Grilli, director of media relations at ASUS North America. “It allows you to download a 2-hour movie in about 10 seconds.”
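That claim is roughly consistent with simple arithmetic, assuming the full gigabit per second is actually available and the film is a typical compressed HD file of a bit over a gigabyte:

# Sanity check: data moved over a 1 gigabit-per-second link in 10 seconds.
link_gbps = 1.0
seconds = 10

gigabytes = link_gbps * seconds / 8   # 8 bits per byte
print(f"{gigabytes:.2f} GB in {seconds} s")   # ~1.25 GB, roughly a compressed 2-hour HD film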

ASUS is supporting both a nano SIM and built-in eSIM, the latter of which allows you to easily switch networks in areas that support it, says Grilli.

Cellular connectivity is optional with always-connected PCs, since the user must pay for data. Details are still scarce on provider pricing plans — ASUS says users may work directly with Microsoft on data plan activation or directly with a provider, for instance — but SIM-supported laptops haven’t been adopted by the mainstream in the past, reminds Bajarin.

“Consumers are often reluctant to pay extra money for an additional data SIM, so until we see people actually putting down dollars for connectivity, I’m not sure if that will drive ACPCs,” says Bajarin. “What will drive this is 22 hours of battery. Ultimately, the consumer wants all-day computing, even though always-connected would be a good feature, too.”

Tradeoffs? Perhaps

These always-on PCs sound amazing, no doubt, with their long battery life, always-on architecture, and LTE connectivity. So, what’s the catch, you ask?

Though it’s too early to know for sure, power and compatibility might not be what you’re used to with previous Windows laptops.

Intel has been trying to make PCs more mobile for six or seven years, first with ultrabooks, then with 3G and LTE connectivity options — such as the LTE-equipped Samsung Galaxy Book 12 on Verizon.

“First and foremost, it must be a great PC. It has to deliver performance…and PC experiences…that consumers expect,” said Josh Newman, general manager of the mobile innovation segment at Intel.

Newman says when consumers take the new PC out of the box “it should just work with all the software they’re used to working with – and work better than the 4- or 5-year-old PC they may be replacing – and the same goes for multitasking and peripherals, too.”

Even Qualcomm concedes it’s not going after those who demand serious horsepower. “For full disclosure, we are not a high-end gaming PC. That’s not Qualcomm,” says Nunes. “Our strength is in mobility, thin and light devices, and with Microsoft, we focused heavily on what people are doing with their devices.”

While ASUS and HP have confirmed support for Qualcomm’s ACPCs, and other major players will likely unveil their wares early next week at the annual tech trade show CES in Las Vegas, not everyone is onboard.

“Dell is not planning to announce any PCs with Qualcomm Snapdragon processors in the foreseeable future,” said Jay Parker, president of the client product group at Dell, in a statement provided to USA TODAY.

“We have a strong portfolio of PCs for consumer and commercial customers that deliver excellent battery life with LTE connectivity – which constitutes ‘always on’ in the customer’s mind,” says Parker. “We find that our customers don’t want to sacrifice full functionality and performance – that’s what our products deliver. The current Snapdragon processor doesn’t allow us to strike that right balance today.”
http://www.king5.com/article/news/na...d-0abc16620cb1





‘It Can’t Be True.’ Inside the Semiconductor Industry’s Meltdown
Ian King

• Technology titans work in secrecy for months to fix key flaws
• Researchers uncover security holes too big to believe


It was late November and former Intel Corp. engineer Thomas Prescher was enjoying beers and burgers with friends in Dresden, Germany, when the conversation turned, ominously, to semiconductors.

Months earlier, cybersecurity researcher Anders Fogh had posted a blog suggesting a possible way to hack into chips powering most of the world’s computers, and the friends spent part of the evening trying to make sense of it. The idea nagged at Prescher, so when he got home he fired up his desktop computer and set about putting the theory into practice. At 2 a.m., a breakthrough: he’d strung together code that reinforced Fogh’s idea and suggested there was something seriously wrong.

“My immediate reaction was, ‘It can’t be true, it can’t be true,’” Prescher said.

Last week, his worst fears were proved right when Intel, one of the world’s largest chipmakers, said all modern processors can be attacked by techniques dubbed Meltdown and Spectre, exposing crucial data, such as passwords and encryption keys. The biggest technology companies, including Microsoft Corp., Apple Inc., Google and Amazon.com Inc. are rushing out fixes for PCs, smartphones and the servers that power the internet, and some have warned that their solutions may dent performance in some cases.

Prescher was one of at least 10 researchers and engineers working around the globe -- sometimes independently, sometimes together -- who uncovered Meltdown and Spectre. Interviews with several of these experts reveal a chip industry that, while talking up efforts to secure computers, failed to spot that a common feature of their products had made machines so vulnerable.

"It makes you shudder," said Paul Kocher, who helped find Spectre and started studying trade-offs between security and performance after leaving a full-time job at chip company Rambus Inc. last year. "The processor people were looking at performance and not looking at security." Kocher still works as an adviser to Rambus.

All processor makers have tried to speed up the way chips crunch data and run programs by making them guess. Using speculative execution, the microprocessor fetches data it predicts it’s going to need next.

Spectre fools the processor into running speculative operations -- ones it wouldn’t normally perform -- and then uses measurements of how long the hardware takes to retrieve data to infer what that data contains. Meltdown exposes data directly by undermining the way information in different applications is kept separate by what’s known as a kernel, the key software at the core of every computer.
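To make the timing-inference step concrete, here is a deliberately simplified toy model in Python. It is a pure simulation, not an exploit: real attacks measure CPU cache latency in nanoseconds, whereas here the "cache" is just a Python set, and the secret value, victim routine and probe range are all invented for illustration.

# Toy model of the inference step behind Spectre-style attacks (simulation only).
SECRET = 42          # the byte the "victim" leaks; invented for illustration
cached = set()       # stands in for which cache lines are warm

def victim():
    # In a real attack, speculatively executed code leaves this trace
    # in the CPU cache; here we simply record it.
    cached.add(SECRET)

def access_time(index):
    # Simulated latency: warm entries are "fast", cold ones "slow".
    return 0.0001 if index in cached else 0.001

victim()
timings = {i: access_time(i) for i in range(256)}
recovered = min(timings, key=timings.get)   # the fastest probe index
print(f"recovered byte: {recovered} (secret was {SECRET})")

The point is only that the attacker never reads the secret directly; it is reconstructed entirely from which memory access turned out to be fast.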

Researchers began writing about the potential for security weaknesses at the heart of central processing units, or CPUs, at least as early as 2005. Yuval Yarom, at the University of Adelaide in Australia, credited with helping discover Spectre last week, penned some of this early work.

By 2013, other research papers showed that CPUs let unauthorized users see the layout of the kernel, a set of instructions that guide how computers perform key tasks like managing files and security and allocating resources. This vulnerability became known as a KASLR break and was the foundation for some of last week’s revelations.

In 2016, research by Felix Wilhelm and others demonstrated how an early version of speculative execution could make chips vulnerable to data leaks. Jann Horn, a young Google researcher credited with first reporting the Meltdown and Spectre weaknesses, was inspired by some of this work, according to a recent tweet.

At Black Hat USA, a major cybersecurity conference in Las Vegas, in August 2016 a team from Graz Technical University presented their research from earlier in the year on a way to prevent attacks against the kernel memory of Intel chips. One of the group, Daniel Gruss, shared a hotel room with Fogh, a malware researcher at G Data Advanced Analytics, an IT security consulting firm. Fogh had long been interested in "side-channel" attacks, ways to use the structure of chips to force computers to reveal data.

Fogh and Gruss stayed up late at night discussing the theoretical basis for what would later become Spectre and Meltdown. But, like Prescher more than a year later, the Graz team was skeptical this was a real flaw. Gruss recalls telling Fogh that the chipmakers would have uncovered such a glaring security hole during testing and would never have shipped chips with a vulnerability like that.

Fogh made the case again at Black Hat Europe, in early November 2016 in London, this time to Graz researcher Michael Schwarz. The two discussed how side-channel attacks might overcome the security of "virtualized" computing, where single servers are sliced up into what looks, to users, like multiple machines. This is a key part of increasingly popular cloud services. It’s supposed to be secure because each virtual computing session is designed to keep different customers’ information separate even when it’s on the same server.

Despite Fogh’s encouragement, the Graz researchers still didn’t think attacks would ever work in practice. "That would be such a major f*ck-up by Intel that it can’t be possible," Schwarz recalled saying. So the team didn’t dedicate much time to it.

In January 2017, Fogh said he finally made the connection to speculative execution and how it could be used to attack the kernel. He mentioned his findings at an industry conference on Jan. 12, and in March he pitched the idea to the Graz team.

By the middle of the year, the Graz researchers had developed a software security patch they called KAISER that was designed to fix the KASLR break. It was made for Linux, the world’s most popular open-source operating system. Linux controls servers -- making it important for corporate computing -- and also supports the Android operating system used by the majority of mobile devices. Being open source, all suggested Linux updates must be shared publicly, and KAISER was well received by the developer community. The researchers did not know it then, but their patch would turn out to help prevent Meltdown attacks.

Fogh published his blog on July 28 detailing efforts to use a Meltdown-style attack to steal information from a real computer running real software. He failed, again fueling doubts among other researchers that the vulnerabilities could really be used to steal data from chips. Fogh also mentioned unfinished work on what would become Spectre, calling it "Pandora’s Box." That got little reaction, too.

The Graz team’s attitude quickly changed, though, as summer turned to fall. They noticed a spike in programming activity on their KAISER patch from researchers at Google, Amazon and Microsoft. These giants were pitching updates and trying to persuade the Linux community to accept them -- sometimes without being open about their reasons.

“That made it a bit suspicious,” Schwarz said. Developers submitting specific Linux updates usually say why they’re proposing changes, "and on some of the things they didn’t explain. We wondered why these people were investing so much time and were working on it so hard to integrate it into Linux at any cost."

To Schwarz and his fellow researchers, there was only one explanation: A potentially much bigger attack method that could blow open these vulnerabilities, and the tech giants were scrambling to fix it secretly before every malicious hacker on Earth found out.

Unbeknownst to the Graz team and Fogh, a 22-year-old wunderkind at Alphabet Inc.’s Google called Jann Horn had independently discovered Spectre and Meltdown in April. He’s part of Google’s Project Zero, a team of crack security researchers tasked with finding "zero-day" security holes -- vulnerabilities that can be exploited before the affected vendors have had any time to fix them.

On June 1, Horn told Intel and other chip companies Advanced Micro Devices Inc. and ARM Holdings what he’d found. Intel informed Microsoft soon after. That’s when the big tech companies began working on fixes, including Graz’s KAISER patch, in private.

By November, Microsoft, Amazon, Google, ARM and Oracle Corp. were submitting so many of their own Linux updates to the community that more cybersecurity researchers began to realize something big -- and strange -- was happening.

Tests on the patches these tech giants were advocating showed serious implications for the performance of key computer systems. In one case, Amazon found that a patch increased the time it took to run certain operations by about 400 percent, and yet the cloud leader was still lobbying that every Linux user ought to take the fix, according to Gruss. He said this made no sense for their original KAISER patch, which would only ever impact a small sub-section of users.

Gruss and other researchers became more suspicious that these companies weren’t being completely honest about the rationale for their proposals. Intel said it is standard practice not to disclose vulnerabilities until a full remedy has been put in place. The chipmaker and other tech companies have also said their tests show minimal or no impact on performance, although certain unusual workloads may be slowed by as much as 30 percent.

In late November, another team of researchers at IT firm Cyberus Technology became convinced that Intel had been telling its main clients, such as Amazon and Microsoft, all about the issue, while keeping the full scale of the crisis hidden from Linux development groups.

Prescher, the former Intel engineer, was part of the Cyberus team. After his late-night discovery in Dresden, he told Cyberus Chief Technology Officer Werner Haas what he’d found. Before their next in-person meeting, Haas made sure to wear a Stetson, so he could say to Prescher, "I take my hat off to you."

On Dec. 3, a quiet Sunday afternoon, the Graz researchers ran similar tests, proving Meltdown attacks worked. "We said, ‘Oh God, that can’t be possible. We must have a mistake. There shouldn’t be this sort of mistake in processors,’" recalled Schwarz.

The team told Intel the next day -- around the same time Cyberus informed the chip giant. They heard nothing for more than a week. "We were amazed -- there was no response," Schwarz said.

On Dec. 13, Intel let Cyberus and the Graz team know that the problems they found had already been reported by Horn and others. The chipmaker was initially reluctant to let them contribute. But after being pressed, Intel put both groups in touch with the other researchers involved. They all began coordinating a broader response, including releasing updated patches at the same time.

Once inside the secret circle of the large tech companies, the Graz researchers expected they would have the typical 90 days to come up with comprehensive fixes before telling the world. "They said we know it, but will publish it at the beginning of January," Schwarz said. It had been roughly 180 days since Google unearthed it, and keeping such issues under wraps for more than 90 days is unusual, he noted.

A group of 10 researchers coalesced and kept in touch via Skype every two days. “It was a lot of work on Christmas. There wasn’t a single day where we didn’t work. Holidays were canceled," Schwarz said.

Their public security updates soon attracted the attention of The Register, a U.K.-based technology news site, which wrote a story on Jan. 2 saying Intel products were at risk.

Usually, flaws and their fixes are announced at the same time, so hackers don’t quickly abuse the vulnerabilities. This time, the details emerged early and patches weren’t ready. That led to a day and a night of frantic activity to arrange what all the companies would say in unison.

Intel put the statement out at 12 p.m. Pacific Time on Jan. 3 and held a conference call two hours later to explain what it said was a problem that could impact the whole industry.

The solidarity was a mirage, though. Rival AMD issued its own statement shortly before Intel’s call began, saying its products were at little or no risk of being exploited. After more than six months of coordinated work, Intel went into lock-down in the final hours and didn’t consult with its erstwhile partners to speed up a public statement, according to a person familiar with what happened.

Underlining the panic that spread following the announcement, Intel had to follow up with calming statements. The next day, the company said it had made "significant progress" in deploying updates, adding that by the end of this week 90 percent of processors made in the last five years will have been secured.

Steve Smith and Donald Parker, the two Intel executives questioned on the call, argued things progressed in the measured way that Intel approaches any report of a threat to its technology. The difference this time was that their work ended up "in the spotlight,” according to Smith. They would have preferred to complete the work in secret.

Indeed, Intel’s reticence rankled some outside researchers. The company operates on a need-to-know basis, said Cyberus’s Haas, who worked at Intel for about a decade. "I’m not a huge fan of that."

“Our first priority has been to have a complete mitigation in place,” said Intel’s Parker. “We’ve delivered a solution.”

Some in the cybersecurity community aren’t so sure. Kocher, who helped discover Spectre, thinks this is just the beginning of the industry’s woes. Now that new ways to exploit chips have been exposed, there’ll be more variations and more flaws that will require more patches and mitigation.

"This is just like peeling the lid off the can of worms," he said.

— With assistance by Mark Bergen, and Dina Bass
https://www.bloomberg.com/news/artic...try-s-meltdown





Cloud Companies Consider Intel Rivals after Security Flaws Found
Salvador Rodriguez, Stephen Nellis

Some of Intel Corp’s data centre customers, whose thousands of computers run cloud networks, are exploring using microchips from the market leader’s rivals to build new infrastructure after the discovery of security flaws affecting most chips.

Whether Intel sees a slew of defectors or is forced to offer discounts, the company could take a hit to one of its fastest growing business units. Intel chips back 98 percent of data centre operations, according to industry consultancy IDC.

Security researchers last week disclosed flaws, dubbed Meltdown and Spectre, that could allow hackers to steal passwords or encryption keys on most types of computers, phones and cloud-based servers.

Microsoft Corp said on Tuesday the patches necessary to secure the threats could have a significant performance impact on servers.

Intel will help customers find the best approach in terms of security, performance and compatibility, it said in a statement on Tuesday. “For many customers, the performance element is foremost, and we are sharply focused on doing all we can to ensure that we meet their expectations.”

Alternatives include Advanced Micro Devices, which shares with Intel a chip architecture called x86, or chips based on technology from ARM Holdings or graphics processing chips, which were developed for different tasks than Intel and AMD’s central processing units, or CPUs.

For Gleb Budman’s company, San Mateo-based online storage firm Backblaze, building with ARM chips would not be difficult.

“If ARM provides enough computing power at lower cost or lower power than x86, it would be a strong incentive for us to switch,” said Budman. “If the fix for x86 results in a dramatically decreased level of performance, that might increasingly push in favor of switching to ARM.”

Infinitely Virtual, a Los Angeles-based cloud computing vendor, is counting on Intel to replace equipment or offer a rebate to make up for the loss in computing power, Chief Executive Adam Stern said in an interview.

“If Intel doesn’t step up and do something to make this right then we’re going to have to punish them in the marketplace by not purchasing their products,” said Stern, whose company relies exclusively on Intel processors.

Cloud providers said swapping out previously installed Intel chips for rivals’ would be too complex, but moving forward they could expand their networks using alternatives. Moving from Intel to AMD is easiest since AMD and Intel chips share a common core technology called the x86 instruction set, they said.

ARM-based chips lag the speed of Intel’s x86 based chips for tasks such as searches, and software would have to be rewritten.

Nvidia Corp’s so-called graphical processing units, or GPUs, are not a direct replacement for Intel’s CPUs, but they are taking over the CPU’s role for new types of work like image recognition and speech recognition.

Major technology companies had been experimenting with Intel alternatives even before the security flaws were revealed.

Last March, Microsoft committed to using ARM processors for its Azure cloud service, and in December, Microsoft Azure deployed Advanced Micro Devices processors in its data centres.

Alphabet Inc’s Google said in 2016 that it was designing a server based on International Business Machines Corp’s Power9 processor. And Amazon.com Inc’s Amazon Web Services chose AMD graphics processing units for a graphics design service announced in September.

Both Qualcomm Inc and Cavium Inc are developing ARM chips aimed at data centres. Cavium said it aimed to rival the performance of Intel chips for applications like databases and the content-delivery networks that help speed things like how fast online videos load.

Cavium is working with Microsoft and “several other cloud” vendors, said Gopal Hegde, vice president of the data centre processor group. Cavium and ARM rival Qualcomm work together to reduce the amount of software that has to be rewritten for ARM chips.

Cloudflare, a San Francisco cloud network company, has been evaluating ARM chips. The new security patches have not slowed its performance, but it will use the security issues as an opportunity to re-evaluate its use of Intel products, said Chief Technology Officer John Graham-Cumming.

Reporting by Salvador Rodriguez and Stephen Nellis; Editing by Peter Henderson and Richard Chang
https://uk.reuters.com/article/uk-cy...KBN1EZ1A8?il=0





Nvidia: Using Cheap GeForce, Titan GPUs in Servers? Haha, Nope!

Nice try, but no, you're gonna have to cough up for these expensive data center chips
Katyanna Quach

Nvidia has banned the use of its GeForce and Titan gaming graphics cards in data centers – forcing organizations to fork out for more expensive gear, like its latest Tesla V100 chips.

The chip-design giant updated its GeForce and Titan software licensing in the past few days, adding a new clause that reads: “No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted.”

In other words, if you wanted to bung a bunch of GeForce GPUs into a server box and use them to accelerate math-heavy software – such as machine learning, simulations and analytics – then, well, you can't without breaking your licensing agreement with Nvidia. Unless you're doing trendy blockchain stuff.

A copy of the license in the Google cache, dated December 31, 2017, shows no mention of the data center ban. Open the page today, and, oh look, data center use is verboten.

To be precise, the controversial end-user license agreement (EULA) terms cover the drivers for Nvidia's GeForce GTX and Titan graphics cards. However, without Nvidia's proprietary drivers, you can't unlock the full potential of the hardware, so Nv has you over a barrel.

It's not just a blow for people building their own servers and data centers, it's a blow for any computer manufacturer – such as HPE or Dell – that hoped to flog GPU-accelerated servers, using GTX or Titan hardware, much cheaper than Nvidia charges for, say, its expensive DGX family of GPU-accelerated servers. A DGX-1 with Tesla V100 chips costs about $150,000 from Nvidia. A GeForce or Titan-powered box would cost much less albeit with much less processing power.

The high-end GeForce GTX 1080 Ti graphics card – aimed at gamers rather than deep-learning data scientists – uses Nv's Pascal architecture, and only costs $699 (~£514) a pop. Meanwhile, the latest Tesla V100 card that's flogged to data centers costs over $9,000 (~£6,620).

In terms of 64-bit double-precision floating-point math calculations, the V100 utterly smokes the 1080 Ti: about 7 TFLOPS from the V100 versus 0.355 TFLOPS from its GeForce cousin. The V100 similarly smashes the video gaming part on 32-bit single and 16-bit half-precision floating-point math. The top-end GeForce Titan Xp is similarly dominated by the V100.
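For what it's worth, a quick back-of-the-envelope comparison using only the prices and FP64 figures quoted above (a sketch, not a benchmark):

# Cost per double-precision TFLOP, from the figures quoted in this article.
cards = {
    "Tesla V100":          (9000, 7.0),     # (price in USD, FP64 TFLOPS)
    "GeForce GTX 1080 Ti": (699, 0.355),
}
for name, (price, tflops) in cards.items():
    print(f"{name}: ~${price / tflops:,.0f} per FP64 TFLOP")

On double precision the Tesla is actually the better value (roughly $1,300 versus $2,000 per TFLOP); the GeForce's appeal is for workloads that don't need FP64 at all, which is exactly the point the researcher below makes.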

However, one senior techie who supports an academic medical research facility and alerted us to the licensing change, said boffins don't necessarily need Tesla powerhouses, and that gamer-grade graphics processors can be good enough, depending on the kind of work being performed and the budget available.

“This could potentially have a huge impact in research institutes that use the lower-cost GPUs to process data," they told us.

"Much of this data does not actually need to double-point precision of the Tesla cards. This is a shocking sneaky way to suddenly introduce limitations to their product. Most places would be happy to acknowledge that using a consumer product in a server may invalidate the warranty, but to limit in such a way that it would leave researchers open to possible legal threats is frankly disgusting."

A spokesperson for Nvidia told The Register that the licensing tweak was to stop the “misuse” of GeForce and Titan chips in “demanding, large-scale enterprise environments.” In other words, Nv doesn't think gaming cards are a good fit for hot, crowded, demanding and well-funded server warehouses.

The spokesperson said:

GeForce and TITAN GPUs were never designed for data center deployments with the complex hardware, software, and thermal requirements for 24x7 operation, where there are often multi-stack racks. To clarify this, we recently added a provision to our GeForce-specific EULA to discourage potential misuse of our GeForce and TITAN products in demanding, large-scale enterprise environments.

NVIDIA addresses these unique mechanical, physical, management, functional, reliability, and availability needs of servers with our Tesla products, which include a three-year warranty covering data center workloads, NVIDIA enterprise support, guaranteed continuity of supply and extended SKU life expectancy for data center components. This has been communicated to the market since the Tesla products were first released.

Will Nvidia sic its legal dogs on those who break the license by installing gaming-grade graphics processors in data centers? Nvidia claimed it does not plan to ban non-commercial uses and research.

“We recognize that researchers often adapt GeForce and TITAN products for non-commercial uses or their other research uses that do not operate at data center scale. NVIDIA does not intend to prohibit such uses,” a spokesperson for the California chip architects said.

We asked Nvidia to clarify what it defined as a data center, and the spokesperson admitted that “there are many different types of data centers.”

“In contrast to PCs and small-scale LANs used for company and academic research projects, data centers typically are larger-scale deployments, often in multi-server racks, that provide access to always-on GPUs to many users,” the rep added.

At the moment, it sounds as though rule-breakers will get a stern talking to if Nvidia discovers them flouting its license, and be asked to get out their checkbooks – or else.

"Whenever an actual or proposed use of our drivers that is contrary to the EULA is brought to our attention, NVIDIA takes steps to work with the user to understand the reasons for each unlicensed use, and works to evaluate how we can best meet the user’s needs without compromising our standards for hardware and software performance and reliability," the spokesperson said.

"Whenever any user would like to use a GeForce or TITAN driver in a manner that may be unlicensed, they should contact NVIDIA enterprise sales to discuss the use and potential options. We expect that, working together with our user base on a case-by-case basis, we will be able to resolve any customer concerns."
https://www.theregister.co.uk/AMP/20...a_server_gpus/





Western Digital 'My Cloud' Devices have a Hardcoded Backdoor -- Stop Using these NAS Drives NOW!
Brian Fagioli

I must be honest -- I am starting to become fatigued by all of the vulnerabilities and security failures in technology nowadays. Quite frankly, between Spectre and Meltdown, I don't even want to use my computer or devices anymore -- I feel exposed.

Today, yet another security blunder becomes publicized, and it is really bad. You see, many Western Digital My Cloud NAS drives have a hardcoded backdoor, meaning anyone can access them -- your files could be at risk. It isn't even hard to take advantage of it -- the username is "mydlinkBRionyg" and the password is "abc12345cba" (without quotes). To make matters worse, it was disclosed to Western Digital six months ago and the company apparently did nothing until November 2017. Let's be realistic -- not everyone stays on top of updates, and a backdoor never should have existed in the first place.

"Exploiting this issue to gain a remote shell as root is a rather trivial process. All an attacker has to do is send a post request that contains a file to upload using the parameter 'Filedata[0]', a location for the fileto be upload to which is specified within the 'folder' parameter, and of course a bogus 'Host' header," says James Bercegay, GulfTech Research and Development.

Bercegay further explains, "The triviality of exploiting this issues makes it very dangerous, and even wormable. Not only that, but users locked to a LAN are not safe either. An attacker could literally take over your WDMyCloud by just having you visit a website where an embedded iframe or img tag make a request to the vulnerable device using one of the many predictable default hostnames for the WDMyCloud such as 'wdmycloud' and 'wdmycloudmirror' etc."

But wait -- why does a Western Digital product have a hardcoded username containing dlink? Weird right? The researchers did some investigating and found that the WD NAS devices once shared code with D-Link "Sharecenter" devices. Interestingly, these D-Link devices were issued patched firmware in 2014 and no longer contain the backdoor.

Bercegay shares the timeline below. As you can see, Western Digital had plenty of time to fix this. It was reported in June of last year, but apparently, nothing was done for many months.

• 2017-06-10: Contacted vendor via web contact form. Assigned case #061117-12088041.
• 2017-06-12: Support member Gavin referred us to WDC PSIRT. We immediately sent a PGP encrypted copy of our report to WDC PSIRT.
• 2017-06-13: Received confirmation of report from Samuel Brown.
• 2017-06-16: A period of 90 days is requested by vendor until full disclosure.
• 2017-12-15: Zenofex posts disclosure of the upload bug independently of my research
• 2018-01-03: Public Disclosure

If you aren't sure if your My Cloud Storage device is affected, please check against the below list. If your model is listed, you should unplug it from Ethernet immediately. Apparently, firmware 2.30.172 (issued November 2017) fixes the bug, so do not reconnect to the internet until you are sure that your device is updated and the vulnerability is patched.

• MyCloud
• MyCloudMirror
• My Cloud Gen 2
• My Cloud PR2100
• My Cloud PR4100
• My Cloud EX2 Ultra
• My Cloud EX2
• My Cloud EX4
• My Cloud EX2100
• My Cloud EX4100
• My Cloud DL2100
• My Cloud DL4100

Please know, even if you updated the firmware in November, your files could have been accessed by nefarious people before then -- for years. That is very scary.
https://betanews.com/2018/01/07/west...loud-backdoor/





Taiwanese Police Give Cyber-Security Quiz Winners Infected Devices
BBC

Police have apologised after giving infected memory sticks as prizes in a government-run cyber-security quiz.

Taiwan's national police agency said 54 of the flash drives it gave out at an event highlighting the government's cybercrime crackdown contained malware.

The virus, which can steal personal data and has been linked to fraud, was added inadvertently, it said.

The Criminal Investigation Bureau (CIB) apologised for the error and blamed the mishap on a third-party contractor.

It said 20 of the drives had been recovered.

Around 250 flash drives were given out at the expo, which was hosted by Taiwan's Presidential Office from 11-15 December and aimed to highlight the government's determination to crack down on cybercrime.

Cyber-fraud ring

All the drives were manufactured in China but the CIB ruled out state-sponsored espionage, saying instead that the bug had originated from a Taiwan-based supplier.

It said a single employee at the firm had transferred data onto 54 of the drives to "test their storage capacity", infecting them in the process.

The malware, identified as the XtbSeDuA.exe program, was designed to collect personal data and transmit it to a Polish IP address which then bounces it to unidentified servers.

The CIB said it had been used by a cyber-fraud ring uncovered by Europol in 2015.

Only older, 32-bit computers are vulnerable to the bug and common anti-virus software can detect and quarantine it, it said.

The server involved in the latest infections had been shut down, it said.

In May, IBM admitted it had inadvertently shipped malware-infected flash drives to some customers.

The computer maker said drives containing its Storwize storage system had been infected with a trojan and urged customers to destroy them.

At the time, it declined to comment on how the malware ended up on the flash drives or how many customers had been affected.

The trojan, part of the Reconyc family, bombards users with pop-ups and slows down computer systems.

It is known to target users in Russia and India.
http://www.bbc.com/news/technology-42634571





WhatsApp Security Flaws Could Allow Snoops to Slide Into Group Chats

Millions of people trust WhatsApp's end-to-end encryption. But security researchers say a flaw could put some group chats at risk of infiltration.
Andy Greenberg

When WhatsApp added end-to-end encryption to every conversation for its billion users two years ago, the mobile messaging giant significantly raised the bar for the privacy of digital communications worldwide. But one of the tricky elements of encryption—and even trickier in a group chat setting—has always been ensuring that a secure conversation reaches only the intended audience, rather than some impostor or infiltrator. And according to new research from one team of German cryptographers, flaws in WhatsApp make infiltrating the app's group chats much easier than ought to be possible.

At the Real World Crypto security conference Wednesday in Zurich, Switzerland, a group of researchers from the Ruhr University Bochum in Germany plan to describe a series of flaws in encrypted messaging apps including WhatsApp, Signal, and Threema. The team argues their findings undermine each app's security claims for multi-person group conversations to varying degrees.

But while the Signal and Threema flaws they found were relatively harmless, the researchers unearthed far more significant gaps in WhatsApp's security: They say that anyone who controls WhatsApp's servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

"The confidentiality of the group is broken as soon as the uninvited member can obtain all the new messages and read them," says Paul Rösler, one of the Ruhr University researchers who co-authored a paper on the group messaging vulnerabilities. "If I hear there's end-to-end encryption for both groups and two-party communications, that means adding of new members should be protected against. And if not, the value of encryption is very little."

That any would-be eavesdropper would have to control the WhatsApp server limits the spying method to sophisticated hackers who could compromise those servers, WhatsApp staffers, or governments who legally coerce WhatsApp to give them access. But the premise of so-called end-to-end encryption has always been that even a compromised server shouldn't expose secrets. Only people in a conversation should be able to read WhatsApp's messages, not the servers themselves.

"If you build a system where everything comes down to trusting the server, you might as well dispense with all the complexity and forget about end-to-end encryption," says Matthew Green, a cryptography professor at Johns Hopkins University who reviewed the Ruhr University researchers' work. "It's just a total screwup. There's no excuse."

Group Threat

The German researchers say their WhatsApp attack takes advantage of a simple bug. Only an administrator of a WhatsApp group can invite new members, but WhatsApp doesn't use any authentication mechanism for that invitation that its own servers can't spoof. So the server can simply add a new member to a group with no interaction on the part of the administrator, and the phone of every participant in the group then automatically shares secret keys with that new member, giving him or her full access to any future messages. (Messages sent prior to an illicit invitation, luckily, still can't be decrypted.)

Everyone in the group would see a message that a new member had joined, seemingly at the invitation of the unwitting administrator. If the administrator is watching closely, he or she could warn the group's intended members about the interloper and the spoofed invitation message.

But the Ruhr University researchers and Johns Hopkins' Green point out several tricks that could be used to delay detection. Once an attacker with control of the WhatsApp server had access to the conversation, he or she could also use the server to selectively block any messages in the group, including those that ask questions, or provide warnings about the new entrant.

"He can cache all the message and then decide which get sent to whom and which not," says Rösler. And in groups with multiple administrators, the hijacked server could spoof different messages to each administrator, making it appear that another one had invited the eavesdropper, so that none raises an alarm. It could even prevent any administrator's attempt to remove the eavesdropper from the group if discovered.

Some Limits

In a phone call with WIRED, a WhatsApp spokesperson confirmed the researchers' findings, but emphasized that no one can secretly add a new member to a group—a notification does go through that a new, unknown member has joined the group. The staffer added that if an administrator spots a fishy new addition to a group, they can always tell other users via another group, or in one-to-one messages. And the WhatsApp spokesperson also noted that preventing the Ruhr University researchers' attack would likely break a popular WhatsApp feature that allows anyone to join a group simply by clicking on a URL.

“We've looked at this issue carefully," a WhatsApp spokesperson wrote in an email. "Existing members are notified when new people are added to a WhatsApp group. We built WhatsApp so group messages cannot be sent to a hidden user. The privacy and security of our users is incredibly important to WhatsApp. It's why we collect very little information and all messages sent on WhatsApp are end-to-end encrypted.”

To be fair, this technique wouldn't be a very stealthy strategy in the long run for government spying. Sooner or later, users would likely notice that unexpected strangers were showing up in their chats. But that possibility of detection isn't an adequate solution to WhatsApp's underlying problem, argues Johns Hopkins' Green. "That's like leaving the front door of a bank unlocked and then saying no one will rob it because there’s a security camera," Green says. "It's dumb."

The Ruhr University researchers say they alerted WhatsApp to the problem with group messaging security last July. In response to their report, WhatsApp's staff told the researchers the bug they'd found didn't even qualify for the so-called bug bounty program run by Facebook, WhatsApp's corporate owner, in which security researchers are paid for reporting hackable flaws in the company's software.

For some of WhatsApp's users, the stakes of the app's security could be high. WhatsApp's convenient group messaging system, in combination with its encryption promises, has made it a popular tool for "whisper networks" of grassroots organizing around sensitive or dangerous topics. Victims of sexual abuse and harassment have used it to organize campaigns against abusers, for instance. So have political insiders and the embattled White Helmets, the volunteer rescue brigades in Syria who are often targeted by the ruling regime.

But the shoddy security around WhatsApp's group chats should make its most sensitive users wary of interlopers, Rösler argues. If WhatsApp were to comply with a government request—in the US or abroad—agents could join any private group and listen along.

Smaller Problems

The researchers dug up less serious flaws in the more specialized secure messaging apps Signal and Threema, too. They warn that Signal allows the same group chat attack as WhatsApp, letting uninvited eavesdroppers join groups. But in Signal's case, that eavesdropper would have to not only control the Signal server, but also know a virtually unguessable number called the Group ID. That essentially blocks the attack, unless the Group ID can be obtained from one of the group member's phones—in which case the group is likely already compromised. The researchers say that Open Whisper Systems, the non-profit that runs and maintains Signal, nonetheless responded to their work, saying that it's currently redesigning how Signal handles group messaging. Open Whisper Systems declined to comment on the record to WIRED about the Ruhr researchers' findings.

For Threema, the researchers found even smaller bugs: An attacker who controls the server can replay messages or add users back into a group who have been removed. The researchers say Threema responded to their findings with a fix in an earlier version of its software.

As for WhatsApp, the researchers write that the company could fix its more egregious group chat flaw by adding an authentication mechanism for new group invitations. Using a secret key only the administrator possesses to sign those invitations could let the admin prove his or her identity and prevent the spoofed invites, locking out uninvited guests. WhatsApp has yet to take their advice.
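As a rough illustration of what such an authentication mechanism could look like, here is a minimal sketch using Ed25519 signatures from the third-party Python cryptography package. The group ID, member name and message format are invented for illustration; this is not WhatsApp's actual protocol, just the general shape of the researchers' suggestion.

# Minimal sketch: admin-signed group invitations (illustrative only).
# Requires the third-party package: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Only the group administrator holds the private key; every member holds
# the matching public key, distributed when the group was created.
admin_key = Ed25519PrivateKey.generate()
admin_pub = admin_key.public_key()

def make_invite(group_id, new_member):
    payload = f"{group_id}:{new_member}".encode()
    return payload, admin_key.sign(payload)        # admin signs the invite

def accept_invite(payload, signature):
    try:
        admin_pub.verify(signature, payload)       # members verify before adding anyone
        return True
    except InvalidSignature:
        return False                               # reject server-forged invitations

payload, sig = make_invite("group-1234", "mallory")
print(accept_invite(payload, sig))                 # True: genuine admin invite
print(accept_invite(payload, b"\x00" * 64))        # False: spoofed invitation

Because the server never sees the admin's private key, it could still relay invitations but could no longer mint them on its own, which is the property the researchers say is missing today.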

Until they do, WhatsApp's most sensitive users should consider sticking with one-to-one conversations, or switching to a more secure group messaging app like Signal. Otherwise, they'd be wise to keep a vigilant eye out for any new entrants sliding into their private conversations. Until an administrator actively vouches for that newcomer, there's a small chance he or she might just be something other than a new friend.
https://www.wired.com/story/whatsapp...n-group-chats/





Huawei's Big US Push is in Tatters after AT&T Cancelled a Distribution Deal for its Latest Phone
Shona Ghosh

• AT&T has reneged on Huawei's first ever distribution deal with a US carrier, leaving the firm's expansion ambitions in tatters.
• Huawei is reportedly trying to secure a new deal with rival Verizon, but political pressure from US politicians may mean the firm will struggle to find a new partner.
• Huawei is the fourth biggest smartphone maker in the world, and hoped to finally crack the US market with the launch of the new Mate 10 Pro flagship phone.
• It looks like the firm will launch the phone without a carrier partner — but most US consumers only buy their phones through a carrier.
• The collapse of the deal coincides with US politicians expressing concerns that Huawei is a security threat.

Chinese smartphone maker Huawei hit a major stumbling block in its US expansion plans, after its first ever deal with an American carrier fell through due to reported political pressure. A second deal is also looking shaky.

Huawei had been due to launch its new flagship smartphone, the Huawei Mate 10 Pro, with AT&T in February. But according to The Wall Street Journal, the carrier backed out of the deal without any explanation. AT&T is the second biggest carrier in the US, and the collapse of the deal essentially shuts Huawei out of the US smartphone market.

A second deal with Verizon is looking unlikely, with the carrier also under political pressure to cancel a planned summer launch of the Mate 10 Pro, according to Android Police.

AT&T and Verizon have not commented, but Huawei said it would be launching the Mate 10 Pro in the US without carrier partners.

AT&T's cancellation comes weeks after the US Senate and House Intelligence committees reportedly sent letters to the FCC arguing that Huawei was a security threat. The letters also reportedly raised concerns about US carrier deals with the smartphone maker.

It's awkward timing for Huawei, which launched the Mate 10 Pro at the Consumer Electronics Show this week. During the launch, consumer CEO Richard Yu said the firm had the "highest standard in privacy and security" and said the carrier deals were a "big loss" for customers, who now have less choice.

Huawei said it would sell the Mate 10 Pro in the US unlocked, meaning it won't be attached to any particular carrier. Most US consumers don't buy unlocked phones though, instead going through carriers to buy new phones.

Huawei is the fourth biggest smartphone maker globally due to its massive popularity in Asia, according to IDC. It's also made strong inroads in the UK, and is the third biggest brand behind Samsung and Apple, according to Counterpoint. But it's never managed to crack the US, and the firm will have to rethink its expansion plans after this latest blow.

This isn't the first time US politicians have suspected Huawei of being a security threat, though they have never offered proof in public.

Back in 2012, when Huawei was better known for making broadband equipment, the US warned that the Chinese government might be using the firm's kit to spy on foreign countries. At the time, Huawei hit out at international "protectionism." The UK's security watchdog also worried about a telecoms equipment deal between BT and Huawei. Huawei eventually established a UK office to probe its own kit for security flaws, overseen by the government.
http://www.newstimes.com/technology/...T-12486917.php





Some Chinese Apple Users Warned by Firm on Dodging New Data Law

Some Chinese users of Apple Inc’s products who have created Apple IDs overseas to circumvent a new law that requires their personal data to be stored within China say they have been warned by the tech giant that they risk losing the data.

China introduced a new cyber security law on June 1 that imposes tougher controls over data than in Europe and the United States, including mandating that companies store all data within China and pass security reviews.

Apple announced last year that it will migrate all user data from Chinese iCloud accounts to local servers run by Guizhou-Cloud Big Data Industry Co Ltd (GCBD), which is part of a state-backed infrastructure project overseen by local Communist Party officials.

The move is scheduled to happen on Feb. 28, and Apple this week notified users of the shift via emails linked to Chinese iCloud accounts.

However, despite earlier assurances by Apple that only local accounts will be affected, five Chinese users who had set up Apple iCloud accounts using U.S. emails, payment methods and addresses said they had received notices that they needed to opt into the transfer or risk losing their data. Their U.S. accounts did not contain any of their Chinese information, they said.

“As far as they [Apple] know I’m a U.S. resident with a U.S. address, phone, email and payment details,” one user in Beijing surnamed Liu, 26, told Reuters. He declined to share his full name in case his secondary account is identified and closed.

“How could they possibly judge that I’m a Chinese citizen?”

It was not immediately clear how many users in all received these warnings from Apple or how the company identified the users to send the notices.

But thousands of users have taken to social media to criticise the firm and crowdsource workaround solutions to the new rules, which don’t give users an opt-out option that would allow them to keep their data.

Apple declined to comment on why users with U.S. Apple IDs received the notices.

Opening foreign Apple ID accounts is one way that Chinese users are able to evade strict censorship controls by accessing overseas App Stores which host virtual private network (VPN) apps and other features.

Apple last year removed dozens of local and foreign VPN apps from its Chinese app store, as well as popular media apps and communication tools including Microsoft Corp’s Skype, cutting off a major resource for Chinese users looking to access uncensored content.

Chinese authorities maintain that the bans on foreign content - which include popular U.S. services by Facebook Inc, Alphabet Inc and Twitter Inc - are designed to maintain social stability and eliminate threats to China’s sovereignty.

Apple said last year that it has strong data privacy protections in place for its Chinese data operations, and that no backdoors will be created in any of its systems.

However, some users say they are unconvinced and will disable their accounts. Other Apple loyalists told Reuters that they feared that refusing the new data terms or attempting risky solutions could mean losing their accounts or data.

“I’m pretty sure this is a technical issue, [but] even if you choose a different region now, you will still be unclear where your data will go after the handover,” said one user surnamed Wang, who told Reuters he has been using U.S. Apple IDs since before iPhones were available in China.

“My iCloud service is paid in U.S. dollars and the billing address is in Portland, how can this data be in a Chinese server?”

Reporting by Cate Cadell; Editing by Brenda Goh and Muralikumar Anantharaman
https://uk.reuters.com/article/uk-ch...-idUKKBN1F119J





Cisco Can Now Sniff Out Malware Inside Encrypted Traffic

This is Switchzilla’s kit-plus-cloud plan in action
Simon Sharwood

Cisco has switched on latent features in its recent routers and switches, plus a cloud service, that together make it possible to detect the fingerprints of malware in encrypted traffic.

Switchzilla has not made a dent in transport layer security (TLS) to make this possible. Instead, as we reported in July 2016, Cisco researchers found that malware leaves recognisable traces even in encrypted traffic. The company announced its intention to productise that research last year and this week exited trials to make the service – now known as Encrypted Traffic Analytics (ETA) – available to purchasers of its 4000 Series Integrated Services Routers, the 1000 Series Aggregation Services Routers and the Cloud Services Router 1000V.

Those devices can’t do the job alone: users need to sign up for Cisco’s StealthWatch service and let traffic from their kit flow to a cloud-based analytics service that inspects traffic and uses self-improving machine learning algorithms to spot dodgy traffic.

Some of the techniques used to spot malware’s activities aren’t super-sophisticated: Cisco looks at unencrypted handshake packets for known dodgy destinations, searches for things like self-signed certificates and other signs of either sloppiness or slippery intentions.
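As a rough illustration of one of those simpler checks, the snippet below flags certificates whose issuer equals their subject, i.e. self-signed certificates, using the third-party Python cryptography package. This is a sketch of the general idea, not Cisco's actual ETA logic, and a self-signed certificate on its own proves nothing; it is just one signal a classifier can weigh alongside others.

# Illustrative heuristic: flag self-signed certificates seen in a TLS handshake.
# Requires the third-party package: pip install cryptography
from cryptography import x509

def looks_self_signed(pem_bytes):
    cert = x509.load_pem_x509_certificate(pem_bytes)
    # In a self-signed certificate, the issuer and subject names are identical.
    return cert.issuer == cert.subject

# Usage (hypothetical file captured from the unencrypted part of a handshake):
# print(looks_self_signed(open("server_cert.pem", "rb").read()))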

The cloud service does the heavier lifting, with over 400 “classifiers” hunting for signs of malware at work.

To make the magic happen, Cisco users have to send metadata - parsed NetFlow data - to Switchzilla's cloud. By doing so, they'll get the ETA service and help it to improve by feeding it more data for its algorithms to consume and learn from.

The new tool has applications beyond defence, as it can also detect the encryption applied to traffic. That’s a useful function for organisations that must encrypt traffic to stay on the right side of industry or government regulations. Cisco has therefore geared up to sell ETA as a compliance tool as well as a malware-spotter.

ETA is already present in IOS XE 16.6 and Cisco says 50,000 of its customers have hardware capable of accessing the service today. They'll just need to turn it on and start sending telemetry to Cisco's cloud.

The company’s also looked at taking the tech beyond its hardware, with ETA as a service and ETA on fabrics already contemplated by Cisco suits.
http://www.theregister.co.uk/2018/01...ypted_traffic/





FBI Chief Calls Unbreakable Encryption 'Urgent Public Safety Issue'
Dustin Volz

The inability of law enforcement authorities to access data from electronic devices due to powerful encryption is an “urgent public safety issue,” FBI Director Christopher Wray said on Tuesday in remarks that sought to renew a contentious debate over privacy and security.

The FBI was unable to use technical tools to access data from nearly 7,800 devices in the fiscal year that ended Sept. 30, despite possessing proper legal authority to pry them open, a growing figure that impacts every area of the agency’s work, Wray said during a speech at a cyber security conference in New York.

“This is an urgent public safety issue,” Wray added, while saying that a solution is “not so clear cut.”

Technology companies and many digital security experts have said that the FBI’s attempts to require that devices allow investigators a way to access a criminal suspect’s cellphone would harm internet security and empower malicious hackers.

The comments at the International Conference on Cyber Security were among Wray’s first extensive remarks about encryption, which the FBI and local law enforcement have said for years bedevils countless investigations. Wray took over as FBI chief in August.

Reporting by Dustin Volz; Editing by Will Dunham
https://www.reuters.com/article/us-u...-idUSKBN1EY1S7





FBI Expert Lashes Apple 'Jerks' Over iPhone Security
Sam Varghese

A forensics expert from the FBI has lashed out at Apple, calling the company's security team a bunch of "jerks" and "evil geniuses" for making it more difficult to circumvent the encryption on its devices.

Stephen Flatley told the International Conference on Cyber Security in New York on Wednesday that one example of the way that Apple had made it harder for him and his colleagues to break into the iPhone was by recently making the password guesses slower, with a change in hash iterations from 10,000 to 10,000,000.

A report on the Motherboard website said Flatley explained that this change meant that the speed at which one could brute-force passwords went from 45 attempts a second to one every 18 seconds.

"Your crack time just went from two days to two months," he was quoted as saying.

“At what point is it just trying to one up things and at what point is it to thwart law enforcement? Apple is pretty good at evil genius stuff," Flatley added.

In 2016, the FBI clashed with Apple in court after the law enforcement agency obtained an order directing the company to help it access an iPhone belonging to one of the two people involved in killing 14 people in San Bernardino, California, in December the previous year.

The FBI finally withdrew its case, having got a third party — which many suspect is the Israeli firm Cellebrite — to break into the iPhone and obtain the data sought by the FBI.

There have been differing reports about what the FBI paid for gaining access to the device, with amounts ranging from US$15,000 to US$90,000 being cited.

Flatley praised Cellebrite, which sells cracking devices and similar technologies to authorities around the world.

“If you have another evil genius, Cellebrite, then maybe we can get into that front," he said.
https://www.itwire.com/security/8136...-security.html





Microsoft Partners with Signal to Bring End-To-End Encryption to Skype

Skype adds support for the Signal protocol
Catalin Cimpanu

In a move that surprised many, Microsoft and Open Whisper Systems (makers of the Signal app) announced today they are partnering to bring support for end-to-end (E2E) encrypted conversations to Skype.

The new feature, called Skype Private Conversations, has been rolled out for initial tests with Skype Insider builds.

Private Conversations will encrypt Skype audio calls and text messages. Images, audio or video files sent via Skype's text messaging feature will also be encrypted.

Skype will integrate the Signal protocol

Microsoft will be using the Signal open-source protocol to encrypt these communications. This is the same end-to-end encryption protocol used by Facebook for WhatsApp and Facebook Messenger, and by Google for the Allo app.

A version of this protocol is also used by the eponymous Signal mobile IM service, Open Whisper Systems' most known product, and the favorite app of all whistleblowers, activists, dissidents, and anyone looking for an app supporting solid E2E encrypted conversations.

If Microsoft gives the go-ahead and E2E support lands in the Skype stable release, all of today's major IM platforms will be supporting encrypted conversations as optional, non-default features.

No support for Skype video calls just yet

You can test the new Skype Private Conversations feature right now by downloading and installing Skype Insider version 8.13.76.8 for Android, iOS, Linux, Mac, and Windows. Keep in mind that you won't be able to use the feature unless you're talking to another person also using a Skype Insider app.

The Skype version under testing right now does not support E2E encryption with Skype video calls or group chats.

"Give it a try by selecting “New Private Conversation” from the compose menu or from the recipient’s profile. After the recipient accepts your invite, all calls and messages in that conversation will be encrypted end-to-end until you choose to end it," explained Ellen Kilbourne, manager of the Skype Insider Program.

In other Skype news, Microsoft dropped support for Facebook sign-in today, meaning users will no longer be able to log into their Skype accounts with their Facebook credentials.
https://www.bleepingcomputer.com/new...tion-to-skype/





House Votes to Renew Surveillance Law, Rejecting New Privacy Protections
Charlie Savage, Eileen Sullivan and Nicholas Fandos

The House of Representatives voted on Thursday to extend the National Security Agency’s warrantless surveillance program for six years with minimal changes, rejecting a yearslong effort by a bipartisan group of lawmakers to impose significant new privacy limits when it sweeps up Americans’ emails and other personal communications.

The vote, 256 to 164, centered on an expiring law that permits the government, without a warrant, to collect communications of foreigners abroad from United States firms like Google and AT&T — even when those targets are talking to Americans. Congress had enacted the law in 2008 to legalize a form of a once-secret warrantless surveillance program created after the Sept. 11 terrorist attacks.

The legislation approved on Thursday still has to go through the Senate. But fewer lawmakers there appear to favor major changes to spying laws, so the House vote is likely the effective end of a debate over 21st-century surveillance technology and privacy rights that broke out in 2013 following the leaks by the intelligence contractor Edward J. Snowden.

Congress did, in 2015, vote to end and replace another program that Mr. Snowden exposed, under which the N.S.A. had been secretly collecting logs of Americans’ domestic phone calls in bulk. But reform-minded lawmakers who hoped to add significant new privacy constraints to the warrantless surveillance program fell short on Thursday.

The vote was a victory for the Trump administration and the intelligence community, which opposed imposing major new curbs on the program, and for Republican leadership, including House Speaker Paul D. Ryan, who had blocked the House from an opportunity to consider a less-sweeping compromise package developed by the House Judiciary Committee. They gambled that faced with an all-or-essentially-nothing choice, a majority of lawmakers would choose the status quo — and won.

Before approving the extension of the law, known as Section 702 of the FISA Amendments Act, the House voted 233 to 183 to reject an amendment that proposed a series of overhauls. Among them was a requirement that officials get warrants in most cases before hunting for and reading emails and other messages of Americans swept up under the program.

Earlier on Thursday, President Trump contradicted his own White House and top national security officials in a Twitter post that criticized an important surveillance law just as Congress began debating whether to approve it. But less than two hours later, the president appeared to reverse himself, telling lawmakers to “Get smart!”

Mr. Trump’s first tweet on the topic appeared to encourage lawmakers to support limiting the law.

“House votes on controversial FISA ACT today.” This is the act that may have been used, with the help of the discredited and phony Dossier, to so badly surveil and abuse the Trump Campaign by the previous administration and others?
— Donald J. Trump (@realDonaldTrump) Jan. 11, 2018

He was referring to an explosive and largely uncorroborated dossier that details claims about ties between Russia and Mr. Trump and his aides.

The tweet enraged Republican leaders on Capitol Hill, who have been trying to chart a course to renew the law more or less intact. Speaker Paul D. Ryan and Mr. Trump spoke by phone between the president's two tweets, according to a senior Republican congressional aide. Asked about the president's conflicting tweets, Mr. Ryan said Mr. Trump had always supported foreign surveillance.

“His administration’s position has been really clear from Day 1, which is: 702 is really important, it’s got to be renewed,” Mr. Ryan told reporters after the vote.

Representative Nancy Pelosi, the House Democratic leader, asked Mr. Ryan to pull the bill from consideration, according to a senior Democratic aide familiar with the request. But Republicans, battling a last-minute push from conservative lawmakers, gambled on moving forward with a vote.

After it was approved, the American Civil Liberties Union said the legislation will give more spying power to the Trump administration.

“No president should have this power,” Neema Singh Guliani, a policy counsel with the A.C.L.U., said in a statement. “Yet, members of Congress just voted to hand it to an administration that has labeled individuals as threats based merely on their religion, nationality or viewpoints.”

Republican leaders in both the House and the Senate had counted on enough moderate Democrats and Republicans to stick together to extend the legal basis for the surveillance program, with only minimal changes. John F. Kelly, the White House chief of staff, was spotted in a House cloakroom talking to members before the vote in a last-minute lobbying push.

.@deirdrewalshcnn reports WH COS John Kelly is in the cloakroom talking to members about the Sec. 702 reauth vote
— Phil Mattingly (@Phil_Mattingly) Jan. 11, 2018

Mr. Trump, who is known to watch Fox News while he is tweeting, posted his tweet shortly after a Fox News legal analyst appealed directly to the president during a Thursday morning segment about the coming House vote. The analyst, Andrew Napolitano, turned to television cameras and said, “Mr. President, this is not the way to go.” He added that Mr. Trump’s “woes” began with surveillance.

By midmorning, in a follow-up tweet, the president appeared to step back from supporting the limits that his own administration has been encouraging lawmakers to reject.

With that being said, I have personally directed the fix to the unmasking process since taking office and today’s vote is about foreign surveillance of foreign bad guys on foreign land. We need it! Get smart!
— Donald J. Trump (@realDonaldTrump) Jan. 11, 2018

Noah Weiland contributed reporting.
https://www.nytimes.com/2018/01/11/u...ess-trump.html

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

January 6th, December 30th, December 23rd, December 16th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing