P2P-Zone  

Old 17-01-18, 08:52 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Default Peer-To-Peer News - The Week In Review - January 20th, ’18

Since 2002

"Every consumer has a right to access online content without interference or manipulation by their internet service provider." – Xavier Becerra, California attorney general


"Differentiation means in this case throttling by Verizon. This would, in theory, be the sort of thing people would want to know—with this knowledge, they could choose to switch to another carrier, or could lodge a complaint against with the Federal Trade Commission." – David Choffnes


"I have a nephew that I put some boundaries on. There are some things that I won't allow; I don't want them on a social network." – Tim Cook






































January 20th, 2018




Democrats are Just One Vote Shy of Restoring Net Neutrality

The Senate effort to undo the FCC's repeal order is close, but faces an uphill battle.
Terrence O'Brien

Senate minority leader Chuck Schumer now says Democrats in the Senate are a single vote away from restoring net neutrality. According to the senator from New York, they now have a total of 50 votes for a Senate resolution of disapproval that would restore the Open Internet Order of 2015 and deliver a stiff rebuke to Ajit Pai and other Republican members of the FCC. It would also prevent the agency from passing a similar measure in the future, all but guaranteeing net neutrality is permanently preserved.

Right now the resolution has the support of all 49 Democrats in the Senate and one Republican, Susan Collins of Maine. But Schumer and the rest of the caucus will have to win over one more Republican vote to prevent Vice President Mike Pence from breaking a tie and allowing the repeal to stand.

Under the Congressional Review Act, the Senate has 60 days to challenge a decision by an independent agency like the FCC. With less than 30 days left to go, Democrats will have to move quickly to convince a "moderate" like John McCain or Lindsey Graham to buck their party. Of course, considering the public outcry that preceded and followed the reversal, it's not impossible. But it's still an uphill battle for supporters of net neutrality.
https://www.engadget.com/2018/01/15/...et-neutrality/





FCC Report Keeps Faster Definition of Broadband and Separates Mobile from Fixed Connections
Devin Coldewey

The FCC’s yearly report on broadband deployment keeps in place some crucial definitions that critics feared would be changed or eliminated to ease the responsibilities of internet service providers. The threat of a lowered speed standard and the merging of mobile and fixed broadband services will not be carried out, it seems.

Broadband will continue to be defined as a connection with speeds of 25 megabits down and 3 megabits up. Another proposed definition of 10 down and 1 up was decried by critics as unrealistic for several reasons; not only is it insufficient for many ordinary internet applications, but it would let providers off the hook, because they would be counted as having deployed broadband wherever their service met this lowered standard.

Fortunately, that isn’t the case, and the 25/3 standard remains in place.

The other worry was the potential decision to merge mobile with fixed broadband when measuring the quality of internet connections available to people throughout the country.

Had the two been merged, an area might have been considered well-served if it was, for example, in range of an LTE tower (giving decent mobile speeds) but only served by sub-1-megabit DSL. Because the proposal would have required only one of the two, that underserved area would have been counted as adequately connected.

But the FCC clearly saw the lack of logic in equating mobile connections and fixed broadband: they’re used, tracked, billed and deployed very differently.

From the fact sheet accompanying the draft report:

“Both fixed and mobile services can enable access to information, entertainment, and employment options, but there are salient differences between the two. Beyond the most obvious distinction that mobile services permit user mobility, there are clear variations in consumer preferences and demands for fixed and mobile services.

Any analysis that only looked at the progress in deploying fixed broadband service or only looked at the progress in deploying mobile broadband service would be incomplete. Therefore, the draft report takes a holistic view of the market and examines whether we are both making progress in deploying fixed broadband service and making progress in deploying mobile broadband service.”

Commissioner Jessica Rosenworcel commended this decision but criticized others in a separate statement, saying “I’m glad that the FCC has backed away from its crazy idea to lower the broadband speed standard. But it defies logic to conclude that broadband is being reasonably and timely deployed across this country when over 24 million Americans still lack access.”

The fact sheet and Chairman Pai’s commentary also get a few hits in regarding the recent decision to roll back the 2015 net neutrality rules, but they aren’t very substantial.

(Commissioner Clyburn writes: “How can this agency now claim that broadband is being deployed to all Americans in a reasonable and timely fashion? Only by repeating the majority’s tired and debunked claims that broadband investment and innovation screeched to a halt in 2015.”)

Pai has, however, proposed a $500 million project to expand rural broadband, the details of which are still forthcoming; I’ve asked his office for more information on it.

The full draft report, when it becomes public, will no doubt contain more interesting information ripe for interpretation, and other commissioners may also weigh in on its successes and shortcomings. In the meantime, it’s reassuring that the main worries leading up to it have been addressed.
https://techcrunch.com/2018/01/18/fc...d-connections/





Flurry of Lawsuits Filed to Fight Repeal of Net Neutrality
Cecilia Kang

The legal fight against the Federal Communications Commission’s recent repeal of so-called net neutrality regulations began on Tuesday, with a flurry of lawsuits filed to block the agency’s action.

One suit, filed by 21 state attorneys general, said the agency’s actions broke federal law. The commission’s rollback of net neutrality rules was “arbitrary and capricious,” the attorneys general said, and a reversal of the agency’s longstanding policy to prevent internet service providers from blocking or charging websites for faster delivery of content to consumers.

Mozilla, the nonprofit organization behind the Firefox web browser, said the new F.C.C. rules would harm internet entrepreneurs who could be forced to pay fees for faster delivery of their content and services to consumers. A similar argument was made by another group that filed a suit, the Open Technology Institute, a part of a liberal think tank, the New America Foundation.

Suits were also filed on Tuesday by Free Press and Public Knowledge, two public interest groups. Four of the suits were filed in the United States Court of Appeals for the District of Columbia Circuit. The Free Press suit was filed in the United States Court of Appeals for the First Circuit.

“The repeal of net neutrality would turn internet service providers into gatekeepers — allowing them to put profits over consumers while controlling what we see, what we do, and what we say online,” said Eric T. Schneiderman, the attorney general of New York, who led the suit by the state officials.

The lawsuits have long been expected. The filings on Tuesday, petitions to begin the suits, kick off what is expected to be an extended legal and political debate about the future of internet policy.

Democrats have rallied to fight the F.C.C.’s repeal of net neutrality, which was passed in a 3-to-2 party line vote in December. The agency is led by Ajit Pai, a Republican nominated by President Trump. All of the attorneys general involved in the suit filed on Tuesday are Democrats.

The lawsuits have the support of the Internet Association, a trade group representing big tech firms including Google and Netflix, giving the various legal challenges financial support and the clout of companies. The companies say internet service providers have the incentive to block and throttle their sites in order to garner extra fees.

The F.C.C. declined to comment on the suits. But it did point to a part of its order that prohibits legal challenges until the new rules are published in the Federal Register. The F.C.C. is expected to enter the new rules into the Federal Register in the coming days or weeks.

The states said they could file a petition to the United States Court of Appeals, starting the process to determine which court would hear the case. That is the action the attorneys general, as well as Mozilla and the Open Technology Institute, took on Tuesday.

The states that signed onto the lawsuit include California, Kentucky, Maryland, Massachusetts and Oregon, as well as the District of Columbia. Xavier Becerra, the California attorney general, said the decision to roll back the agency’s declaration of broadband as a utility-like service will harm consumers.

“Internet access is a utility — just like water and electricity,” Mr. Becerra said in a statement. “And every consumer has a right to access online content without interference or manipulation by their internet service provider.”

In a release, Mr. Schneiderman said the agency’s rollback disregarded a record of evidence that internet service providers could harm consumers without rules. A similar argument was made by Mozilla.

“Ending net neutrality could end the internet as we know it,” said Denelle Dixon, Mozilla’s chief business and legal officer in a blog post. “That’s why we are committed to fighting the order. In particular, we filed our petition today because we believe the recent F.C.C. decision violates both federal law as well as harms internet users and innovators.”

The issue of net neutrality has been fought in court challenges twice before in the past decade. The rules adopted in 2015, which barred the blocking and throttling of sites, were upheld by the United States Court of Appeals in 2016 after legal challenges by telecom companies. The F.C.C. vote in December was to roll back those 2015 rules.

The new lawsuits are among several efforts to restore net neutrality rules. On Tuesday, Senate Democrats announced they were one supporter away from winning a vote to restore net neutrality rules. All 49 members of their caucus, as well as one Republican, have signed on to a resolution to overturn the rules. A similar effort initiated in the House has the support of 80 members.

Success by members of Congress is unlikely, particularly in the House, where Speaker Paul D. Ryan, Republican of Wisconsin, would have to agree to bring the resolution to a vote. The president would also have to sign the resolutions if they passed, but the White House has expressed its support for the rollback of net neutrality rules.
https://www.nytimes.com/2018/01/16/t...s-general.html





Apple Is Blocking an App That Detects Net Neutrality Violations From the App Store

Apple told a university professor his app "has no direct benefits to the user."
Jason Koebler

The most pervasive feeling about the Federal Communication Commission’s net neutrality repeal is one of hopelessness. If we all need to use the internet, big telecom companies control our access to the internet, and there’s no choice about what company to use, how are we supposed to stop these companies from messing with our connections?

The FCC has suggested that consumer outrage will prevent companies from violating net neutrality, but if you're not a network engineer, it can be hard to know if net neutrality is being violated at all. David Choffnes, a researcher at Northeastern University, set out to change that. He created an app to detect net neutrality violations, but Apple has banned it from the App Store, preventing consumers from accessing the information they need to at least know when they're getting screwed over.

Using Apple’s beta testing platform called TestFlight, I tested the app, called Wehe. It’s straightforward. You open the app, agree to a consent form (he is using the data in his research), and click “run test.” The app is designed to test download speeds from seven apps: YouTube, Amazon, NBCSports, Netflix, Skype, Spotify, and Vimeo. According to the app, my Verizon LTE service streamed YouTube to my iPhone at 6 Mbps, Amazon Prime video at 8 Mbps, and Netflix at 4 Mbps. It downloaded other data at speeds of up to 25 Mbps.

“Differentiation means in this case throttling by Verizon,” Choffnes told me. This would, in theory, be the sort of thing people would want to know—with this knowledge, they could choose to switch to another carrier, or could lodge a complaint with the Federal Trade Commission.

Ajit Pai’s FCC has made the argument that “most attempts by ISPs to block or throttle content will likely be met with a fierce consumer backlash … in the event that any stakeholder [ISP] were inclined to deviate from this consensus against blocking and throttling, we fully expect that consumer expectations, market incentives, and the deterrent threat of enforcement actions will constrain such practices.”

But the fact is that every major wireless telecom provider is already throttling data, and we are more-or-less powerless to stop it. And the opaque nature of both the telecom industry and Apple’s App Store vetting process is preventing consumers—and researchers like Choffnes—from getting a full picture of how net neutrality is being violated.

An Apple App Store reviewer told Choffnes that “your app has no direct benefits to the user,” according to screenshots reviewed by Motherboard. According to Apple’s reviewer, the app contained “Objectionable Content,” a catch-all for apps that Apple doesn’t want to let into its App Store. Apple is blocking the app and no one is quite sure why, including Choffnes; neither Apple nor Verizon responded to requests for comment for this article.

Wehe is designed to be part of Choffnes’s research work to determine geographic and carrier-related differences in video throttling. When you open the app, you are presented with a consent form that “invites you to take part in a research project.”

“The purpose of this research study is to understand how cellular internet providers give different performance to different network traffic from your smartphone,” it says, adding that data is anonymized. “For example, we would like to know if a provider is speeding up YouTube traffic and/or slowing down Netflix.”

Wehe, according to the App Store reviewer, “may mislead users by providing inaccurate determinations … specifically, your app is marketed to users as a way to check if their carrier is violating net neutrality. However, your app has no direct benefits to the user from participating in the study.”

Packet inspection and video throttling

When I heard about Wehe, I thought that it must be impossible for an app to detect net neutrality violations. Or at least, I couldn’t think of a mechanism in which it might work. But once I spoke to Choffnes, who has spent much of the past few years reverse-engineering the ways in which telecom companies throttle data, it made sense.

Choffnes is an expert in data “differentiation,” which means he studies how telecom companies alter download speeds so that some kinds of traffic, such as text, photos, or emails, may be prioritized over video content. Such “prioritization” or data discrimination violates one of the core tenets of net neutrality, but data differentiation is commonly used by cell phone providers nonetheless.

“We didn’t have net neutrality even before the rules changed,” Choffnes said. “All the carriers are doing content-based throttling, specifically with video. And some video providers are getting better performance than others.”

That video is being throttled is not a secret. Many telecom providers “zero rate” certain video services and then advertise those services as “unlimited” to customers, meaning data from those services doesn’t count against a customer’s data cap. The most famous instance of this is T-Mobile’s “BingeOn” service, which allows unlimited access to Netflix, YouTube, and a few other major video providers.

But that “unlimited” video means that video is throttled—in BingeOn’s case, T-Mobile video maxes out around 1.5 Mbps, whereas its standard LTE service gets speeds of up to 10 times that for non-video content. Other telecom providers have similar programs; Verizon has plans with “unlimited 4G LTE data” and “premium unlimited 4G LTE data,” and the plan you have determines the resolution and amount of data you can stream before it’s throttled (and putting a cap on resolution is also a form of throttling; delivering lower-resolution video means delivering less data).

Customers may not generally think of this practice as a net neutrality violation, but former FCC chief Tom Wheeler wrote in a letter to Congress in 2016 that such programs are likely violations of the net neutrality rules he put into place in 2015 (that have since been repealed by Ajit Pai’s FCC.)

Zero rating programs “may harm consumers and competition in downstream industry sectors by unreasonably discriminating in favor of select downstream providers, especially their own affiliates,” Wheeler wrote.

We know that telecom companies throttle video, but Choffnes’s research focuses on how and when they throttle. His research has produced methods for actually detecting the mechanics of data differentiation from carrier to carrier. What he’s found is that, for the most part, telecom providers aren’t inspecting the video content itself; they are using a network management tactic known as deep packet inspection, which throttles based on metadata associated with network traffic. What this means is that T-Mobile, for instance, might not try to detect whether something is a video or not, but it can detect whether a service calls its data a video or has the metadata hallmarks of a video. If so, it will set a download speed cap for that specific data.

"We realized that they’re looking for certain text in the network traffic, and if we changed that text, when we send that traffic over the network, it doesn’t get throttled"

For example, when an encrypted connection is established between Netflix’s servers and a customer’s device on T-Mobile’s network (known as a TLS handshake), certain plaintext information is exchanged (host names and server names). In Netflix’s case, one of these servers is called “nflxvideo.net.” If T-Mobile detects this server name in the metadata, it will throttle download data for those packets.

Choffnes learned about this system by reverse engineering it; his team downloaded videos from various video services (including the TLS data and all metadata) and then recreated it on their own servers (called “replays”). What he found is that by changing the metadata of the video’s header—but not the video itself—it could be downloaded at much higher speeds. If he changed the metadata of other types of data (photos, for instance) to have the Netflix metadata, that data would be throttled by the telecom company when it was downloaded.

“We realized that they’re looking for certain text in the network traffic, and if we changed that text—replaced nflxvideo.net with northeasternvideo.com—when we send that traffic over the network, it doesn’t get throttled,” Choffnes said. “This means it’s keyword related and not server or even content related.”
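
To make the mechanism concrete, here is a minimal sketch in Python of that kind of keyword matching; the keyword list, rate values and function name are illustrative assumptions, not any carrier's actual rule set:

    # Toy model of keyword-based deep packet inspection (illustrative only).
    # The carrier never inspects the payload, just plaintext metadata such as
    # the SNI hostname exchanged during the TLS handshake.
    THROTTLE_KEYWORDS = {"nflxvideo", "googlevideo", "vimeocdn"}  # assumed list
    FULL_SPEED_KBPS = 25_000
    VIDEO_CAP_KBPS = 1_500

    def shaping_rate(sni_hostname: str) -> int:
        """Return the download cap applied to a flow, based only on metadata."""
        if any(kw in sni_hostname.lower() for kw in THROTTLE_KEYWORDS):
            return VIDEO_CAP_KBPS   # labelled like video: cap it
        return FULL_SPEED_KBPS      # anything else passes at full speed

    print(shaping_rate("nflxvideo.net"))          # 1500  -> throttled
    print(shaping_rate("northeasternvideo.com"))  # 25000 -> not throttled

The same traffic gets different treatment depending only on its label, which is why renaming the server in the replays defeated the throttling.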

Because throttling is often keyword and not content-related, that means some video services are treated differently from other video services; you may be able to stream Vimeo or a video hosted on a less-popular website faster than you can stream a video on Netflix, for example. And video is generally (not always) throttled around the clock, regardless of the overall traffic being put on a network, which peaks during commutes and in the evenings.

"It’s something we’ve been working on for years, something the academic community thinks is accurate, and we’re working with a regulator to disseminate it so other people can use it"

“When faced with a problem like network management, the question is ‘Do you want to use a sledgehammer or a scalpel?’ You want to use the tool that will have the least negative impact while providing benefit to everyone,” Choffnes said. “What I think is in place today is a bit of a sledgehammer. Video traffic is a cause for congestion, but the video is throttled to a low rate, and it’s done that way all the time.”

An information page for Wehe explains its mission: “We need your help to test more providers, in the US and worldwide, so we can understand how [throttling] policies change over time, location, and network. We are building a website that will publicize these practices, both to inform regulators and to allow consumers to make informed choices about selecting their mobile providers.”

‘Objectionable Content’

To be clear, much of our outrage should probably be directed at the telecom industry, which has never shown much intention of following the principles of net neutrality. But it's no surprise that telecom companies are going to act in the interest of their bottom lines. What's less clear is why, exactly, Apple has banned a pro-consumer app from its App Store.

Choffnes has presented this data at scientific and telecom conferences, and his papers are peer reviewed.

His system is not a perfect way of determining actual network speeds, because he doesn’t have access to telecom infrastructure or video provider servers. But he says that the basic methods of data discrimination have not been disputed by telecom companies and that his work has caught the eye of ARCEP, France’s version of the FCC, which has cited his work and wants to use his methods to catch telecom companies violating net neutrality in the country.

In fact, Verizon is currently paying his team to “research the video performance of Verizon’s video streaming services,” and Google has funded some of his work under its Faculty Award Research program. Choffnes says that the terms of these agreements do not allow Verizon or Google to influence his work: “This contract has no restriction on our ability to publish our findings that do not rely on confidential information, and by definition the measurements we do on these operational networks are not confidential (because we could do these measurement with or without Verizon),” he said.

I mention these partnerships because the industry seems to believe in the accuracy of his work, but Apple, it seems, does not. The company has famously blocked many apps from entering the App Store or has prevented third party apps from accessing data that Apple itself can access. For instance, Apple removed a feature that allowed third-party apps to access iPhone battery cycle data, presumably because software readings of battery health are less accurate than hardware ones. (Apple never publicly addressed why it made this data inaccessible.) But Apple allows many apps that let users run straightforward speed tests of their connections, which Choffnes says use essentially the same technology his app does.

“I probably could have gotten away with calling it a speed test,” he said. “But I wasn’t going to lie to get it published.”

Because Wehe is basically just making requests to Choffnes’s server at Northeastern (which he controls), there is no reason to think that the data it returns is inaccurate, and Apple’s suggestion that people receive no benefit from knowing they’re being throttled would seem to ignore the widespread public outrage about the FCC’s recent vote to repeal net neutrality.

“I’m under contract with a French telecom regulator to provide this app as a service. I’m not a random independent researcher who has decided on a whim to publish something that may or may not do what it says,” he told me. “It’s something we’ve been working on for years, something the academic community thinks is accurate, and we’re working with a regulator to disseminate it so other people can use it.”
https://motherboard.vice.com/en_us/a...ality-app-wehe





Websites Infringing TV and Film Copyright to be Blocked

Judge orders ISPs to block sites involved in illegal sharing of copyrighted material
Aodhan O'Faolain

A body representing some of the world’s biggest TV and film studios has secured High Court injunctions blocking several websites involved in illegal downloading and streaming of films and TV shows.

The injunctions were granted at the Commercial Court on Monday by Mr Justice Brian McGovern, who said he was satisfied that the websites had engaged in widespread infringement of the TV and movies studios’ copyright.

There was a significant public interest in granting the orders, he said, to protect the livelihoods of those whose copyright was being infringed and to safeguard the business of companies involved in the legitimate distribution of such material.

The orders, made under the Copyright and Related Rights Act 2000, were sought by the Motion Picture Association, which represents Twentieth Century Fox, Universal, Warner Brothers, Paramount, Disney, Columbia and Sony Pictures.

The plaintiffs claimed the sites were breaching their copyright.

Their proceedings were against Ireland’s main internet service providers (ISPs): Eircom, Sky Ireland, Vodafone Ireland, Virgin Media Ireland, Three Ireland, Digiweb, Imagine Telecommunications and Magnet Networks.

None of the ISPs opposed the injunctions application.

The group claimed up to 1.5 million people in Ireland may be involved in illegally accessing their films on one of the websites.

The websites blocked include GoMovies, located at 123movieshub.to; Rarbg, located at rarbg.to; EZTV, located at eztv.ag; and Watchfree, currently located at gowatchfreemovies.to.

Extensive libraries

Some of the websites streamed movies over the internet, providing users with extensive libraries of unauthorised copies of content.

The other websites distributed movies and TV shows over the internet via peer-to-peer file sharing.

Jonathan Newman SC, for the group, said the sites in question had offered for viewing many movies and programmes whose copyright belongs to his client’s members.

The association was concerned because the sites had “very substantial” numbers of users in Ireland. Movies and programmes on one of the sites had been viewed between 1.3 million and 1.5 million times, counsel said.

Counsel said those behind the websites had concealed their identities and physical locations.

Email correspondence was sent by the association’s lawyers to the sites concerning the copyright infringements but no responses had been received, counsel said.

Similar orders were granted by the Irish courts in relation to other sites that had been involved in such activities, he said.

Such orders have proven to be effective in other jurisdictions, and there was clear evidence that the orders sought are “dissuasive, effective and proportional” in countering copyright infringement.
https://www.irishtimes.com/news/crim...cked-1.3356668





Infringement Alert

From the SFWA Legal Affairs Committee:

The Internet Archive (Archive.org) is carrying out a very large and growing program of scanning entire books and posting them on the public Internet. It is calling this project “Open Library,” but it is SFWA’s understanding that this is not library lending, but direct infringement of authors’ copyrights. We suspect that this is the world’s largest ongoing project of unremunerated digital distribution of entire in-copyright books. An extensive, random assortment of books is available for e-lending—that is the “borrowing” of a digital (scanned) copy. For those books that can be “borrowed,” Open Library allows users to download digital copies in a variety of formats to read using standard e-reader software. As with other e-lending services, the books are DRM-protected, and should become unreadable after the “loan” period. However, an unreadable copy of the book is saved on users’ devices (iPads, e-readers, computers, etc.) and can be made readable by stripping DRM protection. SFWA is still investigating the extent to which these downloadable copies can be pirated. Unlike e-lending from a regular library, Open Library is not serving up licensed, paid-for copies, but their own scans.

These books are accessible from both archive.org and openlibrary.org. If you want to find out if your books are being infringed at the Internet Archive, go to https://archive.org/search.php, search metadata for your name. You have to register, log in, and “borrow” the books to see if they are there in their entirety. A secondary search at https://openlibrary.org/search may turn up some additional titles, but will also show books that are in the Open Library database that have not been infringed.

Statement from SFWA President, Cat Rambo:

I would like to emphasize that SFWA’s objection here is that writers’ work is being scanned in and put up for access without notifying them.

The organization appreciates the wide range of possible opinions on the matter of copyright, but will continue to insist that it is up to the individual writer whether or not their work should be made available in this way.

If you believe that your copyright has been violated by material available through the Internet Archive, please provide the Internet Archive Copyright Agent with the following information at the address listed below. Alternatively, you can use the SFWA DMCA Notice Generator (http://www.sfwa.org/2010/07/sample-d...r-for-authors/ ) to create a DMCA notice that you can send to the address below. As a temporary measure, authors can also repeatedly “check out” their books to keep them from being “borrowed” by others.

Include:

• Identification of the copyrighted work that you claim has been infringed;
• An exact description of where the material about which you complain is located within the Internet Archive collections;
• Your address, telephone number, and email address;
• A statement by you that you have a good-faith belief that the disputed use is not authorized by the copyright owner, its agent, or the law;
• A statement by you, made under penalty of perjury, that the above information in your notice is accurate and that you are the owner of the copyright interest involved or are authorized to act on behalf of that owner; and
• Your electronic or physical signature.

The Internet Archive Copyright Agent can be reached as follows:

Internet Archive Copyright Agent
Internet Archive
300 Funston Ave.
San Francisco, CA 94118
Phone: 415-561-6767
Email: info@archive.org

http://www.sfwa.org/2018/01/infringement-alert/





Dish Sues IPTV Streamers Over Arabic Channels

Says over a dozen channels have been pirated for OTT services
John Eggerton

Dish has filed suit against two over-the-top video providers, SPider-TV and Tiger Star, that it says are pirating Arabic channels for unauthorized distribution.

Dish says it has sent over 100 notices to both providers and their ISPs asking them to stop transmitting the content, to which Dish holds the rights, including streaming rights, but to no avail.

Channels at issue include Al Arabiya, Al Jazeera Arabic, MTV Lebanon, Al Jadeed and many others.

Dish says the providers are taking the signals off the Dish channels, transcoding them and streaming them over the internet.

“We feel it is our responsibility to ensure the channels and shows we pay to carry exclusively are not pirated and sold illegally,” said Timothy Messner, Dish EVP and general counsel, in a statement. “Despite our best efforts to ask both Spider-TV and Tiger Star voluntarily to cease transmitting the channels without authorization, they have continued to distribute the content, leaving us no course but legal action.”

The suit against SPider-TV was filed in a Maryland U.S. district court. The Tiger Star suit was filed in a Texas U.S. district court.
http://www.broadcastingcable.com/new...hannels/171214





Has Pop Music Lost its Fun?
Fraser McAlpine

It's a commonly held grudge of listeners who are no longer pop's core demographic that the music of the moment is not what it once was. It was a claim made about jazz in the 1920s when New York Evening Post music critic Ernest Newman said, "It is an instrument on which little men can play a few pleasant little tunes; but if a composer of any power were to try to play his tunes on it, it would soon break in his hands." Similar criticisms were repeated in the 50s with rock'n'roll, in the 60s with the beat groups, and onwards to today.

But what happens when science attempts to test these claims? Here are some studies that suggest your parents might have been having a lot more pop fun than you are...

1. The claim: sadder and slower

In a 2012 paper entitled Emotional Cues in American Popular Music: Five Decades of the Top 40, E. Glenn Schellenberg and Christian von Scheve analysed two key elements in hit pop songs. Taking the biggest hits in the Billboard charts from 1950 to 2010, they charted a song's tempo - how fast the backbeat is - and whether it is in a major or minor key. As a rule of thumb, music which is written in a major key tends to sound happier, and minor key songs sound sad.

This isn't a foolproof measurement of a song's overall happiness - some of Coldplay's most sob-worthy choruses are in a major key - but they did find that public taste has shifted towards minor key songs with a slow tempo, such as Naked by James Arthur. Even the major key pop songs have got slower, suggesting fun is becoming a scarcer commodity, highlighting, as they put it, "a progressive increase of mixed emotional cues in popular music".

2. The claim: simpler and louder

This followed a similar study by a team from the Spanish National Research Council, led by artificial intelligence specialist Joan Serrà, who examined nearly half a million pop songs over a similar period (in this case 1955-2010), and looked at their tonal, melodic and lyrical content. They concluded that pop has become melodically less complex, using fewer chord changes, and that pop recordings are mastered to sound consistently louder (and therefore less dynamic) at a rate of around one decibel every eight years.

Serrà told Reuters: "We found evidence of a progressive homogenization of the musical discourse. In particular, we obtained numerical indicators that the diversity of transitions between note combinations - roughly speaking chords plus melodies - has consistently diminished in the last 50 years."

And the report even offers an explanation of some recent hit covers of older songs: "Our perception of the new would be essentially rooted on identifying simpler pitch sequences, fashionable timbral mixtures, and louder volumes. Hence, an old tune with slightly simpler chord progressions, new instrument sonorities that were in agreement with current tendencies, and recorded with modern techniques that allowed for increased loudness levels could be easily perceived as novel, fashionable, and groundbreaking."

3. The claim: antisocial and angry

A year before that, the journal Psychology of Aesthetics, Creativity and the Arts published a report which looked at how the language of popular song has changed over the last 30 years. Researchers took a sample set of the Top 10 most popular songs in America from 1980 to 2007, and looked at how words are used to try and assess how pop fans used music to soundtrack their emotional state at the time. The report suggests that, "Simply tuning in to the most popular songs on the radio may provide people with increased understanding of their generation's current psychological characteristics."

They found that the use of first-person singular pronouns (the word 'I') has increased steadily over time, suggesting that fans have become more interested in reflective first-person songs. This matches a decline in words that emphasise community and working together. They also noted a rise in antisocial and angry words, suggesting that pop hits are reflecting a growing sense of personal fury and social unrest. Accusations with which Eminem will be familiar.

4. The claim: just not as good as it used to be

As anyone who follows election news reporting knows, polling isn't conclusive. And polling people about their musical tastes layers subjectivity on subjectivity, because, when asked to state a preference, people are far happier to admit to loving a classic artist like David Bowie than a more recent arrival, and they're often cherry-picking the very best of the past in any case. That said, it's interesting to note the results of a 2014 poll conducted by Vanity Fair, in which 1,017 adults were asked a series of questions about their musical preferences.

When asked which decade has the worst music, their responses fanned out in broadly chronological order, with the 2010s getting 42% of the vote, the 2000s getting 15%, and the 1990s, 1980s and 1970s coming in fairly equally with 13%, 14% and 12%. This might lead a casual reader to conclude that the people polled were all of a certain age, but it seems to be an evenly held opinion. Of people aged 18-29, 39% voted for the 2010s, while the figure for the over 30s was 43%, which indicates most of the fun is in digging up old songs, rather than keeping up with the new.

5. The claim: more repetitive

Repetition in pop is a key part of its appeal, as essential in Little Richard's Tutti Frutti as it is in Big Shaq's Man's Not Hot. That said, a sterling 2017 report by Daniel Morris on repetition in pop lyrics suggests that hit songs are getting closer and closer to a one-word lyric sheet.

The Lempel-Ziv algorithm is a lossless way to compress data, by taking out repetitions, and Morris used it as a tool to examine 15,000 songs from the Billboard Hot 100 from 1958 to 2014, reducing their lyrics down to their smallest size without losing any data, and comparing their relative sizes. He found two very interesting things. The first was that in every year of study, the songs that reached the Top 10 were more repetitive than their competition. The second is that pop has become more repetitive over time, as Morris points out: "2014 is the most repetitive year on record. An average song from this year compresses 22% more efficiently than one from 1960."
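
For a feel of the method, here is a minimal sketch in Python that uses zlib's LZ77-based DEFLATE as a stand-in for Morris's Lempel-Ziv measure (an assumption; his exact pipeline isn't reproduced here). More repetitive lyrics compress to a smaller fraction of their original size:

    import zlib

    def compression_ratio(lyrics: str) -> float:
        # Smaller ratio = more repetition squeezed out by LZ-style compression.
        raw = lyrics.encode("utf-8")
        return len(zlib.compress(raw, 9)) / len(raw)

    verse = ("I got sunshine on a cloudy day. "
             "When it's cold outside I've got the month of May.")
    hook = "Man's not hot. " * 20   # a highly repetitive lyric sheet

    print(round(compression_ratio(verse), 2))  # near 1.0: little repetition
    print(round(compression_ratio(hook), 2))   # far smaller: compresses efficiently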

Of course, none of this means that pop songs are any less fun. They may be slower and sadder than before, but if pop songs are now simpler and louder and more repetitive than they used to be, that might make up for it. In fact, a 2011 report called Music and Emotions in the Brain: Familiarity Matters, compiled by a team led by Carlos Silva Pereira suggests that the human brain enjoys knowing what is coming next in music. Having conducted fMRI scans on people listening to songs, the report concludes that, "Familiarity seems to be a crucial factor in making the listeners emotionally engaged with music."

So the quicker a song can become familiar, the more chance there is that listeners scooting by on streaming services will stop and play it again. Which would imply that if anything, modern pop music is more fun than ever before.
https://www.bbc.co.uk/music/articles...6-953e8a083334





Buying Headphones in 2018 is Going to be a Fragmented Mess

Into the uncertain wilderness of wireless and digital connections, we go
Vlad Savov

At CES this year, I saw the future of headphones, and it was messy. Where we once had the solid reliability of a 3.5mm analog connector working with any jack shaped to receive it, there’s now a divergence of digital alternatives — Lightning or USB-C, depending on your choice of jack-less phone — and a bunch of wireless codecs and standards to keep track of. Oh, and Sony’s working hard on promoting a new 4.4mm Pentaconn connector as the next wired standard for dedicated audio lovers.

It’s all with the intent of making things better, but before we get to the better place, we’re going to spend an uncomfortable few months (or longer) in a fragmented market where you’ll have to do diligent research to make sure your next pair of headphones works with all the devices you already own.

The self-imposed problem: finding a 3.5mm successor

My overwhelming impression from CES 2018 was that headphone companies have, without exception, bid a silent goodbye to the 3.5mm audio plug. We covered many dozens of new headphones, earphones, and truly wireless buds, and not one among them was a simple “plug this into the round hole on your phone or laptop” affair.

"With Apple’s help, we did this to ourselves"

The vast majority of new headphones were wireless. Audio-Technica’s latest models served as the perfect microcosm of the broader industry trend: the company launched a neckbud-style set of hi-fi earbuds with a plastic collar, two pairs of sporty earphones with hooks around the ear, and two over-ear models, one with built-in noise canceling and the other priced at €69. All wireless, all responding to consumer demand.

Talking to Audio-Technica representatives at CES, I was told that, “The speed that wireless headphones are growing is staggering, especially in terms of value when we consider wireless listening increased its share from approximately a quarter in 2016 to around 45% in 2017.” I heard the same message echoed by Beyerdynamic, 1More, Mee Audio, and every other audio company I spoke to. Many tout the improved convenience of wireless tech, to be sure — but whether or not a headphones maker is convinced there's need for a shift is unimportant: all (even Grado!) are compelled to follow the prevailing winds of the wider tech industry. And phones play a huge role in driving this change:

“Since the iPhone and some other phones do not have a built-in 3.5mm jack, this trend has accelerated and there is no turning back.” — Val Kolton, V-Moda

“Mainstream headphones are becoming wireless first. This is the number 1 request from our customers.” — Sankar Thiagasamudram, Audeze

“Once device manufacturers began removing the 3.5mm headphone jack it put the consumer on notice that change was coming.” — Jonathan Levine, Master & Dynamic

“Clearly, Apple’s recent move to dispense with the 3.5mm socket has had an impact on the market.” — Alexander Van Der Heijden, Bowers & Wilkins

Wireless audio, however, is nowhere near as simple as it initially seems. You get a degree of universal compatibility from Bluetooth, but that quickly spirals into a codec mess if you want to pursue the best possible sound.

AptX HD, LDAC, or AAC? Android or iPhone?

Bluetooth audio has historically sacrificed sound quality for convenience relative to a wired connection. However, there are a couple of standards now that promise “better-than-CD” audio quality. One is Qualcomm’s AptX HD, which has graced the excellent Bowers & Wilkins PX and Beyerdynamic Aventho Wireless from late last year. But the problem with AptX HD is that it’s only supported by a few Android flagships and a limited range of pricier headphones at the moment, and it’s not supported at all by Apple’s iPhone and iPad. The same is true of Sony’s LDAC technology, which similarly promises higher-quality wireless audio (mostly by pumping more data through the air), and is actually available in Android Oreo for phone manufacturers to use, but the number of devices supporting it can be counted on one hand. Apple’s chosen alternative is Bluetooth AAC encoding.

Wireless audio codecs are a deep and dark rabbit hole to go down, and the only sure advice anyone can give today is that you’ll want at least one out of AAC, AptX HD, or LDAC in your next pair of Bluetooth cans. How much of an advantage you enjoy from each will depend on whether all your equipment is compatible. The conundrum for headphone makers? I’ll let 1More explain:

“Apple wants all BT to use AAC — their codec — Android wants everyone on BT 5. In order to make sure wireless headphones have multi-platform capability manufacturers have to adopt technology that is most applicable to all devices. So rather than invest in the latest codecs we have been forced to maintain more standard BT 4.1 and AptX as we wait and see what cellphone companies decide to make standard in the coming year.”

"Bluetooth 5 could improve things, but not as much as you might think"

That’s right, Bluetooth 5 isn’t even a thing yet for many of these companies, but they’ve already got a headache when choosing exactly what transmission method to support. As to Bluetooth 5 headphones, only a couple of companies introduced BT5 models at CES — Anker’s Zolo sub-brand being one of them — and most companies told me that they’re still studying how best to exploit the new capabilities of the latest version of Bluetooth. Audio-Technica, Bowers & Wilkins, and Master & Dynamic all promised they had such models on their road map, though B&W’s Alexander Van Der Heijden cautioned that “BT5 is designed mainly for Low Energy (Small data burst and low latency) for IoT, and therefore running audio over BT5 does not give us any immediate advantages over BT3 as streaming audio requires sustained data transfer.”

Fast pairing, built-in assistants, and charging cables

Another way in which the wireless headphones world is more unequal than the wired one is in the way some devices pair together. Apple’s W1-enabled iPhones and compatible AirPods and Beats headphones are a futuristic dream of instant Bluetooth pairing. Google has added a similar Fast Pair function in Android, but — once again — the devices supporting it are very limited in number. For the vast majority of people who don’t have the latest phone and headphone combo, Bluetooth pairing remains an aggravating and repeated struggle. The problem here, as with the codec issue above, is that wireless tech is indeed advancing in meaningful ways, but the distribution of those advances is both uneven and unpredictable.

Then there’s the matter of integrated smart assistants, such as Amazon’s Alexa and the Google Assistant. You might not think your headphones need to have either of the two built in, but it’s the nature of these voice assistants that once you find a particular use for them, you want them everywhere. They add another spec you’ll probably care to know before making a final determination on your next pair of headphones.

As if that’s not enough fragmentation, we’re now also in the midst of seeing headphone companies switch from the old Micro USB charging cable standard to the newer and better USB-C. B&O Play, for instance, sells the $300 Beoplay E8 truly wireless buds with Micro USB — which the company tells me was a courtesy to its users, most of whom will still have more Micro USB accessories and chargers — but also just announced the USB-C-powered Beoplay H8i and H9i. Apple’s headphones charge via a mix of Lightning and Micro USB, depending on the model, and the dream of having a single USB-C charger to power all of our mobile gear still seems a distant one.

Lightning and USB-C as the pricey cable replacement

If you’re unwilling to deal with the imperfections of Bluetooth or the need to charge yet another thing in your life, there are still wired options for you. The problem with them, however, is that they lock you into your phone’s particular ecosystem and they are nowhere near as cheap as the classic 3.5mm plug alternatives.

Shure used this year’s CES to launch a $99 USB-C cable — not a pair of earphones, just the cable for them — which joins its lineup of $99 replacement cables, one of them being a Bluetooth version and the other a Lightning option. Libratone sells the very good Q-Adapt earphones with either a Lightning or USB-C termination, but those cost $149 each. For the foreseeable future, you’ll find nothing better or cheaper on the USB-C front. I’ve been discussing this issue with audio companies for months now, and at CES they confirmed that USB-C remains a nightmare to navigate since its implementations vary across hardware manufacturers, and it’s difficult for a headphone company to ensure its earphones work with everything. That task is easier on the Lightning front, but prices remain high (thanks in part to Apple’s licensing fees and Made for iPhone certification requirements).

"A perfect storm for the 3.5mm plug’s extinction"

Even the audiophile, decidedly wired offerings at CES like Sennheiser’s new HD 820 were opting for that fancy new Pentaconn connector, or using the more established, heavy-duty XLR and 6.35mm plugs. Sony’s MDR-1AM2 was the closest thing to a regular pair of cans with a 3.5mm connector, but it, too, comes with the addition of a 4.4mm cable in the box. It’s a perfect storm for the 3.5mm plug’s extinction: for portable wired audio, everyone’s gradually moving toward Lightning and USB-C, and for high-end audiophile purposes, there are bigger and better jacks to plug into. The old 3.5mm connector is still going to be around for a long time — it provides the fallback to most wireless over-ear headphones today — but its importance has diminished dramatically in just the past year, and it’s accelerating down that path.

All of this is without even factoring in the proliferation of sound-customizing apps with many of the latest wireless cans and a wave of other digital enhancements that are yet to become public. Behind closed doors at CES, I got to try an early version of an impressive new digital signal processing system that provides more immersive 3D positional audio with the help of integrated gyroscopes. Codecs, apps, customization, Lightning plugs — headphones are becoming more digital than ever before. And as they transition toward their new form, it’ll be on us to become savvier shoppers and figure out the proper ways to put these shiny new pieces together.
https://www.theverge.com/2018/1/18/1...uture-ces-2018





How to Build a Low-Tech Internet

Wireless internet access is on the rise in both modern consumer societies and in the developing world.
Kris De Decker

In rich countries, however, the focus is on always-on connectivity and ever higher access speeds. In poor countries, on the other hand, connectivity is achieved through much more low-tech, often asynchronous networks.

While the high-tech approach pushes the costs and energy use of the internet higher and higher, the low-tech alternatives result in much cheaper and very energy efficient networks that combine well with renewable power production and are resistant to disruptions.

If we want the internet to keep working in circumstances where access to energy is more limited, we can learn important lessons from alternative network technologies. Best of all, there's no need to wait for governments or companies to facilitate: we can build our own resilient communication infrastructure if we cooperate with one another. This is demonstrated by several community networks in Europe, of which the largest has more than 35,000 users already.

More than half of the global population does not have access to the "worldwide" web. Up to now, the internet is mainly an urban phenomenon, especially in "developing" countries. Telecommunication companies are usually reluctant to extend their network outside cities due to a combination of high infrastructure costs, low population density, limited ability to pay for services, and an unreliable or non-existent electricity infrastructure. Even in remote regions of "developed" countries, internet connectivity isn't always available.

Internet companies such as Facebook and Google regularly make headlines with plans for connecting these remote regions to the internet. Facebook tries to achieve this with drones, while Google counts on high-altitude balloons. There are major technological challenges, but the main objection to these plans is their commercial character. Obviously, Google and Facebook want to connect more people to the internet because that would increase their revenues. Facebook especially receives lots of criticism because their network promotes their own site in particular, and blocks most other internet applications. [1]

Meanwhile, several research groups and network enthusiasts have developed and implemented much cheaper alternative network technologies to solve these issues. Although these low-tech networks have proven their worth, they have received much less attention. Contrary to the projects of internet companies, they are set up by small organisations or by the users themselves. This guarantees an open network that benefits the users instead of a handful of corporations. At the same time, these low-tech networks are very energy efficient.

WiFi-based Long Distance Networks

Most low-tech networks are based on WiFi, the same technology that allows mobile access to the internet in most western households. As we have seen in the previous article, sharing these devices could provide free mobile access across densely populated cities. But the technology can be equally useful in sparsely populated areas. Although the WiFi-standard was developed for short-distance data communication (with a typical range of about 30 metres), its reach can be extended through modifications of the Media Access Control (MAC) layer in the networking protocol, and through the use of range extender amplifiers and directional antennas. [2]

Although the WiFi-standard was developed for short-distance data communication, its reach can be extended to cover distances of more than 100 kilometres.

The longest unamplified WiFi link is a 384 km wireless point-to-point connection between Pico El Águila and Platillón in Venezuela, established a few years ago. [3,4] However, WiFi-based long distance networks usually consist of a combination of shorter point-to-point links, each between a few kilometres and one hundred kilometres long at most. These are combined to create larger, multihop networks. Point-to-point links, which form the backbone of a long range WiFi network, are combined with omnidirectional antennas that distribute the signal to individual households (or public institutions) of a community.
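
To see why hops of tens of kilometres are feasible with directional antennas, here is a rough link-budget sketch in Python using the standard free-space path loss formula; the hop length, frequency, transmit power and antenna gains are assumed figures for illustration, not taken from any network described here:

    import math

    def fspl_db(distance_km: float, freq_mhz: float) -> float:
        # Free-space path loss in dB (Friis formula; distance in km, freq in MHz).
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    # Hypothetical 50 km hop at 5.8 GHz with a 30 dBi dish at each end:
    tx_power_dbm, tx_gain_dbi, rx_gain_dbi = 20, 30, 30
    rx_dbm = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(50, 5800)
    print(round(rx_dbm, 1))  # about -61.7 dBm, above a typical -80 dBm sensitivity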

Long-distance WiFi links require line of sight to make a connection -- in this sense, the technology resembles the 18th century optical telegraph. [5] If there's no line of sight between two points, a third relay is required that can see both points, and the signal is sent to the intermediate relay first. Depending on the terrain and particular obstacles, more hubs may be necessary. [6]

Point-to-point links typically consist of two directional antennas, one focused on the next node and the other on the previous node in the network. Nodes can have multiple antennas with one antenna per fixed point-to-point link to each neighbour. [7] This allows mesh routing protocols that can dynamically select which links to choose for routing among the available ones. [8]
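
As a toy illustration of that dynamic link selection (a simplified sketch, not the actual behaviour of any real mesh protocol, which typically weighs measured link quality), a node could run a shortest-path search over the links it currently sees:

    import heapq

    def best_path(links, src, dst):
        # Dijkstra over a dict {node: {neighbour: cost}}; lower cost = better link.
        frontier, seen = [(0, src, [src])], set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, c in links.get(node, {}).items():
                heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
        return None

    # Hypothetical relay graph; costs might reflect measured link quality.
    links = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}}
    print(best_path(links, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])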

Long-distance WiFi links require line of sight to make a connection -- in this sense, the technology resembles the 18th century optical telegraph.

Distribution nodes usually consist of a sectoral antenna (a small version of the things you see on mobile phone masts) or a conventional WiFi-router, together with a number of receivers in the community. [6] For short distance WiFi-communication, there is no requirement for line of sight between the transmitter and the receiver. [9]

To provide users with access to the worldwide internet, a long range WiFi network should be connected to the main backbone of the internet using at least one "backhaul" or "gateway node". This can be a dial-up or broadband connection (DSL, fibre or satellite). If such a link is not established, users would still be able to communicate with each other and view websites set up on local servers, but they would not be able to access the internet. [10]

Advantages of Long Range WiFi

Long range WiFi offers high bandwidth (up to 54 Mbps) combined with very low capital costs. Because the WiFi standard enjoys widespread acceptance and has huge production volumes, off-the-shelf antennas and wireless cards can be bought for very little money. [11] Alternatively, components can be put together from discarded materials such as old routers, satellite dish antennas and laptops. Protocols like WiLDNet run on a 266 MHz processor with only 128 MB memory, so an old computer will do the trick. [7]

The WiFi-nodes are lightweight and don't need expensive towers -- further decreasing capital costs, and minimizing the impact of the structures to be built. [7] More recently, single units that combine antenna, wireless card and processor have become available. These are very convenient for installation. To build a relay, one simply connects such units together with ethernet cables that carry both signal and power. [6] The units can be mounted in towers or slim masts, given that they offer little windload. [3] Examples of suppliers of long range WiFi components are Ubiquiti, Alvarion, MikroTik, and simpleWiFi.

Long Range WiFi makes use of unlicensed spectrum and offers high bandwidth, low capital costs, easy installation, and low power requirements.

Long range WiFi also has low operational costs due to low power requirements. A typical mast installation consisting of two long distance links and one or two wireless cards for local distribution consumes around 30 watts. [6,12] In several low-tech networks, nodes are entirely powered by solar panels and batteries. Another important advantage of long range WiFi is that it makes use of unlicensed spectrum (2.4 and 5 GHz), and thus avoids negotiations with telecom operators and government. This adds to the cost advantage and allows basically anyone to start a WiFi-based long distance network. [9]
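
A back-of-the-envelope calculation shows why solar power is practical at these consumption levels. Here is a rough sizing sketch; all numbers are illustrative (a 30 watt continuous load, 4 peak sun hours per day, two days of battery autonomy at 12 V, and a 30% derating for system losses).

    LOAD_W = 30          # continuous draw of a typical relay node
    SUN_HOURS = 4        # peak sun hours per day (site-dependent)
    AUTONOMY_DAYS = 2    # days the node must ride out without sun
    SYSTEM_V = 12        # battery voltage
    DERATE = 0.7         # charge/discharge losses, usable capacity

    daily_wh = LOAD_W * 24                                       # 720 Wh/day
    panel_w = daily_wh / (SUN_HOURS * DERATE)                    # ~257 W
    battery_ah = daily_wh * AUTONOMY_DAYS / (SYSTEM_V * DERATE)  # ~171 Ah

    print(f"panel: ~{panel_w:.0f} W, battery: ~{battery_ah:.0f} Ah at {SYSTEM_V} V")

A panel of a few hundred watts and roughly a couple of car batteries' worth of storage, in other words -- modest hardware for a node that serves an entire community.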

Long Range WiFi Networks in Poor Countries

The first long range WiFi networks were set up ten to fifteen years ago. In poor countries, two main types have been built. The first is aimed at providing internet access to people in remote villages. An example is the Akshaya network in India, which covers the entire Kerala State and is one of the largest wireless networks in the world. The infrastructure is built around approximately 2,500 "computer access centers", which are open to the local population -- direct ownership of computers is minimal in the region. [13]

Other examples, also in India, are the AirJaldi networks, which provide internet access to approximately 20,000 users in six states, all in remote regions and on difficult terrain. Most nodes in these networks are solar-powered and the distance between them can range up to 50 km or more. [14] In some African countries, local WiFi-networks distribute internet access from a satellite gateway. [15,16]

A second type of long distance WiFi network in poor countries is aimed at providing telemedicine to remote communities. In remote regions, health care is often provided through sparsely equipped health posts, staffed by health technicians with minimal training. [17] Long-range WiFi networks can connect urban hospitals with these outlying health posts, allowing doctors to remotely support health technicians using high-resolution file transfers and real-time communication tools based on voice and video.

An example is the link between Cabo Pantoja and Iquitos in the Loreto province in Peru, which was established in 2007. The 450 km network consists of 17 towers which are 16 to 50 km apart. The line connects 15 medical outposts in remote villages with the main hospital in Iquitos and is aimed at remote diagnosis of patients. [17,18] All equipment is powered by solar panels. [18,19] Other successful examples of long range WiFi telemedicine networks have been built in India, Malawi and Ghana. [20,21]

WiFi-Based Community Networks in Europe

The low-tech networks in poor countries are set up by NGOs, governments, universities or businesses. In contrast, most of the WiFi-based long distance networks in remote regions of rich countries are so-called "community networks": the users themselves build, own, power and maintain the infrastructure. Similar to the shared wireless approach in cities, reciprocal resource sharing forms the basis of these networks: participants can set up their own node and connect to the network (for free), as long as their node also allows traffic of other members. Each node acts as a WiFi routing device that provides IP forwarding services and a data link to all users and nodes connected to it. [8,22]

In a community network, the users themselves build, own, power and maintain the infrastructure.

Consequently, with each new user, the network becomes larger. There is no a priori overall planning. A community network grows bottom-up, driven by the needs of its users, as nodes and links are added or upgraded following demand patterns. The only consideration is to connect a node from a new participant to an existing one. As a node is powered on, it discovers its neighbours, assigns itself a unique IP address, and then establishes the most appropriate routes to the rest of the network, taking into account the quality of the links. Community networks are open to participation by everyone, sometimes according to an open peering agreement. [8,9,19,22]

Despite the lack of reliable statistics, community networks seem to be rather successful, and there are several large ones in Europe, such as Guifi.net (Spain), Athens Wireless Metropolitan Network (Greece), FunkFeuer (Austria), and Freifunk (Germany). [8,22,23,24] The Spanish network is the largest WiFi-based long distance network in the world with more than 50,000 kilometres of links, although a small part is based on fibre-optic links. Most of it is located in the Catalan Pyrenees, one of the least populated areas in Spain. The network was initiated in 2004 and now has close to 30,000 nodes, up from 17,000 in 2012. [8,22]

Guifi.net provides internet access to individuals, companies, administrations and universities. In principle, the network is installed, powered and maintained by its users, although volunteer teams and even commercial installers are present to help. Some nodes and backbone upgrades have been successfully crowdfunded by indirect beneficiaries of the network. [8,22]

Performance of Low-tech Networks

So how about the performance of low-tech networks? What can you do with them? The available bandwidth per user can vary enormously, depending on the bandwidth of the gateway node(s) and the number of users, among other factors. The long-distance WiFi networks aimed at telemedicine in poor countries have few users and a good backhaul, resulting in high bandwidth (40 Mbps or more). This gives them a similar performance to fibre connections in the developed world. A study of (a small part of) the Guifi.net community network, which has dozens of gateway nodes and thousands of users, showed an average throughput of 2 Mbps, which is comparable to a relatively slow DSL connection. Actual throughput per user varies from 700 kbps to 8 Mbps. [25]

The available bandwidth per user can vary enormously, depending on the bandwidth of the gateway node(s) and the number of users, among other factors.

However, the low-tech networks that distribute internet access to a large user base in developing countries can have much more limited bandwidth per user. For example, a university campus in Kerala (India) uses a 750 kbps internet connection that is shared across 3,000 faculty members and students operating from 400 machines, where during peak hours nearly every machine is being used.

Therefore, the worst-case average bandwidth available per machine is approximately 1.9 kbps, which is slow even in comparison to a dial-up connection (56 kbps). And this can be considered really good connectivity compared to typical rural settings in poor countries. [26] To make matters worse, such networks often have to deal with an intermittent power supply.

Under these circumstances, even the most common internet applications have poor performance, or don't work at all. The communication model of the internet is based on a set of network assumptions, called the TCP/IP protocol suite. These include the existence of a bi-directional end-to-end path between the source (for example a website's server) and the destination (the user's computer), short round-trip delays, and low error rates.

Many low-tech networks in poor countries do not conform to these assumptions. They are characterized by intermittent connectivity or "network partitioning" -- the absence of an end-to-end path between source and destination -- long and variable delays, and high error rates. [21,27,28]

Delay-Tolerant Networks

Nevertheless, even in such conditions, the internet could work perfectly fine. The technical issues can be solved by moving away from the always-on model of traditional networks and instead designing networks based upon asynchronous communication and intermittent connectivity. These so-called "delay-tolerant networks" (DTNs) have their own specialized protocols overlaid on top of the lower protocols and do not utilize TCP. They overcome the problems of intermittent connectivity and long delays by using store-and-forward message switching.

Information is forwarded from a storage place on one node to a storage place on another node, along a path that eventually reaches its destination. In contrast to traditional internet routers, which only store incoming packets for a few milliseconds on memory chips, the nodes of a delay-tolerant network have persistent storage (such as hard disks) that can hold information indefinitely. [27,28]
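
Below is a minimal Python sketch of this store-and-forward logic. It is not an implementation of any real DTN protocol (such as the Bundle Protocol), and all names are hypothetical; the point is simply that bundles persist on disk until a contact opportunity arises.

    import json, os

    class DTNNode:
        # Toy store-and-forward node: bundles survive on disk until
        # the next hop happens to be reachable.

        def __init__(self, storage_dir):
            self.storage_dir = storage_dir
            os.makedirs(storage_dir, exist_ok=True)

        def store(self, bundle_id, payload, destination):
            # Persistent storage, unlike a router's in-memory buffer
            # that holds packets for milliseconds only.
            path = os.path.join(self.storage_dir, f"{bundle_id}.json")
            with open(path, "w") as f:
                json.dump({"dest": destination, "payload": payload}, f)

        def forward_pending(self, link_is_up, send):
            # Called periodically: drain stored bundles whenever a
            # link toward their destination is available.
            for name in os.listdir(self.storage_dir):
                path = os.path.join(self.storage_dir, name)
                with open(path) as f:
                    bundle = json.load(f)
                if link_is_up(bundle["dest"]):
                    send(bundle)     # hand the bundle to the next hop
                    os.remove(path)  # our copy is no longer needed

    node = DTNNode("bundles")
    node.store("b1", "hello from the village", "city-hospital")
    node.forward_pending(lambda dest: True, print)  # forwards once a link is up

In a real DTN, the next hop would take "custody" of a bundle and acknowledge it before the sender deletes its copy; this sketch simply trusts the hand-off.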

Delay-tolerant networks combine well with renewable energy: solar panels or wind turbines could power network nodes only when the sun shines or the wind blows, eliminating the need for energy storage.

Delay-tolerant networks don't require an end-to-end path between source and destination. Data is simply transferred from node to node. If the next node is unavailable because of long delays or a power outage, the data is stored on the hard disk until the node becomes available again. While it might take a long time for data to travel from source to destination, a delay-tolerant network ensures that it will eventually arrive.

Delay-tolerant networks further decrease capital costs and energy use, leading to the most efficient use of scarce resources. They keep working with an intermittent energy supply and they combine well with renewable energy sources: solar panels or wind turbines could power network nodes only when the sun shines or the wind blows, eliminating the need for energy storage.

Data Mules

Delay-tolerant networks can take surprising forms, especially when they take advantage of some non-traditional means of communication, such as "data mules". [11,29] In such networks, conventional transportation technologies -- buses, cars, motorcycles, trains, boats, airplanes -- are used to ferry messages from one location to another in a store-and-forward manner.

Examples are DakNet and KioskNet, which use buses as data mules. [30-34] In many developing regions, rural bus routes regularly visit villages and towns that have no network connectivity. By equipping each vehicle with a computer, a storage device and a mobile WiFi-node on the one hand, and by installing a stationary WiFi-node in each village on the other hand, the local transport infrastructure can substitute for a wireless internet link. [11]

Outgoing data (such as sent emails or requests for webpages) is stored on local computers in the village until the bus comes within range. At this point, the fixed WiFi-node of the local computer automatically transmits the data to the mobile WiFi-node of the bus. Later, when the bus arrives at a hub that is connected to the internet, the outgoing data is transmitted from the mobile WiFi-node to the gateway node, and then to the internet. Data sent to the village takes the opposite route. The bus driver -- in effect, a data driver -- doesn't require any special skills and is completely oblivious to the data transfers taking place. He or she does not need to do anything other than come in range of the nodes. [30,31]

In a data mules network, the local transport infrastructure substitutes for a wireless internet link.

The use of data mules offers some extra advantages over more "sophisticated" delay-tolerant networks. A "drive-by" WiFi network allows for small, low-cost and low-power radio devices, which don't require line of sight and consequently need no towers -- further lowering capital costs and energy use compared to other low-tech networks. [30,31,32]

The use of short-distance WiFi-links also results in a higher bandwidth compared to long-distance WiFi-links, which makes data mules better suited to transfer larger files. On average, 20 MB of data can be moved in each direction when a bus passes a fixed WiFi-node. [30,32] On the other hand, latency (the time interval between sending and receiving data) is usually higher than on long-range WiFi-links. A single bus passing by a village once a day gives a latency of 24 hours.
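
Those two figures pin down the effective data rate of such a link. A quick sanity check in Python, assuming one bus per day carrying 20 MB each way:

    MB_PER_PASS = 20
    SECONDS_PER_DAY = 24 * 60 * 60

    bits_per_day = MB_PER_PASS * 8 * 10**6   # 160 million bits per day
    kbps = bits_per_day / SECONDS_PER_DAY / 1000
    print(f"~{kbps:.1f} kbps sustained")     # ~1.9 kbps

In other words, a daily bus delivers roughly the same sustained throughput as the crowded campus link in Kerala mentioned earlier; the differences are the 24-hour latency, and the fact that adding a second bus doubles the capacity.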

Delay-Tolerant Software

Obviously, a delay-tolerant network (DTN) -- whatever its form -- also requires new software: applications that function without a connected end-to-end networking path. [11] Such custom applications are also useful for synchronous, low-bandwidth networks. Email is relatively easy to adapt to intermittent connectivity, because it's an inherently asynchronous communication method. A DTN-enabled email client stores outgoing messages until a connection is available. Although emails may take longer to reach their destination, the user experience doesn't really change.

Browsing and searching the web requires more adaptations. For example, most search engines optimize for speed, assuming that a user can quickly look through the returned links and immediately run a second modified search if the first result is inadequate. However, in intermittent networks, multiple rounds of interactive search would be impractical. [26,35] Asynchronous search engines optimize for bandwidth rather than response time. [26,30,31,35,36] For example, RuralCafe desynchronizes the search process by performing many search tasks in an offline manner, refining the search request based on a database of similar searches. The actual retrieval of information using the network is only done when absolutely necessary.

Many internet applications could be adapted to intermittent networks, such as web browsing, email, electronic form filling, interaction with e-commerce sites, blog software, large file downloads, or social media.

Some DTN-enabled browsers download not only the explicitly requested webpages but also the pages that are linked to by the requested pages. [30] Others are optimized to return low-bandwidth results, achieved by filtering, analysis, and compression on the server side. A similar effect can be achieved through the use of a service like Loband, which strips webpages of images, video, advertisements, social media buttons, and so on, merely presenting the textual content. [26]

Browsing and searching on intermittent networks can also be improved by local caching (storing already downloaded pages) and prefetching (downloading pages that might be retrieved in the future). [26] Many other internet applications could also be adapted to intermittent networks, such as electronic form filling, interaction with e-commerce sites, blog software, large file downloads, social media, and so on. [11,30] All these applications would remain possible, though at lower speeds.
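
Here is a minimal Python sketch of the cache-plus-prefetch idea, with a made-up three-page "web" so that it runs offline; a real system would persist the cache to disk and budget its bandwidth.

    import re

    # A made-up "remote web" so the sketch runs offline.
    REMOTE = {
        "http://a": '<a href="http://b">b</a> <a href="http://c">c</a>',
        "http://b": "page b",
        "http://c": "page c",
    }
    cache = {}        # url -> html; persisted to disk in a real system
    fetch_count = 0

    def fetch(url):   # stands in for a slow, intermittent network fetch
        global fetch_count
        fetch_count += 1
        return REMOTE[url]

    def links_in(html):
        # Crude href extraction; a real client would parse properly.
        return re.findall(r'href="(http[^"]+)"', html)

    def get_page(url):
        # Serve from the cache; on a miss, fetch the page and prefetch
        # everything it links to, so later clicks need no network.
        if url not in cache:
            cache[url] = fetch(url)
            for link in links_in(cache[url]):
                if link not in cache:
                    cache[link] = fetch(link)
        return cache[url]

    get_page("http://a")  # 3 fetches: the page plus 2 prefetched links
    get_page("http://b")  # 0 fetches: already in the cache
    print(fetch_count)    # 3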

Sneakernets

Obviously, real-time applications such as internet telephony, media streaming, chatting or videoconferencing are impossible to adapt to intermittent networks, which provide only asynchronous communication. These applications are also difficult to run on synchronous networks that have limited bandwidth. Because these are the applications that are in large part responsible for the growing energy use of the internet, one could argue that their incompatibility with low-tech networks is actually a good thing (see the previous article).

Furthermore, many of these applications could be organized in different ways. While real-time voice or video conversations won't work, it's perfectly possible to send and receive voice or video messages. And while streaming media can't happen, downloading music albums and video remains possible. Moreover, these files could be "transmitted" by the most low-tech internet technology available: a sneakernet. In a sneakernet, digital data is "wirelessly" transmitted using a storage medium such as a hard disk, a USB-key, a flash card, or a CD or DVD. Before the arrival of the internet, all computer files were exchanged via a sneakernet, using tape or floppy disks as a storage medium.

Just like a data mules network, a sneakernet involves a vehicle, a messenger on foot, or an animal (such as a carrier pigeon). However, in a sneakernet there is no automatic data transfer between the mobile node (for instance, a vehicle) and the stationary nodes (sender and recipient). Instead, the data first have to be transferred from the sender's computer to a portable storage medium. Then, upon arrival, the data have to be transferred from the portable storage medium to the receiver's computer. [30] A sneakernet thus requires manual intervention and this makes it less convenient for many internet applications.

There are exceptions, though. For example, a movie doesn't have to be transferred to the hard disk of your computer in order to watch it. You play it straight from a portable hard disk or slide a disc into the DVD-player. Moreover, a sneakernet also offers an important advantage: of all low-tech networks, it has the most bandwidth available. This makes it perfectly suited for the distribution of large files such as movies or computer games. In fact, when very large files are involved, a sneakernet even beats the fastest fibre internet connection. At lower internet speeds, sneakernets can be advantageous for much smaller files.
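
The arithmetic behind that claim is easy to check. A rough comparison in Python, assuming a courier carries ten 10 TB disks on a 24-hour trip while a dedicated 1 Gbps fibre line transfers the same 100 TB (all numbers illustrative):

    DISKS_TB = 10 * 10   # ten 10 TB disks in one box
    TRIP_HOURS = 24
    FIBRE_GBPS = 1

    payload_bits = DISKS_TB * 8 * 10**12
    courier_gbps = payload_bits / (TRIP_HOURS * 3600) / 10**9
    fibre_days = payload_bits / (FIBRE_GBPS * 10**9) / 86400

    print(f"courier: ~{courier_gbps:.1f} Gbps effective")     # ~9.3 Gbps
    print(f"fibre: ~{fibre_days:.1f} days for the same data") # ~9.3 days

The box of disks sustains an effective rate nearly ten times that of the fibre line; the price, of course, is a latency of a day instead of milliseconds.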

Technological progress will not erode the advantage of a sneakernet. Digital storage media evolve at least as fast as internet connections, so both improve communication in step.

Resilient Networks

While most low-tech networks are aimed at regions where the alternative is often no internet connection at all, their usefulness for well-connected areas cannot be overlooked. The internet as we know it in the industrialized world is a product of an abundant energy supply, a robust electricity infrastructure, and sustained economic growth. This "high-tech" internet might offer some fancy advantages over the low-tech networks, but it cannot survive if these conditions change. This makes it extremely vulnerable.

The internet as we know it in the industrialized world is a product of an abundant energy supply, a robust electricity infrastructure, and sustained economic growth. It cannot survive if these conditions change.

Depending on their level of resilience, low-tech networks can remain in operation when the supply of fossil fuels is interrupted, when the electricity infrastructure deteriorates, when the economy grinds to a halt, or if other calamities should hit. Such a low-tech internet would allow us to surf the web, send and receive e-mails, shop online, share content, and so on. Meanwhile, data mules and sneakernets could serve to handle the distribution of large files such as videos. Stuffing a cargo vessel or a train full of digital storage media would beat any digital network in terms of speed, cost and energy efficiency. And if such a transport infrastructure would no longer be available, we could still rely on messengers on foot, cargo bikes and sailing vessels.

Such a hybrid system of online and offline applications would remain a very powerful communication network -- unlike anything we had even in the late twentieth century. Even if we envision a doom scenario in which the wider internet infrastructure would disintegrate, isolated low-tech networks would still be very useful local and regional communication technologies. Furthermore, they could obtain content from other remote networks through the exchange of portable storage media. The internet, it appears, can be as low-tech or high-tech as we can afford it to be.

Edited by Jenna Collett
http://www.lowtechmagazine.com/2015/...-internet.html





How Millions of Iranians Are Evading the Internet Censors
Sam Schechner

Iran’s new offensive against social media is showing signs of backfiring.

Authorities in Tehran have ratcheted up their policing of the internet in the past week and a half, part of an attempt to stamp out the most far-reaching protests in Iran since 2009.

But the crackdown is driving millions of Iranians to tech tools that can help them evade censors, according to activists and developers of the tools. Some of the tools were attracting three or four times more unique users a day than they were before the internet crackdown, potentially weakening government efforts to control access to information online.

“By the time they wake up, the government will have lost control of the internet,” said Mehdi Yahyanejad, executive director of NetFreedom Pioneers, a California-based technology nonprofit that largely focuses on Iran and develops educational and freedom of information tools.

An official at Iran’s United Nations mission didn’t immediately respond to a request for comment.

In recent days, Iran has said it has contained days of public demonstrations against the regime. Protesters used social media to spread the word about, or bear witness to, the protests, as people did during the Green Movement in 2009.

Iran blocked major social-media sites, such as Twitter Inc. and Facebook Inc., in 2009.

This time around, encrypted social-media app Telegram, which is widely used in Iran, became one of the key communication tools among protesters. Iranians have used Telegram to share information about demonstrations and videos of gatherings.

Iran moved to block Telegram in late December. In response, Iranians are flocking to a number of popular so-called circumvention tools. Downloads of such tools surged after the government move, according to data gathered by ASL19, a Toronto-based research and tech lab that helps people in Iran access information.

“When Telegram got blocked, we got a big push,” said Michael Hull, co-founder of Psiphon Inc., a Toronto-based firm that makes one such app. Psiphon said the number of unique users a day in Iran jumped from about 3 million to more than 10 million on Jan. 1 and 2, amid the protests, and remains around 8 million.

“When governments do this stuff, they are our best marketing tool,” he said.

The Psiphon app works in part by redirecting and camouflaging user traffic through cloud-service providers.

Adam Fisk, founder of Lantern, another popular app that had been primarily used in China, announced last week that it would remove all data caps for users in Iran—allowing them to browse banned sites and use banned services without limits. Its global number of mobile users grew fourfold after Telegram was blocked, with almost all the growth from Iran, said Mr. Fisk, whose firm, Brave New Software Project Inc., is based in Los Angeles.

Circumvention tools—some of which have received funding from U.S. government programs dating back as far as the early 2000s—have been increasing in sophistication in recent years. That has set up an arms race with authorities amid government crackdowns by countries including China and Turkey.

Governments are usually reluctant to shut off all domestic access to the internet, but authorities can order internet-service providers to cut off domestic access to some services. They can block or limit access to specific addresses or slow download speeds to impractical rates—essentially making the internet impossible to use.

Circumvention tools use various methods to get around the blocking of specific services. One popular technique is to redirect users’ internet traffic bound for banned addresses via foreign cloud-service providers or content-delivery networks that are used to boost download speeds, making traffic harder to spot.

A regime could still block individual cloud-service providers, but that would end up blocking lots of other traffic from local businesses and residents.

Another technique is to encrypt and camouflage data—making a Telegram message look like an email, for example.
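
To make the first of those techniques -- redirecting traffic through shared cloud infrastructure -- concrete: in so-called "domain fronting", the TLS handshake visible to the censor names an innocuous domain hosted on a large cloud provider, while the Host header inside the encrypted tunnel asks the same provider for the blocked service. Below is a conceptual Python sketch with hypothetical domain names; several large cloud providers have since moved to disable the trick, and real circumvention tools layer on far more than this.

    import socket, ssl

    FRONT = "innocuous-site.example"    # what the censor sees in the TLS SNI
    HIDDEN = "blocked-service.example"  # what the provider actually routes to

    ctx = ssl.create_default_context()
    with socket.create_connection((FRONT, 443)) as raw:
        # The TLS layer, visible on the wire, names only the front domain.
        with ctx.wrap_socket(raw, server_hostname=FRONT) as tls:
            # The Host header, hidden inside the encrypted tunnel, asks
            # the shared infrastructure for the blocked service instead.
            request = (
                f"GET / HTTP/1.1\r\nHost: {HIDDEN}\r\n"
                "Connection: close\r\n\r\n"
            )
            tls.sendall(request.encode())
            print(tls.recv(4096).decode(errors="replace"))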

Problems with internet service can still crop up. One Twitter user posted on Dec. 31 that the internet had slowed and Psiphon for a short time was constantly getting disconnected. “My access to domestic websites, however, has not changed at all,” wrote the user, who said they were posting from Tehran.

Still, the new tools are giving users access to Telegram, activists say. And they can also expose users to other blocked apps and websites.

“People are using circumvention tools to access Telegram who might not normally use them,” said Collin Anderson, a Washington, D.C.-based researcher who studies internet infrastructure and human rights. “And that is giving them access to a much wider internet.”
https://www.msn.com/en-us/news/other...ors/ar-BBI9g95





A Step in the Right Direction: House Passes the Cyber Vulnerability Disclosure Reporting Act
Nate Cardozo and Andrew Crocker

The House of Representatives passed the “Cyber Vulnerability Disclosure Reporting Act” this week. While the bill is quite limited in scope, EFF applauds its goals and supports its passage in the Senate.

H.R. 3202 is a short and simple bill, sponsored by Rep. Sheila Jackson Lee (D-TX), that would require the Department of Homeland Security to submit a report to Congress outlining how the government deals with disclosing vulnerabilities. Specifically, the mandated report would comprise two parts. First, a “description of the policies and procedures developed [by DHS] for coordinating cyber vulnerability disclosures,” or in other words, how the government reports flaws in computer hardware and software to the developers. And second, a possibly classified “annex” containing descriptions of specific instances where these policies were used to disclose vulnerabilities in the previous year, leading to mitigation of the vulnerabilities by private actors.

Perhaps the best thing about this short bill is that it is intended to provide some evidence for the government’s long-standing claims that it discloses a large number of vulnerabilities. To date, such evidence has been exceedingly sparse; for instance, Apple received its first ever vulnerability report from the U.S. government in 2016. Assuming the report and annex work as intended, the public’s confidence in the government’s ability to “play defense” may actually increase.

The bill has no direct interaction with the new Vulnerabilities Equities Process (VEP) charter, which was announced last November. As we said then, we think the new VEP is probably a step in the right direction, and this bill provides further support for transparency into the government's handling of vulnerabilities.

As an aside, we question the need to classify the annex describing actual instances of disclosed vulnerabilities. Except perhaps under exceptional circumstances, this should be public, especially coming after dubious statements by officials, such as White House Cybersecurity Coordinator Rob Joyce's claim last week that “the U.S. government would never put a major company like Intel in a position of risk like this to try to hold open a vulnerability.” Reassurances like that remain hard to take at face value in light of the NSA’s recent history of sabotaging American companies’ computer security.

We’ll be watching as the bill moves to the Senate.
https://www.eff.org/deeplinks/2018/0...-reporting-act





Facebook Report Sparks Danish Crackdown on Underage Sex Posts
Frances Schwartzkopff

• Police have charged more than 1,000 young people in case
• Crackdown comes as global #MeToo movement focuses on sex abuse

Danish police have charged 1,004 young people with distributing sexually explicit material of two 15-year-olds following a tip-off from Facebook Inc.

The activity may constitute distribution of child pornography, the Danish National Police said on Monday. The case is “very large and complex,” Police Inspector Lau Thygesen said. “We’ve taken the matter very seriously as it has big consequences for those involved.”

The case -- the biggest of its kind in Denmark -- follows calls for greater efforts to clamp down on revenge porn and distribution of private material after young women last year detailed in local newspapers how their lives had been destroyed by the publication of photos and video intended only for their partners. The crackdown also comes amid a shift in tolerance toward such acts as the #MeToo movement gains traction.

“We want to give out a warning to young people: think about what you’re doing,” Flemming Kjaerside, a police superintendent for Denmark’s National Crime Center, said by phone. “Don’t ever share sex videos. It can have consequences for the victims and also for those distributing. We really hope that it is an eyeopener for young people, that they should be careful in the digital world about what you should do.”

Danish police said they began their investigation after Facebook notified U.S. authorities of two explicit video clips and a photograph of the couple on its chat platform Messenger last year. U.S. authorities in turn notified European police. While most of those charged distributed the material a couple of times, others distributed it several hundred times, Danish police said.

Those facing charges range in age from 15 years to people in their 20s, Kjaerside said. About 800 are male. If convicted, they face possible prison sentences and will be shut out of some professions such as childcare, Kjaerside said. They may also face immigration restrictions to the U.S.
https://www.bloomberg.com/news/artic...rage-sex-posts





1,000 Danes Accused of Child Pornography for Sharing Video of Teens
Martin Selsoe Sorensen

About 1,000 adults and adolescents may face prosecution for sharing online video of two 15-year-olds having sex, and more people may be charged for spreading the images, the Danish National Police said Monday.

Consensual sex between 15-year-olds is legal in Denmark, but anyone who forwarded the video violated the law against distributing child pornography, the police explained. The two video clips — one 50 seconds long, the other 9 seconds — were shared through Facebook’s Messenger app.

The case prompted outrage when it was first reported last year, because the video appeared to show the girl being abused, penetrated with foreign objects. The girl has said that she consented to sex, but not to the abuse or to the recording.

The police notified 1,004 people on Monday that they faced preliminary charges, which the police can issue on their own; prosecutors will decide whether to proceed with charges and take the cases to trial. Investigators are looking into whether the video was distributed on platforms other than Facebook.

“It may sound very dramatic that we’re charging with child pornography,” said Flemming Kjaerside, a police superintendent. “Many had no intention to distribute child pornography, but objectively speaking, that’s what they’ve done.”

With the ubiquity of phone cameras, cases like this one have become a recurring phenomenon around the world, testing accepted legal standards. In some instances, including a highly publicized case in North Carolina, minors have faced criminal charges for sending explicit pictures of themselves. In others, like a Colorado case involving more than 100 high school students, the authorities have declined to bring any charges.

Those found guilty in the Danish case are unlikely to go to prison, but the convictions will remain on their records for 10 years, barring them from becoming police officers and from taking certain jobs working with children. Some have already confessed, Mr. Kjaerside said, as the police across the country contacted the accused and their parents, and conducted the first rounds of questioning.

It is not clear whether all of the people charged knew that the boy and girl in the videos were under 18, the minimum age for legally distributed pornography. If they convince a judge that they had no way of knowing, the child pornography charges could be dropped, but they could still be charged with distributing the material without the consent of the couple in the images.

All but eight of the accused are under 25 years old, and four-fifths of them are male. A few 14-year-olds who also shared the video are not being prosecuted, the police said.

Mira Bech, 19, who received notice to contact the police for questioning, admitted seeing and storing the video, but she denied sharing it. She said she knew others who also had the video but had not been charged.

“This will ruin my life,” she told TV2, a national television station. “It’s the world’s most ridiculous case. I couldn’t tell that the people in the video were under 18.”

Mr. Kjaerside, the police superintendent, said that charges were leveled only against those who had shared the video.

The case came to light in the fall of 2017, after Facebook alerted the National Center for Missing and Exploited Children in the United States, and that group contacted American and international authorities. Danish police spent months identifying the social media users, their IP addresses, phone numbers and names.

The recordings were made at a party by friends of the couple, who were the first to share them online. Two of them were fined last year for their part in the distribution.

The girl in the video later told Information, a Danish newspaper, that one of the people responsible tried to use the recordings to blackmail her. She said she received a message saying: “I have a video of you. If you don’t send me a nude photo, I’ll share it with all of my friends.”

“I tried to forget that evening,” she said. “I knew that it was filmed, but I didn’t realize they would think of passing the videos on to others.”

Kuno Sorensen, a psychologist with Save the Children Denmark, said, “Youngsters are afraid of reporting these cases to the police,” and they are embarrassed to discuss them with their parents or other authority figures.

“A lot of the charged are under 18 years,” he said. “It’s sad that they become hostages in this, but at the same time they’ve committed a crime with a heavy impact on the victims.”

For years, Danes have discussed the issue of sexual images shared without consent, so it is hard to believe that the people who spread the video were ignorant of the consequences, said Emma Holten, who is known for her campaign against cyberbullying.

“Four years ago, I would have felt sorry for them,” she said. “Back then you could have argued that they were not aware of that it was illegal, but today they know.”

Rather than tell children not to take explicit images, she said, adults should tell them not to bully or humiliate others, just as they do in school or on playgrounds.

“Telling them to not take photos is like trying to forbid them to have sex,” she said. “That’s been tried for 10,000 years and it didn’t work.”
https://www.nytimes.com/2018/01/15/w...phy-video.html





Apple CEO Tim Cook: I Don't Want My Nephew on a Social Network
Rob Price

• Apple CEO Tim Cook on Friday spoke about technology overuse, saying he wouldn't allow his nephew to use social networks.
• Cook is the latest in a long line of executives and others in the tech industry who have raised concerns about the downsides of technology products and services.
• A former Facebook executive recently said social media was "destroying how society works," and an inventor of the iPhone said Apple and other tech firms needed to do more to address the concerns.

Add Apple CEO Tim Cook to the list of tech luminaries warning about the potential risks of modern technology.

Speaking at a school in England on Friday, Cook said he didn't want his nephew to use social media, according to The Guardian. He also argued that the use of tech in schools should be limited.

"I don't have a kid, but I have a nephew that I put some boundaries on," Cook said, according to The Guardian. "There are some things that I won't allow; I don't want them on a social network."

He said he didn't believe in overuse of technology, adding: "I'm not a person that says we've achieved success if you're using it all the time ... I don't subscribe to that at all."

And while technology companies such as Apple have for years pushed their products on schools, Cook acknowledged that sometimes an iPad is inappropriate in the classroom.

"There are still concepts that you want to talk about and understand," he said, according to the Guardian report. "In a course on literature, do I think you should use technology a lot? Probably not."

The tech industry is getting worried about what it has built

In recent months, numerous executives and others in the tech industry have expressed concerns about the effect of technology on society and human minds.

Research has pointed to links between mental health and tech usage. Recent studies have found that children who use smartphones for three hours a day or more are much more likely to be suicidal, and that frequent social-media use increased the risk of depression by 27% among eighth-graders.

Sean Parker, the first president of Facebook, has charged that social networks are exploiting human "vulnerability," saying: "God only knows what it's doing to our children's brains."

Chamath Palihapitiya, another former Facebook executive, said late last year that social media was "destroying how society works," adding that he felt "tremendous guilt" for what he helped make.

"In the back, deep, deep recesses of our mind, we kind of knew something bad could happen," Palihapitiya said, though he later walked back parts of his remarks after they attracted significant international media attention.

And Roger McNamee, an early investor in Facebook and Google, has been increasingly vocal in his criticism of those companies.

"The people who run Facebook and Google are good people whose well-intentioned strategies have led to horrific unintended consequences," he told The Guardian in October.

He added: "The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models."

Apple is drawing scrutiny

After two major shareholders publicly raised concerns about iPhone addiction among kids, the company promised to introduce features to help combat the issue.

Meanwhile, Tony Fadell, a creator of the iPod and iPhone, lumped Apple in with other tech giants when he said the industry wasn't doing enough to tackle tech addiction.

"Apple Watches, Google Phones, Facebook, Twitter — they've gotten so good at getting us to go for another click, another dopamine hit," he said in a tweet. "They now have a responsibility & need to start helping us track & manage our digital addictions across all usages — phone, laptop, TV, etc."

Cook's steps to limit his nephew's use of technology resemble those of other tech executives. The Microsoft mogul Bill Gates said he capped his daughter's screen time and refused to let his children get smartphones until they were 14. And Steve Jobs, Cook's predecessor as CEO of Apple, said right after the release of the iPad in 2010 that he had barred his kids from using it.

"We limit how much technology our kids use at home," he said.
http://www.newstimes.com/technology/...ds-6398412.php





Apple is Hiring Dozens of Work-From-Home Positions Right Now
Heather Leighton

If you want a solid Apple discount and want to work from home, there's a job with Apple you can apply to right now.

The tech company announced online that it is looking for full-time employees to work as AppleCare at-home advisers and managers -- customer service providers who help with technical support for Apple products, including iPhones, iPads and MacBooks.

"If you love exploring the ways technology helps you do all your favorite things, you'll probably be great at sharing your knowledge with others," the job listing reads. "That's what you'll do every day as an Apple At Home Advisor. And with each customer conversation you have, it becomes clear: You're not just supporting technology. You're supporting people."

For the role, you'll be working from home, but Apple makes it very clear that it is a professional role that will require training and discipline. Along with the company discount, other perks include paid time away and career growth.

Current positions include at-home advisors, team managers, and area managers. As of publication, when searching for "at-home" positions on their job search website, Apple has more than 50 positions to fill.

To look into the job and apply, visit the Apple website here.
http://www.newstimes.com/jobs/articl...e-12497878.php





CES Was Full of Useless Robots and Machines That Don’t Work

This year’s electronics expo promised a ‘better life’ and ‘better world.’ It instead offered a folding machine that can’t fold sweatshirts.
Taylor Lorenz

There’s a woman in Sweden named Simone Giertz who makes a living building shitty robots. Her well-intentioned creations fail comically at waking her up, brushing her teeth, washing her hair, applying lipstick, and making her breakfast.

At CES in 2018, any number of these robots would have been right at home.

CES is a massive annual trade show in Las Vegas where hundreds of thousands of people gather to see the latest and greatest new products in consumer tech. In recent years, the show has transformed into a sprawling behemoth that dominates the entire city for a week.

The event is meant to deliver a vision of the future. The technology on display is supposed to show you what the next generation of cars, TVs, computers, cameras, etc. will look like. At best, the products deliver a rough sketch.

Take the FoldiMate, an $850 robotic machine that can supposedly fold your clothes. The machine, which took up more space than a washing machine, might be worth it if you could dump a huge pile of laundry inside some chamber and have your garments returned to you in neatly folded stacks. But that type of machine has yet to be built.

In order for the FoldiMate to work, you must individually button up each shirt then manually clip it onto the machine, which could be more time consuming than just folding everything yourself.

The machine can only fold certain items, too. Dress pants and traditional button-up shirts are fine; bulky sweatshirts, baby clothes, socks, and undergarments are off the table.

The FoldiMate fit right in with the other “smart home”-type products at CES, where the primary innovation in the past year seemed to be adding Amazon Alexa to absolutely everything.

The Haier smart mirror caught my eye as I stepped into the Central Hall of the convention center. It promised to help me dress by recommending outfits for travel, work, or a date. It could also give detailed washing instructions for different garments and track where each one was sitting in my closet.

Intrigued, I asked how it would know so much about all my clothes. “Do I dump all my laundry into a big scanner?” I asked naively.

The cheery brand ambassador laughed. “The mirror gets all its information from RFID chips in the clothing, which all clothes will come with in the future.”

I asked how this product would help someone who buys their clothes in the year 2018 and still wears a not insubstantial number of sweaters from college.

Her face dropped and she explained that for the time being I would have to enter all of the information by hand, tagging every item in my wardrobe. "The mirror also plays workout videos," she said, as I walked away.

As I walked down the long corridors between booths I saw walls of TVs, rows of massage chairs, and many, many cars.

One man dressed in shimmering red coattails and a red top hat explained the intricacies of different types of Kicker amplifiers and speakers to a group of auto industry marketers.

Finally, I was confronted by a small robot dance crew. The 3-foot-tall, white, child-shaped bots were meant to provide companionship for seniors and children. Naturally, they were broken.

As they swayed back and forth, the iPads affixed to their chests all read “Sign into Chrome.”

I asked how frequently the screens on these robots malfunctioned, and a woman standing in the booth area said she didn't know.

But many companion robots didn't seem to work. Either their touch screens didn't recognize my finger, they couldn't execute basic voice commands, or their high-pitched robotic voices were too difficult to understand.

These are the robots technology companies want old people to rely on, but trying to connect with them was like attempting to extract emotional support from a broken Windows tablet.

One companion bot called the Loobot can supposedly be controlled with your mind. “It can even read a child’s facial expression,” the man at the booth told me. But in order for it to work, the user has to wear an uncomfortable metal headband.

Drone cages were set up all over one area of the South Hall. One drone whizzed by and a loud voice came over the loudspeaker. “Amigo drone, it’s your friend,” the voice said. “Take it with you everywhere you go.”

I wondered how I would take it anywhere I normally go in New York City, or most other places where drones are restricted by no-fly zones.

Then there was the self-driving luggage.

90Fun’s Puppy 1 self-driving rollaway, which uses Segway technology to roll behind you, couldn’t go 10 feet without falling on its face. A Chinese competitor I observed in action kept losing its owner and was abysmally slow. I couldn’t imagine running late for a flight and trying to keep any of these in tow.

The United States Post Office had a giant booth with a game set up where you could pretend to deliver a package. A Kodak booth was set up to promote its new cryptocurrency. American Express kept trying to airdrop me marketing materials every time I walked by.

A giant banner in the main hallway read "A better life. A better world." But all I could think of was how much I wanted to be back home in the real world where, even if it's primitive, most technology just works.
https://www.thedailybeast.com/ces-wa...that-dont-work





Amazon Won't Say if it Hands Your Echo Data to the Government

The retail, cloud, and device giant stands as the least transparent of transparent tech companies.
Zack Whittaker

Amazon has a transparency problem.

Three years ago, the retail giant became the last major tech company to reveal how many subpoenas, search warrants, and court orders it received for customer data in a half-year period. While every other tech giant had regularly published its government request figures for years, spurred on by accusations of participation in government surveillance, Amazon had been largely forgotten.

Eventually, people noticed and Amazon acquiesced.

Since then, Amazon's business has expanded. By its quarterly revenue, it's no longer a retail company -- it's a cloud giant and a device maker. The company's flagship Echo, an "always listening" speaker, collects vast amounts of customer data that's openly up for grabs by the government.

But you wouldn't know that from Amazon's twice-yearly transparency figures.

In fact, Amazon has been downright deceptive in how it presents the data, obfuscating the figures in its short, contextless, twice-yearly reports. Not only does Amazon offer the barest minimum of information possible, the company has deliberately misled -- and continues to mislead -- its customers by actively refusing to clarify how many customers, and which customers, are affected by the data demands it receives.

ZDNet began covering Amazon's then-lack of transparency when Stephen Schmidt, chief information security officer for Amazon Web Services (AWS), posted the debut report on the "AWS Security blog" late on a Friday night in mid-2015, and has covered each report since.

Since then, every report was put on an AWS subdomain page, which asks in the footer if you "want more information about AWS information requests?"

After its second report, we asked Amazon spokesperson Frank Fellows in July 2016 if the company would include data such as Echo audio, retail, and mobile service data in the future. He declined to comment.

Transparency reports came and went. We would occasionally contact an Amazon spokesperson for comment to provide context to data found in each report, but the company would either not respond or decline to comment.

Then, earlier this month, after we reported a record high in government demands for data, Amazon spokesperson Stacy Mitchell emailed to say the report "actually focuses solely on Amazon" and not just on AWS as we had reported, and as we had assumed in previous reports. With that being the case, we asked which products, services, and divisions the data in the report related to, but the spokesperson would not say. The logic was that if the figures don't solely relate to AWS as the first transparency report was billed, it was necessary to provide context to what the figures did in fact relate to. We pressed, but, clearly at an impasse, we reached out to another spokesperson, Grant Milne, for clarity. After a short back and forth, Milne also refused to say which products, services, and divisions were included in the report.

Lastly, we asked Ty Rogers, Amazon's director of corporate communications, who also declined to comment.

What started as a debut transparency report with all the hallmarks of an effort to appease AWS customers (and was misconstrued as such by this reporter) became, three years later, a successful effort to mislead and confuse by deliberately avoiding a simple question.

If Amazon's transparency reports are not limited to AWS, the implication is that the government has requested customer data that includes Echo audio files and user shopping activity, at least.

"With Amazon Echo microphones sitting inside so many American homes, it's essential that Amazon explain how often governments demand that data and how it fights back against overbroad requests," said Matt Cagle, technology and civil liberties attorney at the ACLU of Northern California.

"Amazon's 'customer first' commitment requires it," he said, referring to a now well-known quote by the company's founder Jeff Bezos.

No tech or telecom company is obligated to reveal how many requests for customer data they receive from the government in any set time period. But after Google proactively revealed its first transparency report in 2010, a raft of companies have since published their own figures, catalyzed in part following the NSA surveillance scandal in 2013, in an effort to counter the narrative that they were complicit or cooperated with government spying.

In the months and years after, Apple, Facebook, Microsoft, and Yahoo -- among those named -- began releasing more data points on the number of subpoenas, search warrants, and court orders they receive each half-year.

These reports now have more context than ever and are public -- letting anyone drill down into the data by region or country, by the type of request, and by how many accounts are impacted in each reporting period. And, in some cases, the companies make available downloadable spreadsheets packed with raw data.

Amazon, which wasn't named as a surveillance partner in the leaked NSA documents, publishes the least amount of data in its reports. By comparison, each report has just three pages and contains only basic information, like how many requests the company received and how many were approved or denied.

Unlike other companies, Amazon doesn't even say how many customers were affected.

By that logic, a single government data request could amount to any number of customers or potentially all its customers. (Amazon, for its part, says in its reports that it "objects to overbroad or otherwise inappropriate" subpoenas, search warrants, and court orders.)

With Microsoft, Google, Facebook, and Apple, it's arguably more clear what kind of data each company collects than Amazon, which has a sprawling business across retail, the cloud, and devices like its Fire tablets and Echo speakers.

It's those Echo speakers that have the potential to be more intrusive than any other of the company's businesses, products, or services.

Long have there been concerns that the government could access data from the Alexa-powered Echo speaker -- or worse, compel the company to remotely activate an Echo speaker in someone's home or workplace (or that the company could do so on its own). In 2016, Gizmodo filed a freedom of information (FOIA) request to see if the FBI had ever wiretapped an Echo as part of a criminal investigation, but the FBI neither confirmed nor denied if it had ever tapped the Echo.

Google doesn't publish data specific to Google Home, the search giant's rival smart speaker, but it breaks down the ratio of requests received to accounts impacted. (A Google spokesperson did not respond to a request for comment prior to publication.) And that's a problem, too. On the other hand, Apple, with its rival HomePod speaker due out later this year, anonymizes user data, meaning there's nothing for the company to turn over even if a demand was made.

But where Amazon has the market share -- data says as many as 35 million Americans are Echo owners -- the company falls far below what modern tech companies see as a baseline of transparency. And if Amazon won't say how many of its customers had their data turned over to the authorities, it looks as though the company has something to hide.

Ironically, that's the opposite of what the company intended in publishing its transparency reports.
http://www.zdnet.com/article/amazon-...-tech-company/





NSA Deleted Surveillance Data it Pledged to Preserve

The agency tells a federal judge that it is investigating and 'sincerely regrets its failure.'
Josh Gerstein

The National Security Agency destroyed surveillance data it pledged to preserve in connection with pending lawsuits and apparently never took some of the steps it told a federal court it had taken to make sure the information wasn’t destroyed, according to recent court filings.

Word of the NSA’s foul-up is emerging just as Congress has extended for six years the legal authority the agency uses for much of its surveillance work conducted through U.S. internet providers and tech firms. President Donald Trump signed that measure into law Friday.

Since 2007, the NSA has been under court orders to preserve data about certain of its surveillance efforts that came under legal attack following disclosures that President George W. Bush ordered warrantless wiretapping of international communications after the 2001 terrorist attacks on the U.S. In addition, the agency has made a series of representations in court over the years about how it is complying with its duties.

However, the NSA told U.S. District Court Judge Jeffrey White in a filing on Thursday night and another little-noticed submission last year that the agency did not preserve the content of internet communications intercepted between 2001 and 2007 under the program Bush ordered. To make matters worse, backup tapes that might have mitigated the failure were erased in 2009, 2011 and 2016, the NSA said.

“The NSA sincerely regrets its failure to prevent the deletion of this data,” NSA’s deputy director of capabilities, identified publicly as “Elizabeth B.,” wrote in a declaration filed in October. “NSA senior management is fully aware of this failure, and the Agency is committed to taking swift action to respond to the loss of this data.”

In the update Thursday, another NSA official said the data were deleted during a broad, housecleaning effort aimed at making space for incoming information.

“The NSA’s review to date reveals that this [Presidential Surveillance Program] Internet content data was not specifically targeted for deletion,” wrote the official, identified as “Dr. Mark O,” “but rather the PSP Internet content data matched criteria that were broadly used to delete data of a certain type … in response to mission requirements to free-up space and improve performance of the [redacted] back-up system. The NSA is still investigating how these deletions came about given the preservation obligations extant at the time. The NSA, however, has no reason to believe at this time that PSP Internet content data was specifically targeted for deletion.”

An NSA spokesman declined to comment on Friday.

Defiance of a court order can result in civil or criminal contempt charges, as well as sanctions against the party responsible. So far, no one involved appears to have asked White to impose any punishment or sanction on the NSA over the newly disclosed episodes, although the details of what happened are still emerging.

“It’s really disappointing,” said David Greene, an attorney with the Electronic Frontier Foundation, which has been leading the prolonged litigation over the program in federal court in San Francisco. “The obligation’s been in place for a really long time now. … We had a major dust-up about it just a few years ago. This is definitely something that should’ve been found sooner.”

The last legal showdown over the issue may have actually compounded the NSA’s problems. In May 2014, an NSA official known as “Miriam P.” assured the court that the data were safe.

The NSA is “preserving magnetic/digital tapes of the Internet content intercepted under the [PSP] since the inception of the program,” she wrote, adding that “the NSA has stored these tapes in the offices of its General Counsel.”

The agency now says, “regrettably,” that the statement “may have been only partially accurate when made.”

The latest NSA filing says the ongoing investigation indicates that officials did a “physical inspection” in 2014 to confirm the tapes’ presence in the counsel’s office storage space. However, “those tapes largely concerned metadata,” not the content of communications the NSA intercepted.

The NSA says the impact of the misstatement and the deletion on the litigation should be “limited” because it has found back-ups of some content from about four months in 2003 and because it has a larger set of metadata from 2004 to 2007. That metadata should give a strong indication of whether the plaintiffs in the suits had their communications captured by the NSA, even if the communications themselves may be lost, the filings indicate. The NSA is also using “extraordinary” efforts to recover the data from tapes that were reused, it said.

Asked why the Electronic Frontier Foundation hasn’t publicized the episode, Greene said his group was waiting for the NSA to turn over data that the plaintiffs in the suits have demanded before considering next steps regarding the spy agency’s failure to maintain the records it said it was keeping.

“We don't know exactly how bad it is,” the lawyer said, adding: “Even if you take them at their word that this was just an honest mistake, what it shows is despite your best intention to comply with important restrictions, it can be really difficult to implement. … It shows that with the really tremendous volume of information they’re vacuuming up, it is impossible to be meticulous.”
https://www.politico.com/story/2018/...ce-data-351730





Trump Signs Bill Renewing NSA's Internet Surveillance Program
Dustin Volz

U.S. President Donald Trump on Friday said he signed into law a bill renewing the National Security Agency’s warrantless internet surveillance program, sealing a defeat for digital privacy advocates.

“Just signed 702 Bill to reauthorize foreign intelligence collection,” Trump wrote on Twitter, referring to legislation passed by the U.S. Congress that extends Section 702 of the Foreign Intelligence Surveillance Act (FISA).

The law renews for six years and with minimal changes the National Security Agency (NSA) program, which gathers information from foreigners overseas but incidentally collects an unknown amount of communications belonging to Americans.

The measure easily passed the U.S. House of Representatives last week despite mixed signals posted on Twitter by Trump, and it narrowly survived a filibuster attempt in the Senate earlier this week in a vote that crossed party lines. The bill had drawn opposition from a coalition of privacy-minded Democrats and libertarian Republicans.

In his tweet on Friday, Trump attempted to clarify why he signed the bill, even as he repeated an unsubstantiated claim that his Democratic predecessor, Barack Obama, ordered intelligence agencies to eavesdrop on Trump’s 2016 Republican presidential campaign.

“This is NOT the same FISA law that was so wrongly abused during the election,” Trump wrote. “I will always do the right thing for our country and put the safety of the American people first!”

Last September, the U.S. Justice Department said in a court filing that it had no evidence to support Trump’s claim about improper surveillance during the campaign.

Without Trump’s signature, Section 702 had been set to expire on Friday, though intelligence officials had said the surveillance program could continue to operate until April.

Under the law, the NSA is allowed to eavesdrop on vast amounts of digital communications from foreigners living outside the United States via U.S. companies like Facebook Inc, Verizon Communications Inc and Alphabet Inc’s Google.

But the program also incidentally scoops up Americans’ communications, including when they communicate with a foreign target living overseas, and agencies can search those messages without a warrant.

The White House, U.S. intelligence agencies and congressional Republican leaders have said the program is indispensable to national security, vital to protecting U.S. allies and needs little or no revision.

Privacy advocates say it allows the NSA and other intelligence agencies to grab data belonging to Americans in a way that represents an affront to the U.S. Constitution.

Reporting by Dustin Volz; Editing by Sandra Maler and Jonathan Oatis
https://uk.reuters.com/article/us-us...-idUKKBN1F82MK





BitTorrent Flaw Could Let Hackers Take Control of Windows, Linux PCs

Google's Project Zero uncovers proof-of-concept attack
Nicholas Fearn

GOOGLE'S PROJECT ZERO has uncovered a "critical flaw" in the Transmission BitTorrent app that could give cybercrooks complete control of users' computers.

According to Project Zero, the client is vulnerable to a DNS rebinding attack: a malicious website tricks the victim's browser into treating the attacker's domain as if it pointed at the local machine, letting the site send requests to Transmission's RPC interface on port 9091, requests that the browser's same-origin policy would (and should) ordinarily block.

The flaw could enable a range of attacks, from silently rewriting the client's settings up to remote code execution, and the proof of concept works in both Chrome and Firefox on Windows and Linux PCs. Other browsers are almost certainly vulnerable too.
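
To make the mechanics concrete, below is a rough sketch in Python of the JSON-RPC exchange a rebinding page could drive once the attacker's throwaway domain re-resolves to 127.0.0.1. The real proof of concept runs as JavaScript in the victim's browser; this version only illustrates Transmission's documented RPC flow (the 409 session-id handshake and a "session-set" call), and the specific values shown are made up for illustration.

```python
# Sketch: the JSON-RPC exchange a DNS-rebinding page could drive once
# the attacker's hostname resolves to 127.0.0.1. Illustrative Python;
# the actual PoC is browser JavaScript, and the path/values are made up.
import json
import urllib.request
from urllib.error import HTTPError

RPC_URL = "http://127.0.0.1:9091/transmission/rpc"

def transmission_rpc(payload: dict) -> bytes:
    """One RPC call, handling Transmission's 409 session-id handshake."""
    data = json.dumps(payload).encode()
    session_id = ""
    for _ in range(2):
        req = urllib.request.Request(RPC_URL, data=data)
        req.add_header("X-Transmission-Session-Id", session_id)
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()
        except HTTPError as err:
            if err.code != 409:
                raise
            # The first attempt is refused with 409; the reply carries
            # the token to echo back on the retry.
            session_id = err.headers.get("X-Transmission-Session-Id", "")
    raise RuntimeError("could not obtain a session id")

# Quietly repointing the download directory is the kind of settings
# change that lets a later download overwrite files the victim trusts.
print(transmission_rpc({"method": "session-set",
                        "arguments": {"download-dir": "/tmp/demo"}}))
```

Note that the session-id token is a cross-site request forgery defence, not a rebinding defence: once the attacker's domain resolves to the loopback address, the browser treats the RPC port as same-origin, so a hostile page can read the token from the 409 response and replay it.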

Writing on Twitter, Project Zero researcher Tavis Ormandy said this is the "first of a few remote code execution flaws in various popular torrent clients". Before publishing details of this attack, Google Project Zero reached out to Transmission, which has since released a patch.

First of a few remote code execution flaws in various popular torrent clients, here is a DNS rebinding vulnerability Transmission, resulting in arbitrary remote code execution. https://t.co/kAv9eWfXlG
— Tavis Ormandy (@taviso) January 11, 2018

Publicising details of the attack appears to have done the trick of forcing the developers to rush out a patch, but the fix has not yet reached all of the downstream builds and distributions that package Transmission, Ormandy warned.
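
The general shape of the fix is to stop trusting the network layer and look at what the browser reports instead. Below is a minimal sketch of that idea for a hypothetical local RPC service - it is not Transmission's actual patch, which lives in the project's C code - rejecting any request whose Host header does not name the local machine, since a rebound request still carries the attacker's hostname there.

```python
# Minimal sketch of a Host-header whitelist, the general shape of the
# defence against DNS rebinding. Hypothetical service, not
# Transmission's real code.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_HOSTS = {"localhost", "127.0.0.1"}  # IPv6 literals need extra care

class RPCHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # "example.attacker.com:9091" -> "example.attacker.com"
        host = (self.headers.get("Host") or "").rsplit(":", 1)[0]
        if host not in ALLOWED_HOSTS:
            # A rebound request reaches 127.0.0.1 at the socket level,
            # but the browser still sends the attacker's hostname here.
            self.send_error(403, "Host not allowed")
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"result": "success"}')

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 9091), RPCHandler).serve_forever()
```

Binding the listener to 127.0.0.1 alone does not help here, because rebound requests genuinely arrive from the victim's own browser on the loopback interface.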

In a follow-up to his original November post warning of a security vulnerability, Ormandy last week wrote: "I'm finding it frustrating that the Transmission developers are not responding on their private security list, I suggested moving this into the open so that distributions can apply the patch independently. I suspect they won't reply, but let's see.

"I've never had an open-source project take this long to fix a vulnerability before, so I usually don't even mention the 90-day limit if the vulnerability is in an open source project.

"I would say the average response time is measured in hours rather months if we're talking about open source."

Transmission is one of a number of BitTorrent peer-to-peer file sharing clients.

Rather than relying on a centralised hub-and-spoke system for distributing files and data, BitTorrent decentralises sharing: files are advertised through the software that implements the protocol, and anyone in the network who wants a file downloads it in 'pieces' from whichever source or sources hold them.
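
As a small illustration of why downloading pieces from arbitrary strangers is safe at the data level, the sketch below shows BitTorrent-style piece verification in Python; the piece size and data are arbitrary, but per-piece SHA-1 hashes published in a torrent's metadata are how real clients validate what untrusted peers send.

```python
# Illustrative sketch of BitTorrent-style piece verification.
# Piece size and data are made up; real torrents carry these SHA-1
# hashes in the .torrent file's "pieces" field.
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB, a typical piece size

def piece_hashes(data: bytes) -> list[bytes]:
    """Hash every fixed-size piece of the shared file."""
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
            for i in range(0, len(data), PIECE_SIZE)]

def accept_piece(piece: bytes, expected: bytes) -> bool:
    """A client checks each downloaded piece against the published
    hash before writing it, whichever peer it came from."""
    return hashlib.sha1(piece).digest() == expected

# Example: a 1 MiB "file" split into four verifiable pieces.
blob = b"x" * (4 * PIECE_SIZE)
hashes = piece_hashes(blob)
assert all(accept_piece(blob[i:i + PIECE_SIZE], h)
           for i, h in zip(range(0, len(blob), PIECE_SIZE), hashes))
```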

Peer-to-peer file sharing, however, has gained a reputation as a distribution mechanism for pirated software, television shows and films.

However, the protocol is also used for plenty of legitimate purposes: vendors distribute software and other large downloads over BitTorrent to reduce the strain that more centralised distribution systems place on networks.
https://www.theinquirer.net/inquirer...dows-linux-pcs





Shiver Me Timbers! New Signs Pirates Liked Booty - and Books
Martha Waggoner

Dead men tell no tales, but there's new evidence that somebody aboard the pirate Blackbeard's flagship harbored books among the booty.

In an unusual find, researchers have discovered shreds of paper bearing legible printing that somehow survived three centuries underwater on the sunken vessel. And after more than a year of research that ranged as far as Scotland, they managed to identify them as fragments of a book about nautical voyages published in the early 1700s.

Conservators for Blackbeard's ship the Queen Anne's Revenge found the 16 fragments of paper wedged inside the chamber for a breech-loading cannon, with the largest piece being the size of a quarter.

The Queen Anne's Revenge had been a French slave ship when Blackbeard captured it in 1717 and renamed it. The vessel ran aground in Beaufort, in what was then the colony of North Carolina, in June 1718. Volunteers with the Royal Navy killed Blackbeard in Ocracoke Inlet that same year.

Tens of thousands of artifacts have been recovered since Florida-based research firm Intersal Inc. located the shipwreck off the North Carolina coast in 1996, but few, if any, are as surprising as pieces of paper. To find paper in a 300-year-old shipwreck in warm waters is "almost unheard of," said Erik Farrell, a conservator at the QAR Conservation Lab in Greenville.

Eventually, the conservators determined that the words "south" and "fathom" were in the text, suggesting a maritime or navigational book. But one word, Hilo, stood out because it was both capitalized and in italics, said Kimberly Kenyon, also a conservator at the lab.

They turned to Johanna Green, a specialist in the history of printed text at the University of Glasgow, who pointed them to the Spanish settlement of Ilo - or Hilo - on the coast of Peru. The fragments eventually were determined to be from a 1712 first edition of a book by Capt. Edward Cooke titled "A Voyage to the South Seas, and Round the World, Perform'd in the years 1708, 1709, 1710 and 1711."

It's impossible to say who aboard Blackbeard's ship would have been reading the voyage narrative - a form popular in England in the 17th and 18th centuries - or whether it belonged to a pirate or some terrified captive. But some pirates were known to be literate, Kenyon said.

For example, Stede Bonnet, the "gentleman pirate" who joined Blackbeard in 1717, had his own library. It's not known if he brought his books on the Queen Anne's Revenge.

A history of pirates written in 1724 mentions a journal belonging to Blackbeard that was taken when he was killed. And when Blackbeard captured a ship called the Margaret in December 1717, the list of items taken from the ship included books, Farrell said.

"They were literate men," Kenyon said. "People always assume pirates are ruffians from bad backgrounds, and that wasn't always the case."

The survival of the paper fragments is perhaps even more unusual than their existence aboard the pirate vessel.

The chamber in which they were found was a separate piece of a breech-loading swivel gun that was likely kept on the top deck because it was used as an anti-personnel weapon, Farrell said. Conservators don't have the cannon itself, which likely was salvaged or stolen when the Queen Anne's Revenge ran aground. In cannons of that period, "wadding" material such as cloth or paper would usually be stuffed behind a cannonball. So it's also possible someone simply tore up the book, unread, to use as wadding.

Conservators discovered the paper fragments in May 2016, stuffed inside the chamber along with pieces of fabric, after removing a wooden plug so they could clean it, Farrell said. That mass was removed easily enough, but prying the fragments from the fabric was more tedious and time-consuming, he said.

The combination of fabric and the plug likely protected the paper, which normally would have disintegrated in water, Farrell said.

But the ability to read doesn't change the evil character of pirates, who ransacked, raped and killed.

"The fact that they're literate doesn't mean they're not terrible, marauding people," Farrell said. "It just adds some nuance."
http://www.wfsb.com/story/37268243/s...ooty-and-books

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

January 13th, January 6th, December 30th, December 23rd

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing