P2P-Zone  

08-05-19, 06:28 AM   #1
JackSpratts
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - May 11th, ’19

Since 2002

"Hansmeier was greedy, arrogant, devious, mendacious, and consistently positioned other people to be damaged by his conduct, even as he enjoyed the proceeds of the scheme he orchestrated." – US Prosecutors

May 11th, 2019




Fox Rothschild Parts Ways With Partner, Porn Client

Lincoln Bandlow has left Fox Rothschild after working on more than 2,000 lawsuits on behalf of a super-litigious porn production company.
Roy Strom

Lincoln Bandlow, who has represented a pornography production company in more than 2,000 copyright infringement lawsuits, has left Fox Rothschild, the law firm where he had been a partner since 2015.

Bandlow’s work representing Strike 3 Holdings, which makes porn films under the brands “Tushy,” “Vixen” and “Blacked,” has come under increased scrutiny from judges around the country. The company’s file-and-settle strategy has led at least one judge to label it a “copyright troll,” and also led to sanctions against Bandlow for failing to meet court deadlines. At the same time, Bandlow continues to battle a challenge involving the evidence Strike 3’s lawsuits rely upon.

Los Angeles-based Bandlow has opened up his own law firm, The Law Offices of Lincoln Bandlow, which is the counsel of record listed for nearly 40 Strike 3 cases, according to a search of federal court dockets on legal analytics site Lex Machina.

Bandlow did not immediately respond to a request for comment, but he has defended the lawsuits against claims of copyright trolling in the past.

“It’s good content, and a quality company that pays its employees well,” Bandlow said in a previous interview with ALM.

Fox Rothschild lawyers who worked alongside Bandlow on Strike 3 have filed motions in many dockets requesting to withdraw as counsel. A spokeswoman for Philadelphia-based Fox Rothschild said the firm had no comment on Bandlow’s departure or on whether it had been uncomfortable having a partner involved with the litigation.

U.S. District Judge Royce Lamberth wrote in a November opinion that Strike 3 Holdings’ cases were “a high-tech legal shakedown” seeking to “treat this court not as a citadel of justice, but as an ATM.”

Bandlow was sanctioned $750 in February by a U.S. magistrate judge who chastised the lawyer for missing a number of court deadlines. In an effort to avoid those sanctions, Bandlow cited a temporary lack of resources at Fox Rothschild, saying staffing shortages during the Christmas holiday season were to blame.

Bandlow was also ordered to pay opposing counsel’s fees amounting to $700 by U.S. District Judge Thomas Zilly after fighting efforts to depose Greg Lansky, a co-owner and the public face of Strike 3 who files a declaration in each case describing damage done to his business by Internet pirating.

The defense in that case won another ruling on Friday when Zilly ordered that Strike 3 must produce the underlying data that shows U.S. residents have illegally downloaded the company’s videos. Zilly said Strike 3 also must hand over some form of evidence to prove its claims that the mass pirating of its content is bedeviling the company’s finances.

In a separate case involving pornography and mass copyright litigation, Minnesota lawyer Paul Hansmeier is set to be sentenced in June after he pleaded guilty to federal fraud charges.

Hansmeier had been behind Prenda Law, which prosecutors say filmed its own pornography movies and uploaded them to the internet before tracking and suing users who downloaded the films. For this alleged “honey-potting” scheme, federal prosecutors are seeking a prison sentence of more than 12 years for Hansmeier.

“Hansmeier was greedy, arrogant, devious, mendacious, and consistently positioned other people to be damaged by his conduct, even as he enjoyed the proceeds of the scheme he orchestrated,” federal prosecutors wrote in a sentencing memorandum last month.

Individuals who think they may have made payments as a result of Hansmeier’s scam are encouraged by the U.S. Attorney for the District of Minnesota to file claims for restitution.
https://www.law.com/thelegalintellig...r-porn-client/





The CIA Sets Up Shop on Tor, the Anonymous Internet
Lily Hay Newman

The anonymity service Tor has grown in popularity around the world over the past few years, but it has also long been a tool for intelligence agencies and clandestine communications—not to mention endless cat-and-mouse games between law enforcement and criminals. But now, the CIA is staking out a more public presence there.

On Tuesday, the CIA announced its own Tor "onion service," so that people around the world can browse the agency's website anonymously—or, you know, send history-altering tips. Tor is an anonymity network that you access through a special browser, like the Tor Browser, and that uses its own URLs. The service protects your IP address and browsing activity by encrypting your traffic and bouncing it through a series of waypoints, making it very difficult to trace.
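
For a concrete sense of what that bouncing looks like from the client side, here is a minimal sketch of fetching a page over Tor, assuming a local Tor daemon is listening on its default SOCKS port (9050) and that the requests package is installed with its SOCKS extra (pip install requests[socks]):

```python
# Minimal sketch: fetch a page through a locally running Tor daemon.
# Assumes Tor is listening on its default SOCKS port, 127.0.0.1:9050.
import requests

# "socks5h" (rather than "socks5") asks the proxy to resolve hostnames,
# so DNS lookups don't leak outside the Tor circuit.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether the request arrived via Tor.
resp = requests.get("https://check.torproject.org/", proxies=TOR_PROXY, timeout=60)
print(resp.status_code)
print("Congratulations" in resp.text)  # True when the exit relay is recognized
```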

Over the years, several organizations have made so-called onion sites, dedicated versions of their websites that they configure and host to be accessible through the Tor anonymity network. Also called onion services, these include one that Facebook launched in 2014 and one that The New York Times added in 2017. The National Police of the Netherlands even has an onion service related to its dark-web criminal takedown operations. But the CIA is the first intelligence agency to make the leap.

"Our global mission demands that individuals can access us securely from anywhere," CIA director of public affairs Brittany Bramell told WIRED ahead of the launch in a statement. "Creating an onion site is just one of many ways we’re going where people are."

Everything from the CIA's main website is available on its onion site, including instructions for how to contact the CIA and a digital form for submitting tips. There are also job listings, the agency's archival material including its World Factbook and, of course, the Kids' Zone. The main reasons to actually access the CIA's site through Tor seem to be sending information to the agency with more robust anonymity protection or quietly applying for a job there.

The CIA's site is a Version 3 onion service, meaning it has the improved cryptographic algorithms and stronger authentication the Tor Project launched at the end of 2017. In general, it works the same as Version 2 onion sites except it has a longer address. Instead of something like "nytimes3xbfgragh.onion," you reach the CIA's onion site at "ciadotgov4sjwlzihbbgxnqg3xiyrg7so2r2o3lt5wz5ypk4sxyjstad.onion."
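
The length difference is itself diagnostic: a Version 2 label is 16 base32 characters (a truncated hash of an RSA key), while a Version 3 label is 56 characters, encoding the service's full ed25519 public key plus a two-byte checksum and a version byte. A small illustrative check of the address format, written for this digest rather than taken from the article:

```python
# Illustrative sketch: tell a Version 2 onion address from a Version 3 one
# by label length alone. v2 labels are 16 base32 characters; v3 labels are
# 56 (35 bytes of key, checksum, and version encode to 56 base32 chars).
import re

def onion_version(hostname: str):
    """Return 2, 3, or None for a hostname like 'example.onion'."""
    label = hostname.lower().rstrip(".")
    if label.endswith(".onion"):
        label = label[:-len(".onion")]
    if re.fullmatch(r"[a-z2-7]{16}", label):
        return 2
    if re.fullmatch(r"[a-z2-7]{56}", label):
        return 3
    return None  # not a well-formed onion label

print(onion_version("nytimes3xbfgragh.onion"))  # 2
print(onion_version("ciadotgov4sjwlzihbbgxnqg3xiyrg7so2r2o3lt5wz5ypk4sxyjstad.onion"))  # 3
```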

Tor was largely created through funding from the United States government in the 1990s and early 2000s, including from the Naval Research Lab and the Defense Advanced Research Projects Agency. The anonymity service has been open source since its public release in 2002, and it transitioned to being overseen by a nonprofit, dubbed the Tor Project, in 2006.

It can seem confusing at first that the US government would fund the creation of a tool that has since been used by criminals and foreign governments to conduct secret operations. But the US government can benefit from using the anonymity service in the same way these groups do. Still, stories and investigations abound about Tor's origins, including one persistent rumor that the CIA funded Tor's creation through covert channels. The Tor Project says that it has always been transparent about its funding sources and that it has no past or present connection to the CIA.

"We make free and open source software that’s available for anyone to use—and that includes the CIA," says Stephanie Whited, communications director for the Tor Project. "We don’t choose who uses our software. We want to see onion services adopted more frequently, and we think there’s a trend moving in that direction."

The CIA could have ulterior motives for establishing an onion site, but perhaps the agency is in fact simply trying to offer more ways for people to contact it and interact with its public resources. If nothing else, the project's tagline pokes fun at its inherent ambiguity: "Onions have layers, so do we."
https://www.wired.com/story/cia-sets-up-shop-on-tor/





FBI has Seized Deep Dot Web and Arrested its Administrators
Zack Whittaker

The FBI has arrested several people suspected of involvement in running Deep Dot Web, a website facilitating access to dark web sites and marketplaces.

Two suspects were arrested in Tel Aviv and Ashdod, according to Israel’s Tel Aviv Police, which confirmed the arrests in a statement earlier in the day. Local media first reported the arrests.

Arrests were also made in France, Germany and the Netherlands. A source familiar with the operation said a site administrator was arrested in Brazil.

Deep Dot Web is said to have made millions of dollars in commission by offering referral links to dark web marketplaces, accessible only at .onion domains over the Tor Network. Tor bounces internet traffic through a series of random relay servers dotted across the world, making it near-impossible to trace the user.

Its .onion site displayed a seizure notice posted by the FBI, citing U.S. money laundering laws. Its clear web domain no longer loads.

Tuesday’s arrests follow an operation by U.S. and German authorities earlier in the week that took down the Wall Street Market, one of the largest remaining dark web marketplaces. Thousands of sellers sold drugs, weapons and stolen credentials used to break into online accounts.

Efforts to reach Deep Dot Web over encrypted chat were unsuccessful.

A spokesperson for the Justice Department did not have comment, while the FBI declined to comment. A spokesperson for the Israeli consulate in New York did not respond to a request for comment.
https://techcrunch.com/2019/05/07/deep-dot-web-arrests/





Apple CEO Tim Cook Says Digital Privacy 'has Become a Crisis'
Lisa Eadicicco

• Apple CEO Tim Cook told ABC News in an interview that privacy has become a "crisis."
• Cook has advocated for government regulation that would protect consumer privacy in the past.
• Cook also addressed concerns about the amount of time consumers spend on mobile devices, saying he doesn't want consumers spending too much time on their iPhones.

Apple CEO Tim Cook called online privacy a "crisis" in an interview with ABC News, reaffirming the company's stance on privacy as companies like Facebook and Google have come under increased scrutiny regarding their handling of consumer data.

"Privacy in itself has become a crisis," Cook told ABC's Diane Sawyer. "It's of that proportion — a crisis."

Unlike companies such as Google and Facebook, Apple's business isn't focused on advertising, and therefore it does not benefit from collecting data to improve ad targeting.

"You are not our product," he said. "Our products are iPhones and iPads. We treasure your data. We wanna help you keep it private and keep it safe."

Cook cited the vast amount of personal information available online when explaining why privacy has become such an important issue to address. "The people who track on the internet know a lot more about you than if somebody's looking in your window," he said. "A lot more."

Cook is known to be a vocal advocate for consumer privacy. In January, he published an op-ed in Time calling for government regulation that would make it more difficult for companies to collect data while providing more transparency for consumers. He also called for a crackdown on data brokers that transfer consumer data between companies. Before that, he appeared on Vice News Tonight and voiced his support for government regulation.

Apple doesn't benefit from gathering data about consumers, as companies with booming advertising businesses would. But it does make money from its partnership with Google that secures its search engine as the default on the iPhone's Safari browser. Apple and Google haven't disclosed the terms of their agreement, but Goldman Sachs analysts estimated in September that Google could pay Apple as much as $12 billion in 2019.

Sawyer pointed out that Apple profits from its deal with Google, which has come under scrutiny regarding its data collection policies and privacy concerns. Cook said it works with Google "because we believe it's the best browser."

Although Cook described privacy as a crisis, he added that he believes it's a "fixable" problem. "And we just have to, like we've done every other point in time, when we get together it's amazing what we can do. And we very much are an ally in that fight."

Cook also addressed the mounting concerns about screen time in his interview with ABC News, saying he doesn't want consumers using their iPhones too much. "But I don't want you using the product a lot," he said. "In fact, if you're using it a lot, there's probably something we should do to make your use more productive."

The comments come after a report from The New York Times found that Apple had removed apps that help parents manage screen time from the App Store following the release of its own screen time management feature for the iPhone in September. Apple then published a statement saying it removed those apps because they were using a technology known as mobile device management, or MDM, that's intended for businesses that need to handle sensitive data on employee devices.

Cook told ABC News he's open to suggestions from parents when it comes to screen time management and parental controls, saying that it's "something that we together need to fix."
https://www.newstimes.com/technology...is-6160611.php





Verizon, T-Mobile, Sprint, and AT&T Hit With Class Action Lawsuit Over Selling Customers’ Location Data

The lawsuits come after a Motherboard investigation showed AT&T, Sprint, and T-Mobile sold phone location data that ended up with bounty hunters, and The New York Times covered an instance of Verizon selling data.
Joseph Cox

On Thursday, lawyers filed lawsuits against four of the country’s major telecommunications companies for their role in various location data scandals uncovered by Motherboard, Senator Ron Wyden, and The New York Times. Bloomberg Law was first to report the lawsuits.

The news provides the first instance of individual telco customers pushing to be awarded damages after Motherboard revealed in January that AT&T, T-Mobile, and Sprint had all sold access to the real-time location of their customers’ phones to a network of middlemen companies; that data ultimately ended up in the hands of bounty hunters. Motherboard previously paid a source $300 to successfully geolocate a T-Mobile phone through this supply chain of data.

“Through its negligent and deliberate acts, including inexplicable failures to follow its own Privacy Policy, T-Mobile permitted access to Plaintiffs and Class Members’ CPI and CPNI,” the complaint against T-Mobile reads, referring to “confidential proprietary information” and “customer proprietary network information,” the latter of which includes location data.

The complaints against T-Mobile, AT&T, and Sprint are largely identical, and all also mention how each carrier ultimately provided data to a company called Securus, which allowed low-level law enforcement to locate phones without a warrant, as The New York Times first reported in 2018. The complaint against Verizon focuses just on the Securus case. However, Motherboard previously reported how Verizon sold data that ended up in the hands of another company, called Captira, which then sold it to the bail bond industry.

Do you know anything else about location data selling? You can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com .

The class in each lawsuit covers an approximation of the telcos’ individual customers between April 30, 2015 and February 15, 2019: 100 million for Verizon, 100 million for AT&T, 50 million for T-Mobile, and 50 million for Sprint. Each lawsuit is filed in the name of at least one customer for each telco, and they are seeking unspecified damages to be determined at trial, the complaints read.

The thrust of the complaints centers on whether each telco violated Section 222 of the Federal Communications Act (FCA), which says that the companies are obligated to protect the CPI and CPNI of their customers, and whether the Plaintiffs’ and Class Members’ CPNI was accessible to unauthorized third parties during the relevant period.

The suits were filed by Z LAW, a “consumer protection law firm,” according to its website.

“We are reviewing the legal filing and have no further comment at this time,” a Sprint spokesperson told Motherboard in an email.

“We can’t comment on pending litigation,” a T-Mobile spokesperson wrote in an email.

Verizon and AT&T did not immediately respond to a request for comment.

When Motherboard reported in January that AT&T, T-Mobile, and Sprint had sold their customer data to companies that ultimately provided it to bounty hunters and other people unauthorized to handle it, each telco said it was stopping the sale of phone location data to third parties altogether. AT&T and T-Mobile previously told Motherboard they have already done so, and Sprint said it plans to by the end of May. Verizon made its own commitment after the 2018 Securus scandal.

After Motherboard’s January investigation, 15 Senators called for the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to properly investigate the sale of phone location data to bounty hunters. The House Committee on Energy and Commerce asked FCC Chairman Ajit Pai to hold an emergency briefing on the issue; Pai refused.

Motherboard also previously reported that 250 bounty hunters had access to AT&T, T-Mobile, and Sprint phone location data from another company that catered specifically to the bail bond industry. Some of that data included highly precise assisted GPS data, which is usually reserved for 911 responders.
https://motherboard.vice.com/en_us/a...-location-data





Facebook's Effort to Stop Suicides by Monitoring your Posts Reveals a Worrisome Gap Between Tech Giants and Healthcare Experts
Erin Brodwin

• Facebook has a suicide-monitoring tool that uses machine learning to identify posts that may indicate someone is at risk of killing themselves.
• The tool was involved in sending emergency responders to locations more than 3,500 times as of last fall.
• A Harvard psychiatrist is worried the tool could worsen health problems by homing in on the wrong people or escalating mental-health crises.
• Facebook does not consider the tool to be health research and hasn't published any information on how it works or whether it's successful.

Facebook knew there was a problem when a string of people used the platform to publicly broadcast their suicides in real time.

Staff at the company had been thinking about the issue of suicide since 2009, when a cluster of suicides occurred at two high schools near the company's headquarters in Palo Alto. Then, things became personal. After the company rolled out a video livestreaming tool called "Facebook Live," several people used it to broadcast themselves taking their own lives. First it was a 14-year-old girl and then a 33-year-old man, both in the US. Later, in the fall, a young man in Turkey broadcast himself dying by suicide.

Facebook, led by Chief Executive Officer Mark Zuckerberg, tasked its safety-and-security team with doing something about it.

The result was Facebook's suicide-monitoring algorithm, which has been running since 2017 and was involved in sending emergency responders to people more than 3,500 times as of last fall, according to the company.

Using pattern-recognition technology, the tool identifies posts and livestreams that appear to express intents of suicide. It scans the text in a post, along with the comments on it, such as "Are you OK?" When a post is ranked as potentially suicidal, it is sent first to a content moderator and then to a trained staff member tasked with notifying emergency responders.
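
Facebook has published nothing about the model itself, so code can only illustrate the general shape of the pipeline the article describes: score a post's text together with its comments, then route anything above a threshold to a human moderator rather than acting automatically. The sketch below is a purely hypothetical stand-in; the phrase list, scorer, and threshold are all invented for illustration and are not Facebook's system:

```python
# Hypothetical sketch of the routing logic described above -- NOT Facebook's
# actual (unpublished) system. The scorer is a crude keyword stand-in; the
# point is the pipeline: score text plus comments, then send high-scoring
# posts to human review first, never straight to emergency responders.
from dataclasses import dataclass, field

RISK_PHRASES = ("are you ok", "please don't", "i want to end")  # illustrative only

@dataclass
class Post:
    text: str
    comments: list = field(default_factory=list)

def score_post(post: Post) -> float:
    """Stand-in scorer: fraction of risk phrases seen in the post or its comments."""
    blob = " ".join([post.text, *post.comments]).lower()
    hits = sum(phrase in blob for phrase in RISK_PHRASES)
    return hits / len(RISK_PHRASES)

def route(post: Post, threshold: float = 0.3) -> str:
    # Per the article, a flagged post goes to a content moderator first;
    # only a trained staff member may then notify emergency responders.
    return "send_to_content_moderator" if score_post(post) >= threshold else "no_action"

print(route(Post("feeling down tonight", ["are you ok??"])))  # send_to_content_moderator
```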

Harvard psychiatrist and tech consultant John Torous only learned of the tool's existence last year, from a journalist. He said he's concerned it may be doing more harm than good.

'We as the public are partaking in this grand experiment'

"We as the public are partaking in this grand experiment, but we don't know if it's useful or not," Torous told Business Insider last week.

Torous has spent years collaborating with tech giants like Microsoft on scientific research. The reason he hadn't heard about Facebook's suicide-monitoring algorithm was because Facebook hasn't shared information about the tool with researchers such as him, or with the broader medical and scientific community.

In fact, Facebook hasn't published any data on how its tool works. The company's view is that the tool isn't a health product or research initiative but more akin to calling for help if you see someone in trouble in a public space.

"We are in the business of connecting people with supportive communities. We are not mental health providers," Antigone Davis, Facebook's global head of safety, previously told Business Insider.

But without public information on the tool, Torous said big questions about Facebook's suicide-monitoring tool are impossible to answer. He is worried the tool might home in on the wrong users, discourage frank discussions about mental health on the platform, or escalate, or even create, a mental-health crisis where there wasn't one.

In sum, Torous said Facebook's use of the tool could be hurting more people than it's helping.

"It's one thing for an academic or a company to say this will or won't work. But you're not seeing any on-the-ground peer-reviewed evidence," Torous said. "It's concerning. It kind of has that Theranos feel."

Clinicians and companies disagree on the definition of health research

Facebook's suicide-monitoring tool is just one example of how the barriers that separate tech from healthcare are crumbling. A growing array of products and services — think Apple Watch, Amazon's Alexa, and even the latest meditation app — straddle the gap between health innovation and tech disruption. Clinicians see red flags. Tech leaders see revolution.

"There's almost this implicit assumption that they play by a different set of rules," Torous said.

At Facebook, the safety and security team spoke with experts at several suicide-prevention nonprofits, including Daniel Reidenberg, the founder of Save.org. Reidenberg told Business Insider that he helped Facebook create a solution by sharing his experiences, bringing in people who'd struggled personally with suicide, and having them share what helped them.

Reidenberg told Business Insider that he thinks Facebook is doing good work in suicide prevention, but because its efforts are in uncharted waters, he thinks everyday issues will arise with the tool. He disagrees with Torous' view that the efforts are health research.

"There isn't any company that's more forward-thinking in this area," Reidenberg said.

Still, it is unclear how well Facebook's suicide-monitoring tool works. Because of privacy issues, emergency responders can't tell Facebook what happened at the scene of a potential suicide, Davis said. In other words, emergency responders can't tell Facebook if they reached the scene too late to stop a death, showed up to the wrong place, or arrived only to learn there was no real problem.

Torous, a psychiatrist who's familiar with the thorny issues in predicting suicide, is skeptical of how that will play out with regard to the suicide monitoring tool. He points to a review of 17 studies in which researchers analyzed 64 different suicide-prediction models and concluded that the models had almost no ability to successfully predict a suicide attempt.

"We know Facebook built it and they're using it, but we don't really know if it's accurate, if it's flagging the right or wrong people, or if it's flagging things too early or too late," Torous said.
https://www.sfgate.com/technology/bu...g-13820710.php





French Telecom Giant Orange on Trial over Staff Suicides
Nicolas Vaux-Montagny

The toll is shocking: 19 suicides, 12 suicide attempts and eight cases of serious depression among employees over a three-year span at France's main telephone and internet company.

A Paris court on Monday begins a long-awaited trial accusing telecom giant Orange and seven former or current managers of moral harassment and related charges. The company — then called France Telecom — was undergoing job cuts and modernization efforts at the time of the suicides a decade ago.

On trial are the former president of France Telecom, Didier Lombard, former human resources director Olivier Barberot and former deputy executive director Louis-Pierre Wenes. It's the largest trial to date in France for moral harassment on a company-wide scale, and is expected to last two months.

The defendants are suspected of having "degraded work conditions of personnel that risked hurting their rights and dignity, altering the physical or mental health (of personnel), or compromising their professional future."

Four other officials are suspected of complicity in moral harassment.

In France, moral harassment can be punished by a year in prison and a fine of 15,000 euros ($16,790). Orange itself is also on trial, and the court could order the company to grant additional damages to each civil party in the case.

An investigation into the wave of employee suicides between 2007 and 2010 was opened following a complaint from the Sud union. At the time, Lombard allegedly referred to the deaths as "the fashion."

Lombard, who was replaced as France Telecom chief in 2010, has denied all the charges. He attributed the suicides, attempted suicides and cases of depression to "local difficulties with no links to each other" and no relation to the company's job cuts at the time.

The indictment lists the employees who took their lives or tried, some on the job.

Michel, 50, left a note about his decision to end his life on July 29, 2009, according to the prosecutor's report. Michel's note denounced "the permanent sense of urgency, overwork, absence of training, the total disorganization of the company" plus "management by terror."

"I'm taking my life because of my work at France Telecom. It's the only reason," the note said.

A month earlier, Christel, 37, slashed her veins in an apparent bid to kill herself in front of two superiors who had told her hours earlier that she would be transferred. In March 2009, 52-year-old Herve was preparing to jump from an office window, but the noise he was making drew others to his rescue; they quoted him as saying, "I'm sick of this s----- job."

Jean-Michel, a father of three children, was 53 when he threw himself in front of a train on July 2, 2008, while on the phone with two union delegates.

France Telecom, once a state-owned monopoly, transformed into a private company in the 2000s. Lombard launched a restructuring plan aimed at shedding 22,000 jobs, but most employees were still considered civil servants and so were protected from layoffs.

As it sought to reduce staff, the indictment says the company imposed "excessive and intrusive control" on employees, assigned workers to demoralizing tasks, failed to provide training, isolated staff and used "intimidation maneuvers or threats and pay cuts."

Lombard's lawyer, Jean Veil, says his client is innocent because he could not possibly know what was going on in France Telecom's vast network of more than 100,000 employees.

"Mr. Didier Lombard is suspected of harassment of people he never saw," Veil said in 2012, when Lombard was handed preliminary charges. "Now there's a surprising accusation."

___

Elaine Ganley in Paris contributed.
https://www.newstimes.com/news/medic...f-13821555.php





How the U.K. Won’t Keep Porn Away From Teens

Complying with a new law, the largest online porn company has set itself up to be the youth gatekeeper of British smut. What could go wrong?
John Herrman

Come July 15, 2019, internet users in Britain attempting to visit major pornography sites will be confronted with a question: How old are you? Then, a follow-up: Can you prove it?

They’ll have a few options. Users can verify their age online, by submitting official government IDs or credit card information. Or they can walk into a store and establish their eligibility to access porn the old-fashioned way: by handing money and identification over to a human being, at a participating store, in exchange for a pass.

The British government has touted its mandatory age check as a “world-first” that will help make Britain the “safest place in the world to be online,” particularly for children. It has been less vocal about the precise manner in which these rules will be enforced. Just a few months out, and after multiple embarrassing delays, this is very much a work in progress.

What is taking shape is an enforcement regime made up not just of actual regulators and quasi-regulators but also major pornographers. It is a system that may not only fail to accomplish the law’s stated purpose (to keep children from stumbling upon adult content), but which also risks being captured by the biggest name in online porn, a multinational streaming conglomerate called MindGeek.

How could a distinctly British moral crusade end up empowering a foreign porn monopolist?

How a Bill Becomes a Law

The age verification rule grew from a Conservative party campaign promise in 2015, and ended up tucked into what would become the Digital Economy Act 2017, a wide-ranging bundle of internet rules and regulations.

Among the bill’s consequential but stultifying provisions about telecommunications infrastructure, copyright enforcement and government data sharing, the porn rule not only remained intact but grew stronger over time (thanks in part to copious media coverage). The bill was hastily rubber-stamped before Britain’s 2017 general election, and questions about how exactly it would be enforced, as well as concerns about user privacy, were set aside to be dealt with later.

Rather than create a new regulatory body within the government, the British Department for Digital, Culture, Media & Sport outsourced the task to the British Board of Film Classification, or BBFC, a nongovernmental organization best known for assigning ratings to films, much like the Motion Picture Association in America.

“The Government designated the BBFC to be the age-verification regulator because of our expertise in regulating pornographic content and our longstanding experience of online regulation, through working with the video-on-demand industry, mobile filters and online music videos,” wrote Brittany Maher-Kirk, a spokeswoman for the BBFC, in an email.

How a Law Becomes a Product

The BBFC then did some outsourcing of its own. The organization, it turned out, would not be creating or endorsing a single age verification system. Instead, it would lay out guidelines for external age verification services run by private firms. Commercial porn sites would be required to install such a system under threat of being banned, at the direction of the BBFC, by major internet service providers.

A user sharing credit card information, or a driver’s license, in this system, would not be handing it over to an agency of the government, or even to an organization deputized by the government, but to a private firm of the porn site’s choosing, carrying, perhaps, a BBFC stamp of approval.

But what can the BBFC do to ensure private data is handled correctly? “They don’t have any experience regulating an industry for privacy concerns,” said Jim Killock, the executive director of the Open Rights Group, a nonprofit user-rights organization that has been critical of the Digital Economy Act. Nor, he said, do they have much power to do so. On its website, the BBFC says that data privacy “is so important that it has its own regulator,” the Information Commissioner’s Office, “which has the expertise and powers to apply strict data protection standards.”

“We have a Memorandum of Understanding with the ICO,” the site says, “but we don’t duplicate their work.”

The process will certainly feel conspicuously risky: a user will attempt to go to a porn site; that user will hit a wall; that user will be asked to supply proof of his or her age to get through.

As for whether the systems actually keep out kids, the BBFC will be keeping an eye on that, somehow. “Checks will be carried out by our Compliance team and we will use technology to maximize efficiency. There will be regular ongoing checks,” wrote Ms. Maher-Kirk, regarding third-party age verification providers.

The organization has also developed a “voluntary, nonstatutory certification scheme to ensure age-verification providers maintain high standards of privacy and data security,” Ms. Maher-Kirk said. The certification, which the BBFC said was developed with the help of a cybersecurity firm called NCC Group, will be represented to users by the presence of a reassuring green “V.”

The Inmates and the Asylum

The BBFC says it will soon publish a list of recommended age verification services. Existing solutions include AgeChecked, which markets its tool to gambling sites and e-cigarette retailers, among others, and AgePass, which claims to store its data on a “private blockchain.” One option, however, will start with an enormous advantage.

“AgeID is free for the user and very affordable for the website owner, providing a seamless experience for both sides,” reads the AgeID website. “Our single sign-on solution means users can verify once, then simply login to any one of the thousands of sites protected by AgeID on launch, without the hassle of re-verifying every time.”

AgeID is precisely the sort of solution the BBFC is demanding. It also shares an office with its parent company, MindGeek.

MindGeek’s holdings include Pornhub, Redtube and YouPorn, among dozens of other sites; it has its own porn production companies and its own adult advertising network; it claims on its corporate website, which makes no direct allusion to pornography, that it receives more than 115 million daily visitors to its properties. The company is now headquartered in Luxembourg, but has offices around the world. On AgeID’s site, the most visible evidence of its affiliation with MindGeek is a pair of addresses at the bottom of the site: one in Cyprus and one in Oxnard, Calif., both listed under companies called variations of “MG Billing.”

As what seems to be the most visited collection of porn sites in the world, MindGeek took early interest in what the British government was doing. In fact, the Open Rights Group obtained communications between the company and regulators before the law passed.

Aside from having the resources to develop and lobby for its tool, MindGeek also has the advantage of being able to roll it out instantly on the most popular porn sites in the world, creating the impression of a single sign-on for porn in Britain, akin to signing into third-party apps with your Facebook or Google account. “In the name of child protection, the government has given a massive leg up to an enormous pornography company to have a monopoly on age verification in the U.K.,” Mr. Killock said. “That’s quite a surprising outcome.”

The Consolidation of Pornography

The adult industry has for the most part taken a wait-and-see approach to the Digital Economy Act, which, not unlike Brexit, has been limping toward an uncertain resolution for some time. Now that the new rules are bearing down on adult sites, the biggest threat the industry sees isn’t the specter of creeping censorship, or of the government getting into citizens’ private business, but of further corporate consolidation by a firm that has already remade the industry in its image.

MindGeek’s largest properties are “tube sites,” as in YouTube, which allow users to upload videos of their own, and which have come to dominate online porn consumption in the last decade. Tube sites have been criticized for embracing a growth strategy common among major online media platforms: turning a blind eye toward stolen content uploaded to their sites in service of growth; then, once they’ve become dominant, using that leverage to work with producers directly.

“What we call free porn is a misnomer, because it’s very often pirated or stolen,” said Shira Tarrant, author of “The Pornography Industry: What Everyone Needs to Know,” and a professor at California State University, Long Beach. Tube sites haven’t just capitalized on stolen content, Dr. Tarrant said: They’ve altered the character of the content itself. “What gets clicked more shows up more.” On large, centralized sites the reliance on algorithmic recommendations (as on some mainstream social media platforms) means that narrow categories of content rise to the top, creating feedback loops. “It can repeat a lot of stereotypes,” she said.

“There’s a lot about MindGeek that the average person doesn’t know,” said Jiz Lee, an adult performer and film producer, including “having built their empire off of pirated content.” Their involvement in enforcing age regulations is doubly worrying because “porn being accessible to children is a problem of their own making.”

“I know that MindGeek says, ‘don’t worry, your information is safe,’ but I think we all have reason to worry,” Dr. Tarrant added. The company has suffered data breaches in the past. And from an industry perspective, installing a MindGeek-owned sign-in portal (from which the company could see, at bare minimum, how popular a competitor is) would feel like one more concession to the porn world’s own fearsome tech giant.

MindGeek has said that it will not actually collect or store any such user data through AgeID; the company will further outsource the actual age verification to separate age verification sites, including Yoti, which verifies users by asking for a selfie and a government issued ID.

“AgeID does not verify users internally,” said James Clark, director of communications at the company. “We use multiple trusted, independent third parties in the age verification space to feedback a simple pass-fail result. AgeID cannot see, let alone store, any of this data.” AgeID is, in other words, a middleman. It just happens to be a middleman to which MindGeek’s millions of visitors in Britain will be introduced first.
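
If Mr. Clark's description is taken at face value, AgeID is a broker: it forwards an opaque request to an independent verifier and relays only a pass-fail answer, never the underlying documents. Here is a minimal sketch of that general pattern; the class names and placeholder logic are invented for illustration and come from neither AgeID nor Yoti:

```python
# Hypothetical sketch of the pass/fail broker pattern described above --
# not AgeID's actual implementation, which is not public. Provider names
# and interfaces are invented for illustration.
from typing import Protocol

class AgeVerifier(Protocol):
    def verify(self, user_token: str) -> bool:
        """Return True (pass) or False (fail); the broker never sees documents."""
        ...

class ExampleSelfieIdVerifier:
    """Stand-in for an independent third party (e.g., a selfie-plus-ID check)."""
    def verify(self, user_token: str) -> bool:
        # In reality this would call the provider's API; the broker receives
        # only the boolean result, never the ID or selfie themselves.
        return True  # placeholder

def broker_check(user_token: str, providers: list) -> bool:
    # Relay the opaque token to each provider until one returns a pass.
    # Nothing is stored: the broker sees pass/fail and nothing else.
    return any(p.verify(user_token) for p in providers)

print(broker_check("opaque-session-token", [ExampleSelfieIdVerifier()]))  # True
```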

Jumping Over the Porn Wall

There are plenty of anxious if probable what-ifs: What if the new porn age databases get hacked? What if one of them turns out to be a scam? Or what if the British government has inadvertently helped crown MindGeek the King of Porn for Life? At least one question, however, should be answered as soon as the rules go into effect: Will they even work?

“I have never known a parental control that couldn’t be bypassed by kids,” Dr. Tarrant said. British youth will indeed have plenty of options. They could sign up for virtual private networks to appear as though they’re browsing from another country (free versions of which often come with unclear privacy trade-offs themselves). They could get a physical pass from a friend. They could seek out noncompliant foreign sites that haven’t yet been caught.

Or, contrary to the rule’s stated intention to stop kids from “stumbling across” porn online, they could just search Twitter or Reddit. According to the BBFC, “Social media platforms are not defined as online commercial pornography,” and won’t be subject to age verification. Porn will always find a way.
https://www.nytimes.com/2019/05/03/s...-porn-law.html





Canada Border Services Seizes Lawyer's Phone, Laptop for not Sharing Passwords

Concern is mounting over Canadian border officers' powers to search smartphones
Sophia Harris

As more people travel with smartphones loaded with personal data, concern is mounting over Canadian border officers' powers to search those phones — without a warrant.

"The policy's outrageous," said Toronto business lawyer, Nick Wright. "I think that it's a breach of our constitutional rights."

His thoughts follow a personal experience. After landing at Toronto's Pearson Airport on April 10, he said the Canada Border Services Agency (CBSA) flagged him for an additional inspection — for no stated reason.

Wright had just returned from a four-month trip to Guatemala and Colombia where he studied Spanish and worked remotely. He took no issue when a border services officer searched his bags, but drew the line when the officer demanded his passwords to also search his phone and laptop.

Wright refused, telling the officer both devices contained confidential information protected by solicitor-client privilege.

He said the officer then confiscated his phone and laptop, and told him the items would be sent to a government lab which would try to crack his passwords and search his files.

"In my view, seizing devices when someone exercises their constitutional right is an affront to civil liberty," said Wright who's still waiting for the return of his phone and laptop. Meanwhile, he said he has spent about $3,000 to replace them.

Officers can search your phone

According to the CBSA, it has the right to search electronic devices at the border for evidence of customs-related offences — without a warrant — just as it does with luggage.

If travellers refuse to provide their passwords, officers can seize their devices.

The CBSA said that between November 2017 and March 2019, 19,515 travellers had their digital devices examined, which represents 0.015 per cent of all cross-border travellers during that period.

During 38 per cent of those searches, officers uncovered evidence of a customs-related offence — which can include possessing prohibited material or undeclared goods, and money laundering, said the agency.

While the laws governing CBSA searches have existed for decades, applying them to digital devices has sparked concern in an era where many travellers carry smartphones full of personal and sometimes very sensitive data.

A growing number of lawyers across Canada argue that warrantless digital device searches at the border are unconstitutional, and the practice should be stopped or at least limited.

"The policy of the CBSA of searching devices isn't something that is justifiable in a free and democratic society," said Wright who ran as a Green Party candidate in the 2015 federal election.

"It's appalling, it's shocking, and I hope that government, government agencies and the courts, and individual citizens will inform themselves and take action."

'Out of date' laws

Consumer advocacy group OpenMedia is already taking action. It has launched an online and ad campaign to raise awareness about digital border searches and pressure the federal government to update the rules that govern them.

"These laws are incredibly, incredibly out of date," said OpenMedia privacy campaigner Victoria Henry. "The way they treat our digital devices are as mere goods and that's the same classification as a bag of T-shirts."

She wants to see separate border rules for digital devices which stipulate reasonable grounds for a search. Henry also said those rules must be clearly laid out to the public.

"We need to have clear and transparent policies and mechanisms for recourse."

[Image: An example of an ad that OpenMedia plans to run, starting in June, on the Vancouver SkyTrain. (Submitted by OpenMedia)]

The federal government says that its current policies are both reasonable and necessary to keep Canadian borders secure.

CBSA officers are directed to disable any internet connection and only examine content that is already stored on a device, said Scott Bardsley, spokesperson for Public Safety Minister Ralph Goodale, in an email.

He also said that digital searches "should not be routine" and that "officers may only conduct a search if there are multiple indicators that evidence of contraventions may be found on a device."

Wright said, in his case, no rationale was provided why his phone and laptop needed to be examined.

"There were no factors that I'm aware of that would justify the searches."

'Respect for privacy'

Public Safety spokesperson Bardsley also said that CBSA officers understand the importance of solicitor-client privilege and are instructed not to examine documents that fall within that scope.

"CBSA officers are trained to conduct all border examinations with as much respect for privacy as possible."

Wright said that wasn't his experience. Instead, the officer he dealt with neither expressed knowledge of, nor responded to his concerns that his laptop and phone contained solicitor-client privileged documents, he said.

"His response was only to demand the passwords to access both."

Bardsley said that if travellers have issues or concerns, they can submit a complaint to the CBSA. He added that the government is investing $24 million to enhance oversight by creating an independent review body for the agency.

Wright has already submitted his complaint to the CBSA which includes a demand for the immediate return of his phone and laptop, plus compensation for having to temporarily replace them.

If and when he gets them back, his battle may not be over; Wright is now considering legal action.

"I think it's important that we all stand up for our civil liberties and our charter rights," he said.
https://www.cbc.ca/news/business/cbs...edia-1.5119017





Top Cybersecurity Experts Stand Up for Digital Right to Repair
securepair_admin

Cybersecurity luminaries including Bruce Schneier, Gary McGraw, Joe Grand, Chris Wysopal and Katie Moussouris are backing securepairs.org, countering industry efforts to paint proposed right to repair laws in 20 states as a cyber security risk.

Boston, Massachusetts, April 30, 2019 — Leading information security experts are speaking up in support of right to repair laws that are being debated in state capitols and calling out electronics and technology industry efforts to keep replacement parts, documentation and diagnostic tools for digital devices secret in the name of cyber security.

Declaring “fixable stuff is secure stuff,” the group called for “facts not FUD” (fear, uncertainty and doubt) in the face of recent efforts to paint the right to repair as a cyber security risk. The group of more than 20 cyber security professionals includes some of the most respected names in information security. Among them: Bruce Schneier of IBM and Harvard University, an author and globally recognized expert in cryptography; Gary McGraw, the computer scientist and author of 12 books on software security; pioneering vulnerability disclosure expert Katie Moussouris of Luta Security; Chris Wysopal, Chief Technology Officer at Veracode; Joe Grand (aka “Kingpin”) of Grand Idea Studio; and Dan Geer, the Chief Information Security Officer of In-Q-Tel, a non-profit venture arm of the CIA.

“As cyber security professionals, we have a responsibility to provide accurate information and reliable advice to lawmakers who are considering Right to Repair laws,” said Joe Grand of Grand Idea Studio, a hardware hacker and embedded systems security expert.

No cyber risk in repair

“False and misleading information about the cyber risks of repair is being directed at state legislators who are considering right to repair laws,” said Paul Roberts, the founder of securepairs.org and Editor in Chief at The Security Ledger, an independent cyber security blog. “Securepairs.org is a voice of reason that will provide policy makers with accurate information about the security problems plaguing connected devices. We will make the case that right to repair laws will bring about a more secure, not less secure future.”

With right to repair laws proposed in 20 states, the technology, electronics and home appliance industries have gone on the offensive. Working through front groups and public relations firms, they are floating specious arguments about the cyber security risks of repair. In opinion pieces, blog posts and interviews, these groups are painting pro-consumer, pro-competition laws granting digital device owners access to service manuals, diagnostic software or replacement parts as a safety risk and a giveaway to hackers and cyber criminals.

“We’ve seen industry opponents using dubious cybersecurity arguments to claim we shouldn’t have the freedom to fix the things we own,” said Nathan Proctor, the head of U.S. PIRG’s Right to Repair campaign. “I’m grateful the real experts are standing up, and setting the record straight: There is no cyber threat from repair. Just let us fix our stuff.”

Security issues with connected devices are real enough, notes Roberts. But they have nothing to do with the kinds of measures promoted in right to repair laws. “Home electronics, personal electronic devices and smart appliances too often ship with easily exploitable software vulnerabilities or insecure configurations. These are the digital equivalent of unlocked or unlockable doors that hackers can step through,” Roberts said. “Sadly, device manufacturers, working through their industry groups, PR firms and paid lobbyists, are spending money trying to sink right to repair legislation that is totally unrelated to these problems,” he said.

“We know from hard experience that security through obscurity is a myth,” said Grand. “Keeping the workings of electronic devices secret does nothing to reduce the threat from motivated, resourceful hackers or cyber criminals. Instead, it prevents legitimate owners from maintaining and repairing their property as they see fit. Manufacturers who support Right to Repair will actually improve, not weaken, security by providing access to documentation and genuine, high quality replacement components,” he said.

Securepairs.org encompasses a set of common principles: first, that repair and re-use are rights of owners; second, that there is no security through obscurity; third, that repair fosters greater security; fourth, that true security is by design; and finally, that we must make laws and govern ourselves with facts, not FUD.

A nation-wide network of security professionals

Securepairs.org is launching to help mobilize information security professionals to help secure the right to repair in their home states: writing letters and emails and providing expert testimony about the real sources of cyber risks in connected devices.

We have assembled some of the world’s top experts on our side to counter the FUD with facts. They include some of the most respected voices on the security of the Internet of Things (Bruce Schneier), on data security and privacy (Jon Callas), on secure software and application design (Gary McGraw), on software application security testing (Chris Wysopal), on embedded device security (Billy Rios, Joe Grand), and on fostering a culture of security (Katie Moussouris). Our ask: be a voice of reason in the debate over a digital right to repair. We need your voices in the much-needed conversation about the (very real) security issues with connected, “smart” devices – and about the many security benefits of the kinds of requirements encapsulated in right to repair bills.

Join us!

As of today, we’re inviting other like-minded information security professionals to join this esteemed list. In the months ahead, we look forward to speaking facts to FUD and to infuse the debate over right to repair laws with an understanding about the real risks posed by insecure, connected devices.

With hearings still going on regarding right to repair legislation in 10 states, securepairs.org is also looking to get information security pros to brief lawmakers and to encourage their peers to sign up via its website.

Check out our website and our full list of supporters. If you’re an information security professional and want to help support right to repair laws in your state or nationally, do us a favor and sign up to be a securepairs.org supporter!

We thank you for your support!

Paul F. Roberts

John Bumstead

Jon Callas

Ming Chow

Jack Daniel

Robert Ferrell

Richard Forno

John Frederickson

Dan Geer

Joe Grand

Gordon “Fyodor” Lyon

Gary McGraw

Katie Moussouris

Dr. Peter Neumann

Ken Pfeil

Bruce Potter

Billy Rios

Cris “Space Rogue” Thomas

Bruce Schneier

Dr. Johannes Ullrich

Chenxi Wang

Chris Wysopal

Tatu Ylonen


https://securepairs.org/top-cybersec...ght-to-repair/





How Chinese Spies Got the N.S.A.’s Hacking Tools, and Used Them for Attacks
Nicole Perlroth, David E. Sanger and Scott Shane

Chinese intelligence agents acquired National Security Agency hacking tools and repurposed them in 2016 to attack American allies and private companies in Europe and Asia, a leading cybersecurity firm has discovered. The episode is the latest evidence that the United States has lost control of key parts of its cybersecurity arsenal.

Based on the timing of the attacks and clues in the computer code, researchers with the firm Symantec believe the Chinese did not steal the code but captured it from an N.S.A. attack on their own computers — like a gunslinger who grabs an enemy’s rifle and starts blasting away.

The Chinese action shows how proliferating cyberconflict is creating a digital wild West with few rules or certainties, and how difficult it is for the United States to keep track of the malware it uses to break into foreign networks and attack adversaries’ infrastructure.

The losses have touched off a debate within the intelligence community over whether the United States should continue to develop some of the world’s most high-tech, stealthy cyberweapons if it is unable to keep them under lock and key.

The Chinese hacking group that co-opted the N.S.A.’s tools is considered by the agency’s analysts to be among the most dangerous Chinese contractors it tracks, according to a classified agency memo reviewed by The New York Times. The group is responsible for numerous attacks on some of the most sensitive defense targets inside the United States, including space, satellite and nuclear propulsion technology makers.

Now, Symantec’s discovery, unveiled on Monday, suggests that the same Chinese hackers the agency has trailed for more than a decade have turned the tables on the agency.

Some of the same N.S.A. hacking tools acquired by the Chinese were later dumped on the internet by a still-unidentified group that calls itself the Shadow Brokers and used by Russia and North Korea in devastating global attacks, although there appears to be no connection between China’s acquisition of the American cyberweapons and the Shadow Brokers’ later revelations.

But Symantec’s discovery provides the first evidence that Chinese state-sponsored hackers acquired some of the tools months before the Shadow Brokers first appeared on the internet in August 2016.

Repeatedly over the past decade, American intelligence agencies have had their hacking tools and details about highly classified cybersecurity programs resurface in the hands of other nations or criminal groups.

The N.S.A. used sophisticated malware to destroy Iran’s nuclear centrifuges — and then saw the same code proliferate around the world, doing damage to random targets, including American business giants like Chevron. Details of secret American cybersecurity programs were disclosed to journalists by Edward J. Snowden, a former N.S.A. contractor now living in exile in Moscow. A collection of C.I.A. cyberweapons, allegedly leaked by an insider, was posted on WikiLeaks.

“We’ve learned that you cannot guarantee your tools will not get leaked and used against you and your allies,” said Eric Chien, a security director at Symantec.

Now that nation-state cyberweapons have been leaked, hacked and repurposed by American adversaries, Mr. Chien added, it is high time that nation states “bake that into” their analysis of the risk of using cyberweapons — and the very real possibility they will be reassembled and shot back at the United States or its allies.

In the latest case, Symantec researchers are not certain exactly how the Chinese obtained the American-developed code. But they know that Chinese intelligence contractors used the repurposed American tools to carry out cyberintrusions in at least five countries: Belgium, Luxembourg, Vietnam, the Philippines and Hong Kong. The targets included scientific research organizations, educational institutions and the computer networks of at least one American government ally.

One attack on a major telecommunications network may have given Chinese intelligence officers access to hundreds of thousands or millions of private communications, Symantec said.

Symantec did not explicitly name China in its research. Instead, it identified the attackers as the Buckeye group, Symantec’s own term for hackers that the Department of Justice and several other cybersecurity firms have identified as a Chinese Ministry of State Security contractor operating out of Guangzhou.

Because cybersecurity companies operate globally, they often concoct their own nicknames for government intelligence agencies to avoid offending any government; Symantec and other firms refer to N.S.A. hackers as the Equation group. Buckeye is also referred to as APT3, for Advanced Persistent Threat 3.

In 2017, the Justice Department announced the indictment of three Chinese hackers in the group Symantec calls Buckeye. While prosecutors did not assert that the three were working on behalf of the Chinese government, independent researchers and the classified N.S.A. memo that was reviewed by The Times made clear the group contracted with the Ministry of State Security and had carried out sophisticated attacks on the United States.

A Pentagon report about Chinese military competition, issued last week, describes Beijing as among the most skilled and persistent players in military, intelligence and commercial cyberoperations, seeking “to degrade core U.S. operational and technological advantages.”

In this case, however, the Chinese simply seem to have spotted an American cyberintrusion and snatched the code, often developed at huge expense to American taxpayers.

Symantec discovered that as early as March 2016, the Chinese hackers were using tweaked versions of two N.S.A. tools, called Eternal Synergy and Double Pulsar, in their attacks. Months later, in August 2016, the Shadow Brokers released their first samples of stolen N.S.A. tools, followed in April 2017 by an internet dump of their entire collection of N.S.A. exploits.

Symantec researchers noted that there were many previous instances in which malware discovered by cybersecurity researchers was released publicly on the internet and subsequently grabbed by spy agencies or criminals and used for attacks. But they did not know of a precedent for the Chinese actions in this case — covertly capturing computer code used in an attack, then co-opting it and turning it against new targets.

“This is the first time we’ve seen a case — that people have long referenced in theory — of a group recovering unknown vulnerabilities and exploits used against them, and then using these exploits to attack others,” Mr. Chien said.

The Chinese appear not to have turned the weapons back against the United States, for two possible reasons, Symantec researchers said. They might assume Americans have developed defenses against their own weapons, and they might not want to reveal to the United States that they had stolen American tools.

For American intelligence agencies, Symantec’s discovery presents a kind of worst-case scenario, one that United States officials have said they try to avoid through a White House program known as the Vulnerabilities Equities Process.

Under that process, started in the Obama administration, a White House cybersecurity coordinator and representatives from various government agencies weigh the trade-offs of keeping the American stockpile of undisclosed vulnerabilities secret. Representatives debate the stockpiling of those vulnerabilities for intelligence gathering or military use against the very real risk that they could be discovered by an adversary like the Chinese and used to hack Americans.

The Shadow Brokers’ release of the N.S.A.’s most highly coveted hacking tools in 2016 and 2017 forced the agency to turn over its arsenal of software vulnerabilities to Microsoft for patching and to shut down some of the N.S.A.’s most sensitive counterterrorism operations, two former N.S.A. employees said.

The N.S.A.’s tools were picked up by North Korean and Russian hackers and used for attacks that crippled the British health care system, shut down operations at the shipping corporation Maersk and cut short critical supplies of a vaccine manufactured by Merck. In Ukraine, the Russian attacks paralyzed critical Ukrainian services, including the airport, Postal Service, gas stations and A.T.M.s.

“None of the decisions that go into the process are risk free. That’s just not the nature of how these things work,” said Michael Daniel, the president of the Cyber Threat Alliance, who previously was cybersecurity coordinator for the Obama administration. “But this clearly reinforces the need to have a thoughtful process that involves lots of different equities and is updated frequently.”

Beyond the nation’s intelligence services, the process involves agencies like the Department of Health and Human Services and the Treasury Department that want to ensure N.S.A. vulnerabilities will not be discovered by adversaries or criminals and turned back on American infrastructure, like hospitals and banks, or interests abroad.

That is exactly what appears to have happened in Symantec’s recent discovery, Mr. Chien said. In the future, he said, American officials will need to factor in the real likelihood that their own tools will boomerang back on American targets or allies. An N.S.A. spokeswoman said the agency had no immediate comment on the Symantec report.

One other element of Symantec’s discovery troubled Mr. Chien. He noted that even though the Buckeye group went dark after the Justice Department indictment of three of its members in 2017, the N.S.A.’s repurposed tools continued to be used in attacks in Europe and Asia through last September.

“Is it still Buckeye?” Mr. Chien asked. “Or did they give these tools to another group to use? That is a mystery. People come and go. Clearly the tools live on.”
https://www.nytimes.com/2019/05/06/u...ing-cyber.html





Alexa Has Been Eavesdropping on You This Whole Time

When Alexa runs your home, Amazon tracks you in more ways than you might want.
Geoffrey A. Fowler

Would you let a stranger eavesdrop in your home and keep the recordings? For most people, the answer is, “Are you crazy?”

Yet that’s essentially what Amazon has been doing to millions of us with its assistant Alexa in microphone-equipped Echo speakers. And it’s hardly alone: Bugging our homes is Silicon Valley’s next frontier.

Many smart-speaker owners don’t realize it, but Amazon keeps a copy of everything Alexa records after it hears its name. Apple’s Siri, and until recently Google’s Assistant, by default also keep recordings to help train their artificial intelligences.

So come with me on an unwelcome walk down memory lane. I listened to four years of my Alexa archive and found thousands of fragments of my life: spaghetti-timer requests, joking houseguests and random snippets of “Downton Abbey.” There were even sensitive conversations that somehow triggered Alexa’s “wake word” to start recording, including my family discussing medication and a friend conducting a business deal.

For as much as we fret about snooping apps on our computers and phones, our homes are where the rubber really hits the road for privacy. It’s easy to rationalize away concerns by thinking a single smart speaker or appliance couldn’t know enough to matter. But across the increasingly connected home, there’s a brazen data grab going on, and there are few regulations, watchdogs or common-sense practices to keep it in check.

Let’s not repeat the mistakes of Facebook in our smart homes. Any personal data that’s collected can and will be used against us. An obvious place to begin: Alexa, stop recording us.

Here are the eight steps to delete Amazon’s recordings from your Echo speaker in the Alexa app. (Geoffrey Fowler/The Washington Post)

The spy in your speaker

“Eavesdropping” is a sensitive word for Amazon, which has battled lots of consumer confusion about when, how and even who is listening to us when we use an Alexa device. But much of this problem is of its own making.

Alexa keeps a record of what it hears every time an Echo speaker activates. It’s supposed to record only with a “wake word” — “Alexa!” — but anyone with one of these devices knows they go rogue. I counted dozens of times when mine recorded without a legitimate prompt. (Amazon says it has improved the accuracy of “Alexa” as a wake word by 50 percent over the past year.)

What can you do to stop Alexa from recording? Amazon’s answer is straight out of the Facebook playbook: “Customers have control,” it says — but the product’s design clearly isn’t meeting our needs. You can manually delete past recordings if you know exactly where to look and remember to keep going back. You cannot stop Amazon from making these recordings, aside from muting the Echo’s microphone (defeating its main purpose) or unplugging the darned thing.

Amazon founder and chief executive Jeff Bezos owns The Washington Post, but I review all tech with the same critical eye.

Amazon says it keeps our recordings to improve products, not to sell them. (That’s also a Facebook line.) But anytime personal data sticks around, it’s at risk. Remember the family that had Alexa accidentally send a recording of a conversation to a random contact? We’ve also seen judges issue warrants for Alexa recordings.

Alexa’s voice archive made headlines most recently when Bloomberg discovered Amazon employees listen to recordings to train its artificial intelligence. Amazon acknowledged that some of those employees also have access to location information for the devices that made the recordings.

Saving our voices is not just an Amazon phenomenon. Apple, which is much more privacy-minded in other aspects of the smart home, also keeps copies of conversations with Siri. Apple says voice data is assigned a “random identifier and is not linked to individuals” — but exactly how anonymous can a recording of your voice be? I don’t understand why Apple doesn’t give us the ability to say not to store our recordings.

To stop Google Assistant from recording you, set Voice & Audio Activity to “paused” under myaccount.google.com/activitycontrols. (Geoffrey Fowler/The Washington Post)

The unexpected leader on this issue is Google. It also used to record all conversations with its Assistant but last year quietly changed its defaults to not record what it hears after the prompt “Hey, Google.” But if you’re among the people who previously set up Assistant, you probably need to readjust your settings (check here) to “pause” recordings.

I’m not the only one who thinks saving recordings is too close to bugging. Last week, the California State Assembly’s privacy committee advanced an Anti-Eavesdropping Act that would require makers of smart speakers to get consent from customers before storing recordings. The Illinois Senate recently passed a bill on the same issue. Neither is much of a stretch: Requiring permission to record someone in private is enshrined in many state laws.

“They are giving us false choices. We can have these devices and enjoy their functionality and how they enhance our lives without compromising our privacy,” Assemblyman Jordan Cunningham (R), the bill’s sponsor, told me. “Welcome to the age of surveillance capitalism.”

The spy in your thermostat

Inspired by what I found in my Alexa voice archive, I wondered: What other activities in my smart home are tech companies recording?

I found enough personal data to make even the East German secret police blush.

When I’m up for a midnight snack, Google knows. My Nest thermostat, made by Google, reports data back to its servers in 15-minute increments, covering not only the climate in my house but also whether there’s anyone moving around (as determined by a presence sensor used to trigger the heat). You can delete your account, but otherwise Nest saves that data indefinitely.

Then there are lights, which can reveal what time you go to bed and almost anything else you do at home. My Philips Hue-connected lights track every time they’re switched on and off — data the company keeps forever if you connect to its cloud service (which is required to operate them with Alexa or Assistant).

Every kind of appliance now is becoming a data-collection device. My Chamberlain MyQ garage opener lets the company keep — again, indefinitely — a record of every time my door opens or closes. My Sonos speakers, by default, track what albums, playlists or stations I’ve listened to, and when I press play, pause, skip or pump up the volume. At least they hold on to my sonic history for only six months.

And now the craziest part: After quizzing these companies about data practices, I learned that most are sharing what’s happening in my home with Amazon, too. Our data is the price of entry for devices that want to integrate with Alexa. Amazon’s not only eavesdropping — it’s tracking everything happening in your home.

You can't stop Amazon from collecting data about smart home devices connected to Alexa, but you can tell Amazon to delete the data it already has collected at amazon.com/alexaprivacy. (Geoffrey Fowler/The Washington Post)

Amazon acknowledges it collects data about third-party devices even when you don’t use Alexa to operate them. It says Alexa needs to know the “state” of your devices “to enable a great smart home experience.” But keeping a record of this data is more useful to them than to us. (A feature called “hunches” lets you know when a connected device isn’t in its usual state, such as a door that’s not locked at bedtime, but I’ve never found it helpful.) You can tell Amazon to delete everything it has learned about your home, but you can’t look at it or stop Amazon from continuing to collect it.

Google Assistant also collects data about the state of connected devices. But the company says it doesn’t store the history of these devices, even though there doesn’t seem to be much stopping it.

Apple does the most admirable job operating home devices by collecting as little data as possible. Its HomeKit software doesn’t report to Apple any info about what’s going on in your smart home. Instead, compatible devices talk directly, via encryption, with your iPhone, where the data stays.

Amazon and other tech companies say they need our voice data to train their artificial intelligence. (Washington Post illustration/iStock)
Why is this happening?

Why do tech companies want to hold on to information from our homes? Sometimes they do it just because there’s little stopping them — and they hope it might be useful in the future.

Ask the companies why, and the answer usually involves AI.

“Any data that is saved is used to improve Siri,” Apple said.

“Alexa is always getting smarter, which is only possible by training her with voice recordings to better understand requests, provide more accurate responses, and personalize the customer experience,” Beatrice Geoffrin, director of Alexa privacy, said in a statement. The recordings also help Alexa learn different accents and understand queries about recurring events such as the Olympics, she said.

Noah Goodman, an associate professor of computer science and psychology at Stanford University, told me it’s true that AI needs data to get smarter.

“Technically, it is not unreasonable what they are saying,” Goodman said. Today’s natural language-processing systems need to rerun their algorithms over old data to learn. Without easy access to that data, their progress might slow — unless computer scientists make their systems more efficient.

But then he takes his scientist hat off. “As a human, I agree with you. I don’t have one of these speakers in my house,” Goodman said.

Amazon says its Echo Dot is the best-selling speaker of all time. (Jonathan Baran/The Washington Post)

We want to benefit from AI that can set a timer or save energy when we don’t need the lights on. But that doesn’t mean we must also open our homes to tech companies as a lucrative source of data to train their algorithms, mine our lives and maybe lose in the next big breach. This data should belong to us.

What we lack is a way to understand the transformation that data and AI are bringing to our homes.

Think of “Downton Abbey”: In those days, rich families could have human helpers who used their intelligence to observe and learn the household’s habits, and to make their lives easier. Breakfast was always served exactly at the specified time. But the residents knew to be careful about what they let the staff see and hear.

Fast-forward to today. We haven’t come to terms that we’re filling our homes with even nosier digital helpers. Said Goodman: “We don’t think of Alexa or the Nest quite that way, but we should.”
https://www.washingtonpost.com/techn...is-whole-time/





Who’s Afraid of the Dark? Hype Versus Reality on the Dark Web
Juan Sanchez and Garth Griffin

Key Findings

• The collection of onion sites that is sometimes called the dark web is often portrayed as a vast and mysterious part of the internet. In reality, the number of onion sites is tiny compared to the size of the surface web. Our count of live reachable onion site domains comes to less than 0.005% of the number of surface-web site domains. Out of about 55,000 onion domains that we found, only around 8,400 onion domains had a live site (15%). The popular iceberg metaphor that describes the relationship of the surface web and dark web is upside down.
• These onion sites are disorganized and unreliable. Scams are prevalent, such as a typosquatting scam that claims to have successfully defrauded users of over 400 popular onion sites, netting the scammer more than 200 bitcoin. Uptime even on popular dark web sites is well below the 99.999% “five nines” availability that is expected for reputable companies on the surface web, and onion sites regularly disappear permanently with or without explanation.
• From a language standpoint, onion sites are more homogeneous than the surface web. We observed that 86% of onion sites have English as their primary language, with the next two most common being Russian with 2.8% and German with 1.6%. On the surface web, researchers report English is at the top with only 54%.
• The idea of a dark web that is hidden and mysterious is more likely an extrapolation of a tiny portion of these onion sites — a set of invitation-only and unpublicized communities buried in the most shadowy corners of this part of the internet. On the surface web, popular websites will attract inbound link counts in the millions or more. In our onion site crawl, the site with the highest inbound link count was a popular market with 3,585 inbound links. An onion site offering help setting up onion servers had 279 inbound links. In contrast, we looked at what we view as the top eight onion sites most respected in the criminal community and found that the most visible had a maximum of 15 inbound links with an average of only 8.7 inbound links per site. It is this tiny slice of the dark web that is truly dark.

What Is the Dark Web?

The dark web is a frequent topic of interest for anyone who cares about cybersecurity, but its mystique has given rise to a number of popular misconceptions, and “dark web” can be a muddled term. To make a concrete assessment of one precise definition of the term, this blog presents the findings of a spider built specifically for sites accessible within the Tor network of onion domains. There are plenty of varied definitions for the dark web, the deep web, the criminal underground, and other related concepts, but for this investigation, our exclusive focus is on onion sites.

According to Wikipedia, the dark web can be described as any web content that requires specific software, configurations, or authorization to access. This definition overlaps with another common term, the “deep web,” which is commonly used to refer to all the parts of the internet not indexed by search engines.

The dark web is also often conflated with the cybercriminal underground, implying that it is solely a place where people traffic illicit and sordid goods and services. While that kind of activity makes up a significant proportion of content on the dark web, the fact that the Tor browser can circumvent surveillance measures also makes it useful for legitimate activities in certain circumstances, like free expression from political dissidents in authoritarian countries. Some prominent surface websites host mirrors of their content on Tor sites for exactly this reason, including The New York Times and Facebook. On the other side of the coin, Insikt Group’s research has shown that much criminal activity happens on sites not requiring any special protocols to access, such as public social media sites like Twitter or messaging services like WhatsApp and Telegram.

In this research, we investigated a few things about this network of onion sites: how big it really is, the languages in which it’s written, and how reliable it is to use in terms of uptime and trustworthiness. We spidered about 260,000 onion pages to approximate the full reachable Tor network from a starting set of onion sites that we pulled from public lists and our own content.

The Dark Web Is Tiny

The dark web is often portrayed as vast and mysterious, implying that there is a large number of onion sites on Tor, but this is not what we find. This misperception may be in part due to the fact that there are many tragic and horrible things that take place under the anonymity Tor provides. While we cannot contradict the sad reality that those things do happen, we find that in terms of size, the network of onion sites is tiny compared to the surface web, and the part with real threat intelligence value is smaller still. Our crawling found 55,828 different onion domains, but only 8,416 were observed to be live on the Tor network during our crawl.

Our findings disprove the misconception that the relationship between the surface web and dark web has an iceberg shape, with the surface web being a small portion of the World Wide Web above the water and the dark web below the visible surface accounting for the majority. The truth is that this iceberg shape is upside down.

There are an estimated 200 million unique surface web domains that are active, which positions the current live onion site network at less than 0.005% of the size of the World Wide Web.

Onion sites are prone to disappearing from the network, which will cause any attempt to reach the page to fail. The ratio of live to total onion domains was about 15% in our results. This number, which provides an estimate for the size of the Tor network, complements the findings of an Onionscan report from 2017, which reported 4,400 live sites out of 30,000 (a similar rate of about 15%). Others also claim that the network is shrinking. While we cannot directly compare against their numbers because their approach was not as broad as our spider’s, we do find that the ratio of live to dead sites continues to be similar to these previous findings, with about 15% of the sites being live.

We also found that this tiny network of onion sites is tightly connected. For 82% of the live domains in the network that we’ve crawled, the average degrees of separation from a popular link hub like the Hidden Wiki is 2.47. The data suggests that if you visit the Hidden Wiki onion page, you’d be about three clicks away from 82% of live onion sites. This measure is tighter than might be expected in the surface web. For example, the Facebook social graph has been reported to have an average degree of separation of 3.57 between pairs of users.
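
For intuition, a degrees-of-separation figure like this can be computed with a plain breadth-first search over the crawled link graph. The sketch below is illustrative only, not Recorded Future’s published code; it assumes the crawl is held in memory as an adjacency list, and all the domain names are hypothetical.

from collections import deque

def distances_from_hub(graph, hub):
    # Breadth-first search from a hub page, returning the shortest
    # distance (in clicks) to every onion domain reachable from it.
    dist = {hub: 0}
    queue = deque([hub])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# Hypothetical toy graph: each domain maps to the domains it links to.
graph = {
    "hiddenwiki.onion": ["market.onion", "forum.onion"],
    "market.onion": ["mixer.onion"],
    "forum.onion": [],
    "mixer.onion": [],
}
dist = distances_from_hub(graph, "hiddenwiki.onion")
clicks = [d for d in dist.values() if d > 0]
print(sum(clicks) / len(clicks))  # average degrees of separation from the hub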

It’s also notable that the other 18% of crawled domains were completely disconnected from the Hidden Wiki, which might indicate the presence of isolated communities separate from the rest of the network. While this opens the possibility of there being swaths of sites that our approach could not discover, we believe this is unlikely due to our broad starting base, which included all onion domains seen anywhere in our vast open source data as well as our extensive collection focused on the criminal underground.

The Dark Web Is Disorganized and Unreliable

The dark web is plagued by flakiness. As criminal activity has proliferated across onion sites, so have scams and attacks. The servers of onion websites are taken down when they fall victim to attacks. A prominent example is a site called Daniel’s Hosting, which used to provide Tor hosting services to about 6,500 onion websites. This site was hacked in 2018, causing a massive outage of onion sites. The infrastructure was compromised using a PHP zero-day vulnerability that allowed the hacker to gain access to the full database of sites and delete all the accounts inside.

While it was eventually recovered, the victimization and prolonged downtime is a typical example of the level of service found on onion sites. Even popular dark web markets can have uptime well below 90%, with one well-known market having about 65% uptime as of this writing. Sites can be down for weeks at a time, which would be unthinkable for reputable service providers on the surface web. For comparison, Facebook’s uptime is measured at 99.95%, and the gold standard is 99.999% availability, known as “five nines.” Onion sites are typically far below that level, and some simply disappear for days, for weeks, or for good.

Typosquatting is a tactic used by malicious actors on the surface web, and this has been taken to onion sites as well. Typosquatting is a technique where a malicious actor registers a domain that users of a legitimate website might easily mistake for the website of the service they’re trying to use, which is then exploited by the actor hosting malicious content on the typosquatted domain (for example, a fake login page at “aple[.]com” or “apple[.]co”).
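
For readable domains, one common way to hunt for such look-alikes is a simple string-similarity check. This is a generic illustration of the idea using Python’s standard difflib, not a description of any particular firm’s detection method, and the watch list here is invented.

import difflib

KNOWN_GOOD = ["apple.com", "paypal.com"]  # hypothetical watch list

def likely_typosquat(domain, threshold=0.85):
    # Flag domains suspiciously close to, but not exactly matching,
    # a well-known domain (e.g., "aple.com" vs. "apple.com").
    for good in KNOWN_GOOD:
        if domain != good:
            ratio = difflib.SequenceMatcher(None, domain, good).ratio()
            if ratio >= threshold:
                return good
    return None

print(likely_typosquat("aple.com"))  # -> apple.com

As noted below, checks like this help little with onion domains, whose hash-like names already look random to a human reader.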

We found a blatant example of onion site typosquatting that we’re calling the “Thank You” scam. Our spider found numerous copies of onion sites hosting only a simple banner from someone that claims to have earned more than 200 BTC by hosting slightly modified domain names for over 800 popular onion sites. We speculate that the perpetrator might have asked for user credentials and profited from stealing them, but this is unclear, as the scam landing pages are no longer visible and all the sites instead show the gloating message. Well-known Bitcoin mixers and markets were included in the list of typosquat victims.

From the “more than 800” fake domains referenced by the scammer, our spider found 430 live sites, all with a landing page where the perpetrator communicates his retirement and thanks the viewer for their money. If indeed there are as many as the banner claims, we believe that the remaining 370 are no longer live.

Typosquatting is even easier on onion sites than the surface web due to the way that onion domains work. Onion domains are hashes, so they typically contain many characters that appear entirely random to a human user. For example, the onion domain 7rmath4ro2of2a42.onion does not correspond in any visual sense to the site that it loads, a news site called SoylentNews. This makes it hard for a Tor user to distinguish between a real onion domain and a typosquat. Sharing written onion typosquats would be an effective way to spread them, as many Tor users will not be familiar enough with the real domain to tell the difference. In addition, many fake domains were added to Daniel’s Onion Link List, a popular site for hosting and listing onion domains. Finding phishing links is common enough for Deep Dot Web to make a post warning about it. Even without considering the content of the sites, these factors give the entire network of onion sites a sense of untrustworthiness.

Language Usage on the Dark Web Is More Homogeneous Than on the Surface Web

Various studies have estimated the language breakdown of the surface web, but such measurements of the dark web are rare. Spidering the Tor network provides a way to measure the breakdown of written languages on onion sites. We estimate that English is the main language for 86% of onion sites, a higher proportion than the surface web, in which English accounts for only 54%. Following English comes Russian at 2.8%, German at 1.6%, and Spanish at 1%. The languages below those in frequency account for less than 1% each and 8.6% as a whole. While the percentages differ, the order of the top four languages by popularity is the same as the order for the surface web. After that, the order diverges as the percentages get smaller.

We formed these estimates using stratified sampling for our spidered data, selecting random pages from each crawled domain and assigning a main language for the whole domain based on a majority vote of the languages detected across the pages.
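
A minimal sketch of that majority-vote scheme, assuming per-page language detection with the open-source langdetect library (any comparable detector would do); the page store and sample size here are simplified stand-ins for the real pipeline.

import random
from collections import Counter
from langdetect import detect  # pip install langdetect

def domain_language(pages, sample_size=10):
    # Assign one main language to a domain by majority vote over a
    # random sample of its crawled pages (stratified per domain).
    sample = random.sample(pages, min(sample_size, len(pages)))
    votes = Counter()
    for text in sample:
        try:
            votes[detect(text)] += 1
        except Exception:  # langdetect raises on empty/undetectable text
            continue
    return votes.most_common(1)[0][0] if votes else None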

The Hidden Dark Web

The idea of a dark web that is hidden and mysterious is better exemplified by a tiny portion of onion sites, a set of invitation-only and generally unpublicized communities buried in the most shadowy corners of the internet. To understand just how hidden these sites are, we measured how many unique onion domains had a link pointing to a given site. This measurement can then be compared to popular sites to evaluate their relative visibility. Popular surface-web sites have inbound link counts in the millions or more.

The site with the highest inbound link count across all our crawled onion domains was a popular market with 3,585 inbound links. An onion site providing help with hosting onion servers had 279 inbound links. We chose eight sites that in our qualitative expertise we view as top-tier criminal sites with significant barriers to entry and a high level of obscurity. For these eight sites, we measured an average of 8.7 domains with links to them, and the highest inbound link count for one of these sites was 15 — a stark contrast with the link counts for well-known sites. It is sites like these that are truly dark, and sites like these that have the most value for threat research on the dark web.

Methodology

Tor (“The Onion Router”) is free, open-source software initially developed by the U.S. military and designed for anonymous communication. The network consists of onion domains and connections between them in the form of direct links. For the purposes of our research, we use the term “dark web” exclusively to refer to websites on onion domains. Websites that are able to be reached without these kinds of specific software or network configurations are known as the surface web.

For years, Recorded Future has collected targeted dark web content that is relevant to our clients. For this project, we aimed to collect data from the whole Tor network, without regard to whether a site likely contains useful threat intelligence or is just junk. The approach was a web crawler (“spider”) that uses a Tor browser simulator. Our spider has been crawling new onion pages since December 2018. The spider was seeded with lists of known onions like the Hidden Wiki, as well as onion pages seen in Recorded Future’s existing data holdings.

Estimating the size of the Tor network required two procedures. First, we had to run the spider for enough time to crawl the majority of the onion sites. Second, we had to remove any duplicates from the count. For the former, we measured the rate of new, live domains found per day. This started out at around 2,000 new domains per day and leveled off about two months later. While we do still find some new domains, the overall rate is small enough that there is a high probability that we have found the vast majority of sites that are reachable from our current set of onion pages. It is possible that there are sites that are not reachable from our starting lists of onion sites, which the crawler will never find. While we cannot rule that out, the breadth of our starting lists gives us confidence that we have found the vast majority of onion sites that exist.

To count domains after data was collected, we removed any duplicates. One of the largest sources of duplication was 5,941 duplicates of the Deep Dot Web onion site. For an unknown reason, there are thousands of variations of the onion domain for this site, differing only in the placement of a non-printing unicode character in the URL. This character, the “soft hyphen,” or “SHY” in unicode, is not visible in the URL bar when copying and pasting the domain. It also appears to have no effect on the returned site, with the same webpage returned regardless of the SHY character. From a human’s point of view, the modified Tor site URL will be an exact copy and will load the same site, but the non-printing characters are visible when the URL is rendered as raw characters, such as when viewing the raw HTML for the page containing the link.
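
Normalizing away those characters is enough to collapse the duplicates. A minimal sketch using Python’s standard unicodedata module; the sample domain is an illustrative stand-in, not Deep Dot Web’s real onion address.

import unicodedata

SOFT_HYPHEN = "\u00ad"  # "SHY": invisible in most URL bars

def normalize_onion_url(url):
    # Strip the soft hyphen and any other Unicode "format" (Cf)
    # characters so visually identical domains count only once.
    url = url.replace(SOFT_HYPHEN, "")
    return "".join(ch for ch in url if unicodedata.category(ch) != "Cf")

# Both spellings collapse to the same canonical domain:
assert normalize_onion_url("deepdot\u00adweb.onion") == "deepdotweb.onion"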

It is unclear why Deep Dot Web has decided to use such a great number of different spellings of their domain that are all visually indistinguishable. One possible explanation could be that they are trying to prevent others from indexing their site. In 2015, Deep Dot Web reported having to aggressively shut down fake copies of their onion site that had the onion URLs of popular markets replaced by phishing links. We did not attempt to evaluate this strange behavior further, and simply removed the duplicate domains from our counts.

We did not attempt to determine how many unique servers underlie the domains we observed. Given that some hosting services may host thousands of sites, as in the case of Daniel’s Hosting, we estimate that the number of distinct servers is in the hundreds or low thousands. Additional work would be required to obtain greater certainty.

To load onion URLs, we used only the browser-default ports 443 and 80. It is possible that a portion of failed URLs would load correctly if requested via different ports. This is another potential future expansion of this work as the spider continues its ongoing scraping.

The contents of all live onion pages scraped with the spider are added to the Recorded Future® Platform.

Conclusion

The dark web is many things, but it is not the vast, sprawling network of steely-eyed, hardened criminals that some might imagine it to be. Its 8,400 live onion domains are a tiny fraction of the surface web, with only 15% of a mere 55,000 total onion sites being live. Onion sites are easy prey for attacks and scams like the “Thank You” typosquatting scam. The dark web is also more homogeneous than the surface web, with 86% of onion sites primarily in English. The part of the dark web that does live up to its reputation is the set of top-tier criminal forums. Inbound link analysis of a select set of sites that we view as top-tier confirmed that they do indeed have less visibility, measured by a reduced number of links pointing to them.

If you’re curious for more, with the Recorded Future platform, you can see all of our spidered content yourself and get a deeper sense of what the dark web really is.
https://www.recordedfuture.com/dark-web-reality/





What Does Unsplash Cost in 2019?
Luke Chesser

3 years ago, we wrote ‘What does Unsplash cost?’ to give a totally transparent look at the bills associated with hosting one of the largest photography sites in the world.

Since then, Unsplash has continued to grow tremendously, now powering more image use than the major image media incumbents, Shutterstock, Getty, and Adobe, combined.

With Unsplash’s public API, we power more than 1,000 mainstream applications, including Medium, Trello, Squarespace, Tencent, Naver, Square, Adobe, and Dropbox.

All of that growth means two things: more traffic and bigger bills.

In the interest of transparency, Chris and I thought we were overdue for an update.

It’s 2019. What does it cost to host Unsplash?

Then
Back in 2016, Unsplash had just crossed 1 billion images viewed and 5.5M photos downloaded per month.

Our team was smaller and our product was a lot less developed, which meant fewer services and less in-house processing. We had one main application, a traditional Rails monolith, that consumed a handful of services to create the basic Unsplash experience.

Heavy features like search and realtime photo stats were in their infancy, which led to much simpler data processing requirements and the use of 3rd party services like Keen and a handful of CRON jobs.

The final monthly breakdown for April 2016 was:

Web Servers: $2,731.23
Monitoring: $630.00
Data Processing: $1,000.00
Image Hosting: $11,170.00
Other: $2,127.39

Total (USD): $17,658.62

Now
A lot has changed.

For one, Unsplash is a hell of a lot bigger. 10+ times bigger. We now get more traffic from our API partners than from our own website and official apps, even though those have also grown significantly.

Partnering with some of the largest consumer facing apps in the world has pushed our engineering team to match their practices around redundancy, monitoring, and availability, which requires more supporting resources and services.

Our product team has continued to push the envelope for core features like search and contributor stats, requiring more and more data to be processed in greater and greater volumes.

All of these things have pushed our architecture to be more complex, while also increasing the baseline costs.

Web servers
Total monthly cost: $29,763

We continue to use Heroku as our main web platform. Despite its premium cost over AWS, Azure, and Google Cloud, Heroku’s built-in deployment and configuration tools allow our team to move faster, more confidently, and more reliably.

As we’ve detailed previously, the alternatives would undoubtedly be cheaper on paper. But in reality, the increased simplicity and freedom offered by Heroku for a small, product-focused team is a major cost advantage.

In addition to our main web servers and databases using Heroku, we use Fastly for distributed CDN caching, Elastic Cloud for our Elasticsearch clusters, and Stream for our feed and notification architecture.

Monitoring
Total monthly cost: $7,679

Our team is small for Unsplash’s size, with our total product team counting in at just 11 people.

With no one dedicated to dev-ops, ensuring Unsplash runs smoothly and never goes down requires a lot of instrumentation and reporting.

Despite the volume of metrics we monitor and report on, New Relic, Sentry, and Datadog remain fairly inexpensive solutions. Our logging is certainly our largest monitoring expense, but the detailed information is crucial when debugging issues or rolling out new features.

Data Processing
Total monthly cost: $15,223

Data processing has been the area with the largest relative increase since 2016. Back then, analytics and data were an afterthought in our development process. We relied on tools like Google Analytics for user analytics and Keen for product metrics like photo views and downloads.

Since then, we’ve needed to expand our data collection, aggregation, and reporting significantly, both from a product and a company perspective. As Unsplash has grown, the volume has also increased considerably, with hundreds of millions of events tracked every day.

We’ve replaced Google Analytics and Keen with an open-source data pipeline, Snowplow Analytics. Snowplow takes care of the data collection and formatting, allowing Tim, our data engineer, to focus on data aggregation, modelling, and visualization.

We’ve also expanded the role of the data architecture in the product to handle all of our machine learning and search processing. As we go forward, we expect this to continue to be the biggest area of expansion.

Image Hosting
Total monthly cost: $42,408

Imgix is our single biggest expense, but we love it. Yes, there are cheaper options, but trust us when we say that they aren’t as good for what we do.

We send petabytes of data through Imgix’s CDN and render more than 250 million variations of our source images every month. Their reliability, performance, and flexibility is unmatched, and negotiating our contract through them actually allows us to discount our CDN costs due to their bulk negotiations with CDN providers.
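
Imgix’s renditions are driven by URL query parameters, which is how a catalog of source photos turns into hundreds of millions of distinct variations. A rough sketch of the idea; the domain and image path below are made up, while w, h, fit, auto, and q are real imgix URL parameters.

from urllib.parse import urlencode

BASE = "https://example.imgix.net/photos/abc123.jpg"  # hypothetical source image

def variant(width, height=None, quality=None):
    # Build an imgix-style rendition URL; every distinct parameter
    # combination the CDN serves counts as one rendered variation.
    params = {"w": width, "auto": "format"}
    if height:
        params.update(h=height, fit="crop")
    if quality:
        params["q"] = quality
    return BASE + "?" + urlencode(params)

print(variant(400))            # thumbnail
print(variant(1080, 720, 75))  # cropped, compressed hero image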

Image hosting costs breakdown for February 2019 (very useful, I know)

The final monthly breakdown for February 2019 was:

• Web servers: $29,763
• Monitoring: $7,679
• Data Processing: $15,223
• Image Hosting: $42,408
• Other: $3,580

Total (USD): $98,653

Comparing across the years, some trends emerge.

Despite growing top-line metrics over 12x and significantly expanding the systems to include more features, reliability, and redundancies, hosting costs in total have only increased 5x.

Downloads vs hosting costs since April 2016

There are a few reasons behind this:

1. As systems approach a certain cost threshold, it becomes more optimal to trade engineering salary for technical optimizations. We try to avoid this as it removes engineering resources from user-facing feature development, but over the years we’ve made significant improvements to low-level caches, bulk data aggregations, and HTTP caching (see the sketch after this list).
2. At larger and larger volumes, it becomes easier to negotiate bulk discounts from services.
3. Resources can be more fully utilized at high capacity. This is especially true for our Redis and Redshift clusters.
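
On the HTTP caching point in item 1: the usual pattern is short browser caching plus much longer CDN caching, with stale copies served while fresh ones are fetched. The sketch below only illustrates that pattern in a Flask-style endpoint; Unsplash’s actual stack is a Rails monolith behind Fastly, and the route and numbers here are invented.

from flask import Flask, jsonify, make_response

app = Flask(__name__)

@app.route("/photos/<photo_id>")
def photo(photo_id):
    resp = make_response(jsonify(id=photo_id))
    # Browsers keep the response briefly; stale-while-revalidate lets a
    # cached copy be served while a fresh one is fetched in the background.
    resp.headers["Cache-Control"] = "public, max-age=60, stale-while-revalidate=3600"
    # Fastly-style surrogate header: the CDN may cache far longer than browsers.
    resp.headers["Surrogate-Control"] = "max-age=86400"
    return resp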

At the same time, the ratio between our hosting costs and the non-hosting software we use, like GitHub, Looker, and Slack, continues to increase, as it’s a function of engineering team size. To put that in perspective, per engineer, Unsplash supports more users than Facebook did at the equivalent point in time.

Hosting vs Software costs for the last 18 months

Hopefully getting a behind-the-scenes look at what it costs to run a site like Unsplash will help you with your own business, or at least give you a better understanding of what’s involved.

If you’re in a position to share your company’s costs, we’d love to see them.
https://medium.com/unsplash/what-doe...9-f499620a14d0





Portland Is Again Blazing Trails for Open Internet Access
Susan Crawford

"Net neutrality" still gets people mad. Millions have the vague sense that the high prices, frustration over sheer unavailability, awful customer service, and feeling of helplessness associated with internet access in America would be fixed if only net neutrality were the law of the land. As I've written here in the past, that's not exactly true: Without classifying high-speed internet access as a utility and taking meaningful policy steps to ensure publicly overseen, open, reasonably priced, last-mile fiber is in place everywhere, we'll be stuck with the service we’ve got. A rule guaranteeing net neutrality–which would cover only how network providers treat content going over their lines–won’t solve the larger, structural issues of noncompetitive, high-priced access.

To keep moving toward better policy, though, we'll need to keep internet access on the radar screen leading up to the 2020 election. At the moment, “net neutrality” is our best vehicle for that, as narrow a solution as it is. So it’s good news that, last month, the House passed net neutrality protections and advocates trumpeted their delight. To be sure, the status quo continues to prevail, with Senate Majority Leader Mitch McConnell (R-Kentucky) asserting that the House bill would be "dead on arrival" in the Senate. And although some Democratic senators have suggested that net neutrality provisions could be attached to a House spending bill, it's more likely that the current House-Senate stalemate will endure—which is a good sign for 2020. Enough voters care about “net neutrality” that a clear line between the Democrats supporting it and the Republicans getting in the way is helpful to the overall Democratic cause.

While we're waiting, here's some history: The tussle over "net neutrality" started 20 years ago in Portland, Oregon, home of drizzling rain and ample coffee. Today, Portland and its region are poised to be Ground Zero for resolving the real issues behind public concern over “net neutrality”—the stagnant, uncompetitive, hopelessly outclassed state of internet access in America. Portland is taking seriously the idea of a publicly overseen dark-fiber network over which private providers could compete to offer cheap, ubiquitous internet access. Let's hope this next journey is shorter and smoother than the last one was.

In June 1998, there were 20 or 30 local internet service providers in Portland, according to Marshall Runkel, who today is chief of staff to Portland commissioner (as City Council members are known) Chloe Eudaly. Back then, Runkel was 28 years old and a staffer for commissioner Erik Sten, the youngest person ever elected to the Portland City Council (aged 27). That month, AT&T (the long-distance provider that had emerged from the breakup of Ma Bell) announced plans to buy TCI, which held the cable franchises in Portland. The independent ISPs, who had a legal right to run their internet access businesses over the copper phone lines of telephone companies, wanted the same opportunity for cable lines. They were worried that they wouldn't be able to sell their services over cable, and "they provided an important voice in the conversation," says Runkel.

Even before June 1998, Portland had made common cause with several neighboring local governments, deciding to negotiate with cable franchisees collectively, as a region. Smart thing to do. The regional alliance formed the Mount Hood Cable Regulatory Commission, which still exists, and appointed a citizen advisory committee to make recommendations about cable franchise policy.

In order for AT&T to take over TCI’s valuable franchise agreements, the Mount Hood Cable Regulatory Commission had to agree to the transaction. This gave local government a lever with which it could demand particular behavior from AT&T. And the citizens were really worried about AT&T's potential control over data content, as well as the prospect of diminished ISP competition. So the citizen advisory committee recommended that the region require AT&T to operate on an "open access" basis, allowing any ISP to compete across its wires. That didn't fit with AT&T's plans; it wanted to offer a single ISP controlled by the cable industry, called @Home.

Runkel says this kind of open-access thinking was "in Portland's DNA." Portland had acted in the 1970s to remove a freeway and open its riverfront to pedestrian access by creating the beautiful Tom McCall Waterfront Park. It had blocked the development of a huge parking garage in the middle of downtown, preferring to create an open public square. As Runkel puts it, Portland likes the idea of "the people taking on the powerful." And the citizen advisory group in 1998 was a skilled bunch. "They really understood the internet access issue," he says.

Portland's unusual form of local government also helped. Each city department reports to one of the five commissioners (the four City Council members plus the mayor). Coordination can be a huge challenge: Essentially, there are always five mayors. But this structure also means that each commissioner has the opportunity to really dig into issues he or she cares about.

Because Sten and Runkel were into the newfangled 1998 internet ("We had this thing called a website and used this revolutionary new technology called email," chuckles Runkel), they seized the franchise transfer issue tied to the AT&T-TCI deal and worked to persuade their colleagues that AT&T should be required to allow other ISPs to use its network. The timing was good, too: The issue of internet access "was so brand new that there was no real existing coalition or conventional wisdom," Runkel recalls.

AT&T’s response to the suggestion of open access was to threaten to sue if Portland dared to require it to let others sell services over its access lines. That backfired, because Portland officials reacted poorly to being pushed around. So Portland steamed ahead, issuing the franchise to AT&T on the condition that the company charge all ISPs a reasonable amount for access to its lines. (The city had more running room because the Federal Communications Commission was caught flat-footed—it was still trying to decide how to treat internet access over cable lines as a regulatory matter.)

AT&T sued the city and lost the first round. The federal district judge who heard the case, Owen Panner, wrote in his June 1999 opinion that he viewed the AT&T access lines as an "essential facility," like a railroad bridge controlled by a private railroad company to which competing railroad operators need access in order to do business. Panner dealt easily with all of AT&T's arguments and upheld the city's assertion of power.

But the next round, and most of the rounds since then, went AT&T’s way. The Ninth US Circuit Court of Appeals said Portland had exceeded its legal authority. Then the FCC decided that traditional regulations shouldn’t apply to cable lines; in a related 2005 case (Brand X), the Supreme Court upheld the FCC. Today’s FCC chair, Ajit Pai, has returned us to that deregulatory framework adopted by the FCC in 2004. Millions of Americans are mad, including people in the Portland region, where service is dominated by Comcast (which now owns AT&T’s cable systems) and CenturyLink (successor to US West, one of the Baby Bells).

Fast forward to today. A grassroots group called Municipal Broadband PDX is agitating for construction of a publicly owned open-access fiber network across the region. The city of Portland has contributed funds for a feasibility study and Multnomah County is on board with the idea. Hillsboro, less than 20 minutes away from Portland, is already building its own network. Citizen advisory committees will play a crucial role in the Portland region when it comes to planning for publicly overseen dark fiber, just as they did 20 years ago. There are many questions that need to be answered, including how much a dark fiber network would cost to build, how long it would take to be rolled out, where the money would come from to build it, and how a reasonable, nondiscriminatory leasing regime would work. If the project goes forward, it will be one of the largest public fiber efforts in the country.

History does not repeat itself, but it rhymes, as many people think Mark Twain said. Dark fiber, leased at reasonable rates to multiple private sector ISPs, would create the abundant, competitive, cheap internet access Portland has dreamt of for decades. And having it in place is the actual answer to "net neutrality" worries: Where there is abundant access and competition, no one company can pick and choose winners and losers or charge its customers outrageous rates for second-class services. That's what's in Portland's DNA.
https://www.wired.com/story/portland...ternet-access/




GoNetspeed Expands Fiber Optic Internet Service in New Haven and Other CT Cities
Luther Turmelle

A Rochester, N.Y., company that last summer began wiring parts of New Haven, Bridgeport and Hartford with fiber optic cable to provide ultra-high-speed Internet service on Tuesday completed an expansion in the Elm City that began in January.

GoNetspeed’s expansion in New Haven added availability of the service in the city’s East Rock, Prospect Hill and Newhallville sections. The company’s initial service area in the city included the Beaver Hills, Edgewood and Dwight sections, according to company officials.

“We’re extremely excited to increase the availability of our fast, fiber-optic Internet service to more neighborhoods in New Haven,” Tom Perrone, chief operating officer of GoNetspeed, said in a statement. “We have aggressive growth plans to continue our momentum in Connecticut, with plans underway on even more neighborhoods throughout the state.”

The company’s Elm City expansion plans for the remainder of this year include adding service in the Westville neighborhood and in the area around Yale University, said Jodie Snook, a company spokeswoman. Expansion plans outside the New Haven area include parts of West Hartford and Newington and, according to Snook, part of the area around Stratford.

The expansion in New Haven alone means that GoNetspeed’s fiber optic network now runs past 12,000 homes and businesses in the city, about four times more than last summer, according to Snook. With this summer’s expansion, GoNetspeed’s service will pass 30,000 homes and businesses statewide, she said, which represents a 33 percent increase in access to the network.

Snook said the company is not revealing the number of subscribers that have signed up for its service thus far. The company currently is offering to waive its $100 installation fee for new subscribers.

GoNetspeed’s bandwidth offerings to customers start at 150 Megabits per second for $50 per month. The other two service offerings are 500 Megabits per second for $70 per month and 1 Gigabit per second for $90 per month.

The company offers a lifetime guarantee that subscribers will pay the same price for their Internet speeds for the entire time they remain a customer within a GoNetspeed service territory.

Continued expansion depends upon consumer interest: the company needs at least 1 in 10 homes in a specific area to sign up for service in order for GoNetspeed to consider expanding into a new neighborhood or community.

The company’s website includes a place where individuals interested in having GoNetspeed provide service to their community can register.

GoNetspeed is now serving 14 communities throughout Pennsylvania and Connecticut.
https://www.newstimes.com/business/a...e-13826750.php

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

May 4th, April 27th, April 20th, April 13th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black