P2P-Zone  

Old 03-09-20, 06:16 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Peer-To-Peer News - The Week In Review - September 5th, ’20

Since 2002

September 5th, 2020




Software Piracy Spreading With the Virus
Jack M. Germain


The pandemic caused by COVID-19 has a destructive reach that goes beyond that of a highly contagious and deadly illness. It is also contributing to the rapid spread of piracy -- as in spreading illegal copies of commercial software.

Software piracy involves much more than businesses and consumers using illegal copies of computer programs. What lurks within the pirated copies is often rogue code -- malware -- that can be just as deadly to computers and users' finances.

Software companies are reporting that piracy has increased 20 to 30 percent due to COVID-19 and working from home, according to Ted Miracco, CEO of compliance and licensing management firm Cylynt.

"The Software Alliance (BSA) research shows nearly 40 percent of all software used worldwide is not properly licensed, and software companies are losing nearly US$46 billion a year due to unlicensed use," he told the E-Commerce Times.

More specifically, pirated software has a five-pronged consequence that its victims discover only when caught or infected, noted Miracco:

1. Remote work environments are creating a situation where hackers can breach an online fortress to seize a company's intellectual property.

2. Unemployed workers are buying pirated software over the Internet to generate income.

3. WFH employees are making illegal copies of the software they need for their jobs.

4. The ubiquitous Internet and the wholesale move to cloud computing are not as secure as they could be.

5. Software pirates and hackers are resourceful at hiding their identities and evading anti-piracy technologies.

The practice of pirating software -- illegally using and distributing someone else's software -- has existed since the advent of commercial software. In most cases, pirating software involves the intentional bypass of software security controls, like licenses and entitlements, meant to prevent unauthorized use, according to Paul Dant, vice president for product management of security at Digital.ai. Dant is a reformed child hacker and former software pirate.

Worldwide Reach

Software piracy is so widespread that it exists in homes, schools, businesses, and government offices. Software piracy is practiced by individual PC users as well as computer professionals dealing wholesale in stolen applications, according to BSA.

The Software Alliance, headquartered in Washington, D.C. with operations in more than 30 countries, is an international organization representing the leading software developers and a foremost advocate for the global software industry before governments and in the international marketplace.

It generally issues a Global Software Report every two years. The last such report was published in 2018. That report found the use of unlicensed software, while down slightly over the previous two years, was still widespread. Unlicensed software is still used around the globe at alarming rates, accounting for 37 percent of software installed on personal computers -- a drop of only two percentage points from 2016.

CIOs reported unlicensed software was increasingly risky and expensive. Malware from unlicensed software cost companies worldwide nearly $359 billion a year. CIOs disclosed that avoiding data hacks and other security threats from malware was the number one reason for ensuring their networks were fully licensed.

"Software piracy and cyberattacks continue to escalate, and so far the government has done little to protect its own programs, let alone the private sector," Miracco said. "Software companies need to take action and arm themselves with the best technological antipiracy solutions available to remain competitive and protect their assets."

Software Piracy Hotspots

China, whose industrial output now exceeds that of the U.S., and whose policies encourage the theft of foreign technology and information, remains the world's principal IP infringer. Other leading offenders include India and Russia, according to Miracco.

Revenera (formerly Flexera Compliance) helps companies find and mitigate security and license compliance issues, according to its website. A report it published ranks the world's top 20 license misuse and policy hot spots, shown in the graph below.

[Graph: Top 20 countries using pirated software as of Q2 2019]

Some compliance companies specialize in helping enterprise software users voluntarily comply with commercial software licensing requirements. Other firms seek out illegal software users. In recent years, BSA and other organizations have taken uncooperative offenders to court to make them pay up.

Globally, 37 percent of business users are not paying for software, making it a $46.3 billion problem. But 83 percent of these unlicensed users in mature markets are legally inclined and willing to pay for software, according to Revenera.

The company also claims that the commercial value of unlicensed software in North America and Western Europe was $19 billion. The rest of the world totaled $27.3 billion last year.

What Drives Piracy?

The number one reason for software piracy is the cost of software licenses, according to Cylynt's Miracco, followed by not seeing a reason to pay for something that is available free or at a cheaper price.

"In developing countries such as China, where the time and cost of developing high technology software from scratch is a barrier to leapfrogging the technology gap, the government encourages the theft of software," he said. "This is done to reach its goal of Made in China 2025 to make China the global leader in high-tech manufacturing by 2025."

In addition to deliberate software piracy, significant revenue is lost by software companies through unintentional misuse of licenses. Especially in today's WFH environment, employees are sharing licenses and/or downloading cheaper, illegal software not provided by their employers on their home computers, Miracco noted.

Consider this scenario as a check of your own potentially illegitimate software use, suggested a Cylynt representative. It helps illustrate the path software users follow -- sometimes unknowingly -- into piracy.

You download software to help with a project.

Did the software come from the company or a certified partner? Or, did it come from what seemed like a legitimate free download site?

If this is the case, did the original software manufacturer put its software on the site or give permission for it to be freely downloaded?

If not, you could be in violation of the software owner's copyright. Or worse. It could be an unlicensed, pirated copy of the software full of malware about to set off a chain reaction within your company's IT network.


Part of the Problem or Hapless Victim?

Given the above example, are software "borrowers" complicit or innocent of piracy? Software users caught in the above situation become both, in Miracco's view.

Deliberate pirates, especially hackers from China, are encouraged by the government to steal software. In other cases, smaller companies that cannot afford to pay for expensive software buy illegal copies and provide them to their employees, who use whatever tools they are given in order to do their jobs, he reasoned.

"Sometimes, the use could be inadvertent. A WFH employee desperately needs a vital software tool and pulls it off the web without realizing it is a hacked or illegal copy," he said.

The Piracy Scheme

Software attackers reverse engineer the target software. They identify the areas of code that handle the security controls. Then they simply modify that code to bypass or disable those controls, according to Digital.ai's Dant.

"In other words, if I have your software, I can understand how it works and modify it to run completely under my control to include communication with your backend application servers. Without the appropriate protection in place, these attacks are trivial to carry out for an experienced software pirate," he told the E-Commerce Times.

Remember Dant's background as a reformed child hacker and former software pirate. He says this with great authority.

"We continue to struggle with software piracy today because the same inherent software exposures I utilized in the 80s and 90s still exist in plain sight," he asserted.

"Particularly in the age of mobile apps and IoT devices, the stakes go well beyond financial loss due to software piracy. If an application is compromised, we are now contending with everything from massive data exfiltration to degraded operations in healthcare facilities to threats against our privacy, safety, and health," he said.

Is Piracy a Problem Without a Workable Solution?

Absolutely not, retorts Miracco. Software developers who have adopted antipiracy and license compliance software and have built robust programs are satisfied with the results.

Some companies have opted to develop their own in-house programs. However, most have found that partnering with a company that specializes in antipiracy technology is less resource intensive and yields more, and higher quality, results.

"Some piracy will always exist, of course. For companies using antipiracy technology, the losses have declined sharply," he said.

Dant has a different approach to solving the problem. Software makers must make their software more difficult to reverse engineer. They need to enable their software to detect tampering and prevent further execution in a tampered state.

"While rarely mentioned in media coverage, it is those distinct exposures that provide an attacker with an initial advantage for researching and formulating attacks surreptitiously and anonymously. But, keep in mind that developers are not meant to address these concerns," he added.

There's no coding trickery to fix this, Dant cautions. Protection against this type of attack relies upon establishing continuous integration and delivery pipelines that instrument protection before release, transparent to developers, and without any disruption to release flows.
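
As a minimal sketch of the tamper-detection idea Dant describes, assume a build step that records a hash of the code it ships and a run-time check that refuses to continue on a mismatch. Real anti-tamper tooling is, as he notes, instrumented by the pipeline and far harder to locate and strip than this stand-alone demo; everything below is illustrative.

import hashlib

def fingerprint(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

# "Build time": the pipeline records the hash of the code it ships.
# The license-check snippet here is an invented placeholder.
shipped_code = b"def check_license(): return is_valid_key(read_key())"
expected = fingerprint(shipped_code)

def run_if_untampered(code: bytes) -> None:
    # "Run time": refuse to keep executing in a tampered state.
    if fingerprint(code) != expected:
        raise SystemExit("Integrity check failed: refusing to run.")
    print("Code verified; continuing.")

run_if_untampered(shipped_code)                                   # passes
patched = shipped_code.replace(b"is_valid_key(read_key())", b"True")
run_if_untampered(patched)                                        # aborts

A real scheme would hide many such checks throughout the binary and exclude the stored digest itself from the hashed region; a single obvious check like this one is exactly what a pirate patches out first.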
An Apt Solution

Nothing is ever completely secured, especially software, Dant offered. But if software companies focus on software protection that frustrates and deters the types of attacks that enable piracy (and beyond), that effort can effectively eliminate a substantial subset of potential attackers due to their lack of necessary technical skills and motivation.

Hackers' and pirates' motivations vary wildly. But they are often financial in nature. The better protected your software, the more likely an attacker will choose to move on and find a less protected application that requires fewer resources to attack, Dant suggested.

"Severely diminishing the attacker's return on investment is an effective risk mitigation strategy that can reduce the occurrence of piracy and other attacks against your software," he concluded.
https://www.ecommercetimes.com/story/86826.html





EU Campaign Claims Credit for 12 Per Cent Drop in Ads on Pirate Sites
Holly Brockwell

You wouldn't steal a handbag.

You wouldn't steal a car.

So why would you put ads on a piracy site and profit from internet thievery?

That was essentially the message of the EU's campaign for brands to stop advertising on pirate sites, and a new report from the European Commission suggests it was responsible for a 12 per cent drop in ads.

Of course, the campaign wasn't necessarily the reason: we'd wager lots of sites have seen a reduction in advertising in the last year or so, especially during the Hard Times of 2020.

The report also found that the share of ads on pirate sites coming from brands has actually gone up: previously, 38 per cent of ads found on copyright-infringing sites were from brands; now it's 52 per cent. And one of the biggest increases was in the UK.

There's no indication of how much actual money a 12 per cent drop equates to, but there'll apparently be more data on that later.

The campaign was launched back in 2016, when we thought some celebrities dying was as bad as things could possibly get. Several tech companies got on board, including Google, to help the European Commission with its mission to "deprive these websites and mobile applications of the revenue flows that make their activities profitable."

But ads aren't the only way pirate and torrenting sites make money, especially now. Lots of them have expanded into mining crypto, soliciting donations, and in some cases, shady stuff. So maybe like everyone else, pirates have just realised online advertising isn't a good way to make a living anymore.
https://www.gizmodo.co.uk/2020/08/eu...-pirate-sites/





Tackling Video Piracy Head-On
Ian Munford

We are clearly in a "new" golden age of TV. Audiences around the world have never had so many viewing options available. This has led to a creative surge in new groundbreaking storytelling and entertainment as both broadcasters and digital giants try to maintain the loyalty of their viewers. This is a double-edged sword however, because as the production of great content increases, so do the opportunities for video pirates. Akamai has, perhaps unsurprisingly, identified a dramatic rise in illegal pirate services that offer access to TV and movies and a corresponding rise in viewers who watch their content. While piracy has been prevalent for many years, improvements in digital technology and the lack of adequate global rights protection mean piracy is now a scaled and often lucrative business. Any industry would struggle to survive with "product shrinkage" caused by video pirates, which accordingly has the potential to impact the long-term viability of many businesses in the media value chain.

In-depth analysis in the "Inside the World of Video Pirates" white paper reveals that those exploiting the industry generate more than €941.7 million per annum in the European Union alone across 14 million households, as cited by the European Union Intellectual Property Office (EUIPO). Although revenue losses often catch the headlines, the TV and film industries also support millions of jobs, from set designers and musicians through to carpenters and technicians -- and piracy is putting these at risk. A report on the impact of digital piracy estimated that in 2019 between 230,000 and 560,000 jobs were lost in the United States alone as a direct result of pirating activity.

Moreover, we are beginning to see signs that piracy is impacting licensing -- the lifeblood of the creative industry, and arguably a more damaging strategic issue. Put simply, why would potential distributors pay significant sums of money for rights when the same content is readily found for free through pirate sites? Oscar-nominated producer Jason Blum described how piracy is having a direct impact on funds being made available for innovative, risky TV and film. He argues that the numbers are just becoming too unsustainable, and companies will eventually need to cut back their risk.

So we need to ask who is doing this, can we stop them, and how?

The pirate's profile

Whilst real insight is understandably difficult to come by, we do know that there is a complex array of pirate groups and subgroups, each with their own drivers and levels of sophistication. For example, release groups see themselves as altruistic revolutionaries. They are technically competent and prize early release assets but are not necessarily motivated by financial gain. Site operators certainly make money out of the process and often run highly scaled, very sophisticated businesses. Some site operators have passed themselves off as entrepreneurial chancers, but many have links into organised crime. As a contrast, we also see amateur pirates who are indifferent to the fact that piracy is illegal and simply stream a live sports match over social media platforms using their phone.

The presence of these different groups makes fighting the problem complex and often frustrating, and the term "whack-a-mole" is often used to describe the approach.

The pirate's methods

As with most forms of cybercrime, if there is a weakness, the pirates will find and exploit it. With so many organisations and individuals involved in the production and delivery workflow, the structure of the industry itself presents a smorgasbord of opportunities for piracy to take place. Documented methods for video pirates include data centre breaches resulting in the theft of video assets, employee ID theft, providing access to video content through various production and edit systems, ripping content from legitimate sources (e.g., iTunes), and the tried and tested cinema filming systems.

One of the fastest-growing forms of piracy is the capture and redistribution of TV channels or live events. Popular methods include intercepting decrypted video using HDCP strippers, using stolen legitimate viewer details, or simply recording screens using a mobile device.

The pirate's kryptonite

With piracy so prevalent, the pertinent question is "can it be stopped?" Unfortunately, the answer is "not entirely." As long as great content is being created, there will always be pirates looking to exploit it. However, steps can be taken to mitigate its prevalence and impact.

We can obviously try to reduce piracy demand by continuing to educate viewers about the impact on livelihoods whilst at the same time improving access to legitimate sources. But in order to make a real impact, we must also interrupt its supply.

This can only be achieved through improved data and insight, renewed emphasis by regulators and legislators around the world to prosecute pirate gangs, and companies across the value chain reviewing and mitigating any technical vulnerabilities. The era of allowing content to be unprotected is long gone. What that means in practice, however, is taking a strategic review of operations and identifying weak links in the technical value chain from production to distribution, and applying appropriate measures.

Video piracy is a complex, nuanced subject -- and unless the industry comes together to tackle it head-on, it has the potential to threaten its long-term viability. The good news is that the industry is starting to mobilise. Research into the subject is becoming more considered, tougher legislation is starting to appear, and technology vendors are combining their capabilities to maximise potential. Finally, we are seeing signs that rights owners are insisting on minimum standards of content protection across the technical workflow. Today, these are isolated instances or "suggestions," but moving forward we see these becoming a necessary function of doing business.

With these initiatives in place, we can minimise the issue so that financial losses are reduced, job opportunities are protected, and licensing can continue to thrive in a global marketplace.
https://blogs.akamai.com/2020/09/tac...y-head-on.html





VidAngel Settles Studios’ Copyright Suit for $9.9 Million
Gene Maddaus

VidAngel, the Utah-based streaming service that filters out offensive content for faith-and-family audiences, has agreed to settle its long-running copyright dispute with several major studios for $9.9 million.

On Friday, a Utah bankruptcy court approved the deal, which allows VidAngel to emerge from bankruptcy and continue to operate.

The company appeared to be doomed in July 2019, when a Los Angeles jury ordered it to pay $62.4 million to Disney, Lucasfilm, Warner Bros. and Fox for infringing on hundreds of titles. As recently as a year ago, the studios argued that VidAngel would have no choice but to liquidate.

The deal brings an end to the four-year legal battle over “filtering.” Under the agreement, the company will drop its appeal of the jury’s verdict.

In an interview, VidAngel CEO Neal Harmon called the agreement “bittersweet,” because he had hoped for vindication from the appeals court.

“Sometimes it’s better to do what’s right than to be right,” he said. “We just finally decided it made too much business sense. It’s not worth taking the brain damage of all the litigation.”

VidAngel launched in 2015, allowing viewers to watch major Hollywood releases while skipping past sex, violence, foul language or other objectionable material. The service did not have a license from the content owners — instead, it ripped copies of DVD releases and made them available to customers for a rental fee as low as $1.

The studios filed suit in 2016, arguing that VidAngel was pirating content and engaged in unlawful competition with authorized streaming services. VidAngel countered that its conduct was permitted under the 2005 Family Movie Act.

A federal judge was unconvinced and ordered VidAngel to shut down, though the service later relaunched with a model focused on filtering content on Netflix and Amazon Prime. That model has not faced legal challenges.

Under the agreement approved on Friday, VidAngel will agree not to stream any of the studios’ content. The company will also pay off the $9.9 million settlement amount in quarterly installments over 14 years.

The deal also allows VidAngel to pay off the award early for a reduced sum — $7.8 million — if it abides by the terms of the agreement for three years.
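
For scale, here is a back-of-envelope sketch of those payment terms, assuming equal quarterly installments (the article does not specify the actual schedule):

# Rough arithmetic on the settlement figures reported above.
# An equal-installment schedule is an assumption for illustration.
total = 9_900_000
quarters = 14 * 4                      # 14 years of quarterly payments
print(f"${total / quarters:,.0f} per quarter")              # ~$176,786

early_payoff = 7_800_000               # reduced sum after 3 compliant years
print(f"${total - early_payoff:,} saved by paying early")   # $2,100,000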

Harmon said the service now has several hundred thousand subscribers under its Netflix filtering model.

“We offer filtering for all the studios except the plaintiffs,” he said. “We don’t offer it under the technology when we were sued… Our business works totally differently than it did then.”
https://variety.com/2020/biz/news/vi...os-1234760033/





How California’s Assembly Killed The Effort to Expand Broadband for All Californians
Ernesto Falcon and Hayley Tsukayama

California is facing a broadband access crisis, as parents are relying more on the Internet every day trying to keep their jobs in the midst of the pandemic while remotely educating their kids. The people of California need help, and the state should move forward now to begin the work needed to finally close the digital divide. Yet with just hours left in this year’s legislative session, the California Assembly refused to hear SB 1130, or any deal, to expand broadband access—a refusal that came out of the blue, without any explanation to the more than 50 groups that supported this bill. And that kind of blockade is only possible at the direction of California’s Speaker of the Assembly, Speaker Anthony Rendon.

A deal to expand broadband would have provided more than $100 million a year to secure access to high-speed Internet for families, first responders, and seniors across the state. Senator Lena Gonzalez built a broad coalition of support for this bill, and had the support of the California Senate and Governor Gavin Newsom.

As Sen. Gonzalez said in a press release on the bill, “During this crisis, children are sitting outside Taco Bell so they can access the Internet to do their homework, but the Assembly chose to kill SB 1130, the only viable solution in the state legislature to help close the digital divide and provide reliable broadband infrastructure for California students, parents, educators, and first responders in our communities.”

Yet the Assembly insisted on poison pill amendments that support big industry instead of California residents and families. Despite your hundreds of phone calls and letters of support for this bill, the Assembly failed to do what’s right by the people of California this session.

We won’t stop fighting. EFF was proud to co-sponsor this bill with Common Sense Media, and will continue to explore all options to get the state to address this problem in the coming months and next session. Why? Because we, too, believe that every student should have an Internet connection at least as good as the Taco Bell down the street.

Playing Politics With Necessities

SB 1130 was in a strong position heading into the Assembly. The California Senate on June 26, 2020, voted 30-9 to pass the bill, giving its stamp of approval to update the California Advanced Services Fund (CASF) to expand its eligibility to all Californians lacking high-speed access. The bill paved the way for state-financed networks that would have been up to handling Internet traffic for decades to come, and would have been able to deliver future speeds of 100 Mbps for download and upload without more state money.

The pandemic has exposed how badly a private-only approach to broadband has failed us all. The Assembly failed us, too.

Under the current law, only half of Californians lacking high-speed access are eligible for these funds, which also only require ISPs to build basic Internet access at just 10 Mbps download and 1 Mbps upload. This is effectively a waste of tax money today because such service does not even enable remote work and remote education. Recognizing this, Senate leadership worked to address concerns with the bill, and struck a nuanced deal to:

• Stabilize and expand California’s Internet Infrastructure program, and allow the state to spend over $500 million on broadband infrastructure as quickly as possible with revenues collected over the years
• Enable local governments to bond finance $1 billion with state support to secure long-term low interest rates to directly support school districts
• Build broadband networks at a minimum of 100 Mbps download, with an emphasis on scalability to ensure the state does not have to finance new construction later
• Direct support towards low-income neighborhoods that lack broadband access
• Expand eligibility for state support to ensure every rural Californian receives help

Yet the Assembly proposed amendments that would have weakened the bill and given unfair favors to big ISPs, which oppose letting communities build their own broadband networks. After Assembly leadership repeatedly stalled attempts at negotiation, refused to consider amendments, and used the delays as an excuse to hide behind procedural minutiae, the bill was shelved at their direction on August 30, prompting our call for them to act before the session ended.

Assembly leadership and Speaker Rendon chose the path of inaction, confusion, and division—instead of doing the work critical to serve Californians who, while sheltering in place, desperately need these infrastructure improvements for school and work.

Why We Can’t Let Up

We will keep fighting. We got so close to expanding broadband access to all Californians—and that’s why the resistance was so tenacious. The industry knows their regional monopolies are in jeopardy if someone else builds vastly superior and cheaper fiber to the home networks. Support is building from local governments across the state of California, which are ready to debt-finance their own fiber if they can receive a small amount of help from the state.

Californians see this for what it is: a willful failure of leadership, at the expense of schoolchildren, workers, those in need of telehealth services, and first responders.

By not acting now, the Assembly chose to leave millions of Californians behind—especially in rural areas and in communities of color that big ISPs have refused to serve. The pandemic has exposed how badly a private-only approach to broadband has failed us all. The Assembly failed us, too.

We’re thankful the Senate, the governor, and supporters like you stand ready to address the critical issue of Californians’ broadband needs. California must not wait to start addressing this problem. EFF will continue exploring all options to close the digital divide, whether that happens in a special California legislative session, or in the next session.
https://www.eff.org/deeplinks/2020/0...l-californians





Ajit Pai Touted False Broadband Data Despite Clear Signs it wasn’t Accurate

“The FCC should fine itself”: Pai relied on ISP’s impossible deployment claims.
Jon Brodkin

Federal Communications Commission Chairman Ajit Pai touted inaccurate broadband-availability data in order to claim that his deregulatory agenda sped up deployment despite clear warning signs that the FCC was relying on false information.

Pai claimed in February 2019 that the number of Americans lacking access to fixed broadband at the FCC benchmark speed of 25Mbps downstream and 3Mbps upstream dropped from 26.1 million people at the end of 2016 to 19.4 million at the end of 2017, and he attributed the improvement to the FCC "removing barriers to infrastructure investment." The numbers were included in a draft version of the FCC's congressionally mandated annual broadband assessment, and Pai asked fellow commissioners to approve the report that concluded the broadband industry was doing enough to expand access.

But consumer-advocacy group Free Press subsequently pointed out that the numbers were skewed by an ISP called BarrierFree suddenly "claim[ing] deployment of fiber-to-the-home and fixed wireless services (each at downstream/upstream speeds of 940mbps/880mbps) to census blocks containing nearly 62 million persons." This is an implausible assertion and would have meant BarrierFree went from serving zero people to nearly 20 percent of the US population in just six months. BarrierFree admitted the error when contacted by Ars at the time, saying that "a portion of the submission was parsed incorrectly in the upload process."

Pai corrected the data to acknowledge that 21.3 million Americans still lacked broadband but didn't change his conclusion that broadband was "being deployed to all Americans in a reasonable and timely fashion." (Even that 21.3-million figure was likely an undercount because the FCC counts an entire census block as served even if only one home in the census block can get service.)
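
To see how block-level counting inflates coverage (or, equivalently, undercounts the unserved), consider a toy calculation; the figures below are invented, and only the counting rule itself comes from the article:

# Toy illustration of the census-block counting rule described above.
blocks = [
    # (households_in_block, households_actually_served)
    (400, 1),    # one served home marks the whole block "served"
    (250, 0),    # no service at all
    (300, 300),  # genuinely well served
]

# FCC block-level rule: any service in a block counts every household.
block_level = sum(total for total, served in blocks if served > 0)
# A household-level count credits only homes that can actually get service.
household_level = sum(served for _, served in blocks)

print(f"Block-level 'served' households:   {block_level}")      # 700
print(f"Household-level served households: {household_level}")  # 301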

This week, the FCC released more details on BarrierFree's apparent history of violating rules requiring ISPs to submit "Form 477" broadband-deployment data every six months, and it shows that numerous warning signs were spotted by FCC staff long before Pai touted the inaccurate data. The FCC on Tuesday issued a Notice of Apparent Liability that proposed a $163,912 fine for BarrierFree, kicking off a process that gives BarrierFree a chance to respond to the allegations and fight the proposed penalty.

Although the FCC is trying to fine BarrierFree for submitting inaccurate data, the commission is not penalizing the ISP for failing to submit over 10 years' worth of required Form 477 reports.

"The Pai FCC slept on BarrierFree's repeated violation of FCC rules," Free Press Research Director Derek Turner told Ars, calling the FCC's attempt to downplay its own role in spreading inaccurate data "shameful."

FCC staff flagged “inaccurate” data in 2018

BarrierFree began serving customers in Suffolk County, New York, in May 2004 but did not submit any of the required Form 477 filings for more than a decade because, it later told the FCC, it thought the filings were voluntary unless an ISP was applying for government grants, the FCC said in the Notice of Apparent Liability.

FCC staff emailed BarrierFree in November 2015, saying that the ISP's attempt to finally submit a Form 477 report was incomplete and "further action is needed." FCC staff contacted BarrierFree again in January 2016, saying that "For filings that are valid and remain un-submitted, please keep in mind that your company may be referred to the Enforcement Bureau for non-compliance."

BarrierFree finally submitted a complete Form 477 filing in March 2018, reporting widespread deployment in Northeast and Mid-Atlantic states despite only serving customers in one part of New York. The filing set off alarm bells among FCC staff for several reasons, such as the ISP claiming that it had more residential broadband connections in one Long Island census tract than the number of actual household units in the tract.

Here's what happened next, according to the FCC:

“On June 5, 2018, [FCC] staff notified BarrierFree of "certain items in your filing which are unusual and potentially inaccurate, and corrections may be necessary." On July 31, 2018, staff repeated that admonition.

On July 11, 2018, Commission staff issued a Public Notice reminding Form 477 filers that the next filing would be due no later than September 4, 2018. On August 29, 2018, staff emailed BarrierFree that there was a "FILING DUE DATE APPROACHING." On September 18, 2018, staff again informed BarrierFree that "FILING DUE DATE MISSED... POTENTIAL ENFORCEMENT ACTION: Please note that the Commission tracks filers who consistently file Form 477 after the deadline. Failure to file FCC Form 477 in a timely manner may result in fines and penalties." And on October 15, 2018, staff reiterated that admonition.”

BarrierFree did not make the required September 2018 filing, the FCC said. FCC staff sent another warning to BarrierFree on February 19, 2019 about the upcoming March 2019 deadline. Yet on the very same day, the FCC chairman's office issued a press release touting the draft report that included deployment data inflated by BarrierFree's incorrect March 2018 filing. "This report shows that our approach is working," Pai said in the press release.

Unclear why Pai touted false data

On March 5, 2019, after digging through public Form 477 reports, Turner notified the FCC about Free Press' findings, which ultimately led to Pai admitting the mistake and the FCC launching an investigation into BarrierFree.

It's not clear why Pai circulated a draft broadband deployment report in February 2019 that included BarrierFree's inaccurate data, given that FCC staff repeatedly warned BarrierFree that its filing was potentially inaccurate starting in June 2018. We asked Pai's office for comment and will update this article if we get a response.

In the Notice of Apparent Liability, the FCC said that "BarrierFree's incorrect deployment data had real and detrimental effect on preliminary Commission analysis in a draft broadband data report," but that "ultimately, the draft was revised to remove BarrierFree's incorrect data."

Though BarrierFree missed filing deadlines in September 2018 and March 2019, the ISP filed a revision to its March 2018 filing in March 2019 and submitted new filings in September 2019 and March 2020. All of these filings included inaccurate claims that BarrierFree served more households than actually existed in the Long Island census tract, the FCC said.

BarrierFree's filings listed the wrong census tract, the FCC said. "To make matters worse, BarrierFree's FCC Form 477 subscriber data make no geographic sense. Although BarrierFree claims its subscribers are on Fire Island, each of its four filings claim its subscribers are in Census Tract A, which is located not on Fire Island, but on the north shore of Long Island, New York abutting the Long Island Sound," the FCC said.

BarrierFree's filings would still have been wrong even if it had listed the correct census tract, the FCC said:

“Although Census Tract B has somewhat more housing units than Census Tract A, BarrierFree's reported subscription numbers still significantly exceed the number of housing units in Census Tract B, by approximately 15 percent in its March 2018 filing and revised filing, and by more than 8 times in its September 2019 and March 2020 filings.”

BarrierFree's explanation for claiming to offer service in states outside New York was "unpersuasive," the FCC said:

“BarrierFree claims its March 2018 deployment data are accurate because it has access to lines in the seven states and Washington, DC due to a business relationship with Verizon. However, the [FCC staff] investigation has uncovered no evidence that BarrierFree owns or leases any lines for the delivery of terrestrial wireless broadband service other than for the purpose of serving a portion of the Fire Island section of Suffolk County, NY.”

The FCC concluded that "BarrierFree had no reasonable basis" to believe it could report offering broadband service "in any census block outside the limited portions of Fire Island where it currently has customers." Even now, BarrierFree has still not provided requested financial documents and other documents, the FCC said.

We contacted BarrierFree about the FCC-proposed fine and allegations today and will update this article if we get a response.

“Maximum” fine

The proposed $163,912 fine is "the statutory maximum" and covers BarrierFree's inaccurate filings from 2018 to 2020 as well as its nonresponses and inaccurate responses to FCC inquiries, the FCC said. But FCC Commissioner Jessica Rosenworcel, a Democrat on the Republican-majority commission, argued that the FCC could have issued a much bigger penalty.

"As the record demonstrates, BarrierFree failed to file with the FCC on 27 separate occasions," Rosenworcel said in a partial dissent. "But on 26 of those occasions today's action gives the company a pass. This hardly feels like the vigorous enforcement our data-gathering efforts need. Instead of cleaning up this mess, giving the company a pass on so many filings only sweeps their transgressions under the rug."

Rosenworcel said her requests to impose penalties for the other 26 filing failures were denied on the premise that "the statute of limitations has expired for these violations. But in other FCC contexts—for example, failure to file hearing aid compatibility reports or reports under the Lifeline program—this agency has treated the failure to file a required form as an ongoing violation until it is cured. Why wouldn't we do so here? Nothing in the law prevents us from adjusting our approach now to align it with how the agency addresses other filing failures."

“The FCC should fine itself”

As Turner told Ars, the FCC "didn't even notice that the company didn't file until BarrierFree made itself known by starting but not completing a filing [in 2015]. Yet the FCC let it go." By July 2018, the FCC knew BarrierFree "had likely filed bad data for its year-end 2017 filing, and asked the company to fix it, but let it go," Turner said. Moreover, the FCC issued a draft "report that used that bad data" and "Pai made boasts about it."

"The FCC didn't move to correct the [broadband-deployment] report, correct the underlying data, or investigate BarrierFree until after our [filing] and the media attention it garnered," Turner said.

Even in this week's Notice of Apparent Liability, the FCC barely touched on its role in spreading data that FCC staff had flagged as potentially inaccurate. "This is shameful," Turner concluded. "The FCC should fine itself and Commissioner Pai. They either were hapless and not communicating internally, or they were corrupt in moving and boasting about the results of data they knew were horribly wrong."
https://arstechnica.com/tech-policy/...asnt-accurate/





SpaceX Launches 12th Starlink Mission, Says Users Getting 100Mbps Downloads

Company also says it has successfully tested Inter-Satellite links.
Eric Berger

On Thursday morning a Falcon 9 rocket lifted off from Kennedy Space Center, carrying SpaceX's 12th batch of Starlink Internet satellites. The mission went nominally, with the first stage making a safe landing several minutes after the launch, and the full stack of satellites deploying shortly thereafter.

Prior to launch, webcast commentator Kate Tice, a senior program reliability engineer at SpaceX, offered several details about development of the space-based Starlink Internet service.

"We are well into our first phase of testing of our private beta program, with plans to roll out a public beta later this year," Tice said. For several months, SpaceX has been collecting names and addresses for people interested in participating in the public beta here.

Tice also revealed the first official public information about internal tests, saying that SpaceX employees have been using Starlink terminals, collecting latency statistics, and performing standard speed tests of the system.

"Initial results have been good," she said. These tests reveal "super-low latency," and download speeds greater than 100 megabits per second. This, she noted, would provide enough bandwidth to play the fastest online games and stream multiple HD movies at once.

Talking to one another

These comments are in contrast to some recent Starlink speed tests posted anonymously online, which showed download speeds ranging from 11Mbps to 60Mbps. However, it seems plausible that SpaceX is continuing to refine the software of its network and improve coverage as it adds more satellites. "Our network is very much a work in progress, and over time we will continue to add features to unlock the full potential of that network," Tice said.

SpaceX shared this information publicly for the first time as it is seeking to unlock access to the Federal Communications Commission's Rural Digital Opportunity Fund, which is set to pay up to $16 billion to Internet service providers over 10 years. To qualify, the company would need to deliver speeds of at least 25Mbps, with latencies below 100 milliseconds.
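
Expressed as a simple check, the qualification bar cited above looks like the sketch below. The thresholds come from the article; the function, its names, and the sample latency figure are invented for illustration, not taken from any FCC tooling:

# Illustrative check of the RDOF baseline cited above: at least 25 Mbps
# download with latency below 100 ms.
def meets_rdof_baseline(download_mbps: float, latency_ms: float) -> bool:
    return download_mbps >= 25 and latency_ms < 100

# SpaceX's reported >100 Mbps figure, with an assumed 40 ms latency:
print(meets_rdof_baseline(download_mbps=100, latency_ms=40))  # True
# The low end of the anonymous speed tests mentioned above:
print(meets_rdof_baseline(download_mbps=11, latency_ms=40))   # False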

During the webcast, Tice also said SpaceX has successfully tested Inter-Satellite links for the first time on Starlink satellites.

"Recently, the Starlink team completed a test of two satellites in orbit that are equipped with our Inter-Satellite links, which we call space lasers," she said. "With these space lasers the Starlink satellites were able to transfer 100s of gigabytes of data. Once these space lasers are fully deployed Starlink will be one of the fastest options available to transfer data around the world."

This technology may prove useful not only to deliver lower latencies and a continuity of service but also to entice the US military to invest in Starlink for warfighter communications.
https://arstechnica.com/science/2020...bps-downloads/





European ISPs Report Mysterious Wave of DDoS Attacks

Over the past week, multiple ISPs in Belgium, France, and the Netherlands reported DDoS attacks that targeted their DNS infrastructure.

Catalin Cimpanu

More than a dozen internet service providers (ISPs) across Europe have reported DDoS attacks that targeted their DNS infrastructure.

The list of ISPs that suffered attacks over the past week includes Belgium's EDP, France's Bouygues Télécom, FDN, K-net, SFR, and the Netherlands' Caiway, Delta, FreedomNet, Online.nl, Signet, and Tweak.nl.

Attacks lasted no longer than a day and were all eventually mitigated, but ISP services were down while the DDoS was active.

NBIP, a non-profit founded by Dutch ISPs to collectively fight DDoS attacks and government wiretapping attempts, provided ZDNet with additional insights into the past week's incidents.

"Multiple attacks were aimed towards routers and DNS infrastructure of Benelux based ISPs," a spokesperson said. "Most of [the attacks] were DNS amplification and LDAP-type of attacks."

"Some of the attacks took longer than 4 hours and hit close to 300Gbit/s in volume," NBIB said.

The DDoS attacks against European ISPs all took place starting on August 28, a day after ZDNet exposed a criminal gang engaging in DDoS extortion against financial institutions across the world, with victims like MoneyGram, YesBank India, Worldpay, PayPal, Braintree, and Venmo.

While ZDNet does not yet have any evidence that the two series of incidents are connected, the DDoS attacks against financial services subsided right as the attacks against European ISPs got underway.

Furthermore, sources tracking the extortion group told ZDNet that the same gang had also targeted several ISPs in Southeast Asia just weeks before turning to financial services.

In addition, several security experts have also told ZDNet that the massive CenturyLink outage that took place over the weekend is believed to have been the result of an initial DDoS attack. In separate reports, both Cisco and Cloudflare said the outage was caused by a bad Flowspec rule, a typical tool usually deployed when mitigating DDoS attacks.

Update on September 4: The Dutch cyber-security agency (NCSC) has published an advisory today confirming that the Dutch ISPs attacked this past week have been the subject of DDoS extortion attempts, with the attackers demanding large sums of money in Bitcoin to stop the attacks — similar to the attacks against financial institutions reported by ZDNet last week. The advisory did not attribute the attacks, so we still can't confirm it's the same group.
https://www.zdnet.com/article/europe...-ddos-attacks/





WebBundles Harmful to Content Blocking, Security Tools, and the Open Web
Peter Snyder

In a Nutshell…

Google is proposing a new standard called WebBundles. This standard allows websites to “bundle” resources together, and will make it impossible for browsers to reason about sub-resources by URL. This threatens to change the Web from a hyperlinked collection of resources (that can be audited, selectively fetched, or even replaced), to opaque all-or-nothing “blobs” (like PDFs or SWFs). Organizations, users, researchers and regulators who believe in an open, user-serving, transparent Web should oppose this standard.

While we appreciate the problems the WebBundles and related proposals aim to solve,[1] we believe there are other, better ways of achieving the same ends without compromising the open, transparent, user-first nature of the Web. One potential alternative is to use signed commitments over independently-fetched subresources. These alternatives would fill a separate post, and some have already been shared with spec authors.

The Web Is Uniquely Open, and URLs Are Why

The Web is valuable because it’s user-centric, user-controllable, user-editable. Users, with only a small amount of expertise, can see what web-resources a page includes, and decide which, if any, their browser should load; and non-expert users can take advantage of this knowledge by installing extensions or privacy protecting tools.

The user-centric nature of the Web is very different from most application and information distribution systems. Most applications are compiled collections of code and resources which are difficult-to-impossible to distinguish and reason about. This difference is important, and is part of the reason there are many privacy-protecting tools for the Web, but very few for “binary” application systems.

At root, what makes the Web different, more open, more user-centric than other application systems, is the URL. Because URLs (generally) point to one thing[2], researchers and activists can measure, analyze and reason about those URLs in advance; other users can then use this information to make decisions about whether, and in what way, they’d like to load the thing the URL points to. More important, experts can load https://tracker.com/code.js, determine that it’s privacy-violating, and share that information with other users so that they know not to load that code in the future.

WebBundles Make URLs Meaningless

Google has recently proposed three related standards, WebBundles, Signed HTTP Exchanges (sometimes abbreviated to SXG), and Loading. Unless otherwise noted, this post will use the single term “WebBundles” to refer to all three specs. So far, WebBundles have been pitched for use in proposed advertising systems (i.e., TURTLEDOVE, SPARROW) and as parts of a follow-up to Google’s AMP system, although I suspect this is just the tip of the iceberg.

At a high level, WebBundles are a way of packing resources together, so that instead of downloading each Website, image and JavaScript file independently, your browser downloads just one “bundle”, and that file includes all the information needed to load the entire page. And URLs are no longer common, global references to resources on the Web, but arbitrary indexes into the bundle.
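
A deliberately simplified model makes that shift concrete. Here the real CBOR-based bundle format is reduced to a plain mapping, and all URLs and contents are invented; only the concept comes from the proposal:

# In a bundle, a "URL" is just a key into a package-local table,
# not a global reference the rest of the world can observe.
bundle = {
    "https://example.org/index.html": b"<html>...</html>",
    "https://example.org/app.js":     b"console.log('hi');",
    "https://example.org/logo.png":   b"\x89PNG...",
}

def fetch_from_bundle(url: str) -> bytes:
    # Nothing forces this key to correspond to what anyone else
    # sees at the same URL -- that is the crux of the concern.
    return bundle[url]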

Put differently, WebBundles make Websites behave like PDFs (or Flash SWFs). A PDF includes all the images, videos, and scripts needed to render the PDF; you don’t download each item individually. This has some convenience benefits, but also makes it near-impossible to reason about an image in a PDF independently from the PDF itself. This is, for example, why there are no content-blocking tools for PDFs. PDFs are effectively all or nothing propositions, and WebBundles would turn Websites into the same.

By changing URLs from meaningful, global identifiers into arbitrary, package-relative indexes, WebBundles give advertisers and trackers enormously powerful new ways to evade privacy and security protecting web tools. The next section gives some examples why.

WebBundles Allow Sites to Evade Privacy and Security Tools

URLs in WebBundles are arbitrary references to resources in the bundle[3], and not globally shared references to resources. This will allow sites to evade privacy and security tools in several ways.

At root, the common cause of all these evasions is that WebBundles create a local namespace for resources, independent of what the rest of the world sees, and that this can cause all sorts of name confusion, undoing years of privacy-and-security-improving work by privacy activists and researchers. The sections below discuss just three ways that this confusion could be exploited by Websites with WebBundles.

Evading Privacy Tools By Randomizing URLs

Previously, if a site wanted to include (say) a fingerprinting script, it would include a <script> tag pointing to the fingerprinting script on the site. Each page on the site would refer to the same fingerprinting script by the same URL. Researchers or crowd-sourcers could then record the URL of that fingerprinting script in a list like EasyPrivacy, so that privacy-minded users could visit the site without fetching the fingerprinting script. This is how the vast majority of blocking and privacy tools work on the Web today.

WebBundles make it easy for sites to evade privacy tools by randomizing URLs for unwanted resources. What on the current Web is referred to everywhere as, say, example.org/tracker.js, could in one WebBundle be called 1.js, in the next 2.js, in the third 3.js, etc. WebBundles encourage this by removing all costs to the site; caching becomes a wash (because you’re already returning all resources to every user and caching the entire bundle), and there is no need to maintain a URL mapping (because the bundle you sent to the user already has the randomized URL).
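
A toy filter match shows why this defeats list-based blocking. The rule and URLs below are invented, loosely modeled on how EasyPrivacy-style lists key on stable URL substrings:

# A list-based blocker keys on stable URL substrings;
# per-bundle randomized names simply never match.
BLOCKLIST = ["example.org/tracker.js"]

def is_blocked(url: str) -> bool:
    return any(rule in url for rule in BLOCKLIST)

# Today: the same script is referenced by the same URL everywhere.
print(is_blocked("https://example.org/tracker.js"))  # True -> blocked

# In a bundle: the identical script can be renamed per bundle for free.
for randomized in ("https://example.org/1.js", "https://example.org/2.js"):
    print(is_blocked(randomized))  # False -> the tracker loads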

Evading Privacy Tools By Reusing URLs

Even worse, WebBundles would allow sites to evade blocking tools by making the same URL point to different things in each bundle. On the current Web, https://example.org/ad.jpg points to the same thing for everyone. It's difficult[4] for a website to return two different images from the same URL to different users. As a result, blocking tools can block ad.jpg knowing that they're blocking an advertisement for everyone; there is little risk that it's an advertisement for some users, and the company logo for others.

WebBundles again change this in a dangerous way. Example.org could build a WebBundle so that https://example.org/ad.jpg in one bundle refers to an advertisement, in another bundle refers to the site’s logo, and in a third bundle refers to something else. Not only does this make building lists for researchers difficult-to-impossible, but it gives sites a powerful new capability to poison blocking lists.

Evading Privacy Tools By Hiding Dangerous URLs

Finally, WebBundles enable an even more dangerous form of evasion. Currently, groups like uBlock Origin and Google’s Safe Browsing project build lists of URLs of harmful and dangerous web resources. Projects such as these two consider the URL either the only, or a significant, input when determining whether a resource is harmful. The universal, global nature of a URL is again what makes these lists useful.

WebBundles again enable sites to evade these protections, by allowing sites to refer to known-bad-resources by known-good-urls. It would be very difficult to get sites to treat, say, https://cdn.example.org/cryptominer.js, as if it were https://cdn.example.org/jquery.js (and vice versa) on the wider Web; in a WebBundle it’d be trivial.
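
One URL-independent countermeasure researchers could reach for is keying on content rather than location. The sketch below illustrates that general idea only; it is not a proposal from the spec discussions, and the hashes and payloads are placeholders:

# Block by what a resource *is*, not what its URL claims, since inside
# a bundle a trusted-looking URL can point at arbitrary bytes.
import hashlib

KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"/* cryptominer payload */").hexdigest(),
}

def should_block(resource_bytes: bytes) -> bool:
    return hashlib.sha256(resource_bytes).hexdigest() in KNOWN_BAD_SHA256

# The same bytes are caught even when served as "jquery.js" in a bundle.
print(should_block(b"/* cryptominer payload */"))  # True
print(should_block(b"/* actual jquery */"))        # False

The trade-off is that content hashing is far more expensive to curate and trivially defeated by per-bundle byte-level mutations, which is part of why today's tools lean on URLs in the first place.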

WebBundles Make Privacy Violations that are Currently Difficult, Easy

The designers and advocates of the WebBundle specs argue that none of this is new, that all of the above ways of circumventing privacy protections are already possible. This is technically true, but misses the big picture by ignoring economics. WebBundles make circumvention techniques that are currently expensive, fragile and difficult, instead cheap or even free.

For example, it’s true that web sites can use a large number of URLs to refer to the same file, to make things difficult for blocking tools, but in practice this is difficult for sites to do. Randomizing URLs harms caching, requires that a mapping from the random URL to the true value be stored somewhere persistently and pushed out to CDNs, and so on. It’s possible for sites to do this, but it’s difficult and costly, and so it’s uncommon.

In a similar vein, sites today can use cookies or other tracking mechanisms to get a single URL to behave differently for every user, to do the same kinds of URL-confusion attacks discussed above. But again, this is fragile (what to do for new visitors?), difficult (you need to maintain and distribute the cookie-to-resource mapping), and expensive (most web servers and hosting systems rely on aggressive caching to scale, making these evasion techniques prohibitive to sites operating below the very-large-corporation scale).

In general, WebBundles make something undesirable dramatically easier, because they make it cheaper.

Additional Concerns

This post focuses on the harm WebBundles will do to privacy and security tools. We have additional concerns with WebBundles and the related standards. We may write about them in future posts, but a partial list includes:

• SXG lacks a repudiation system: If a site accidentally includes, say, malware today, the site can solve the problem by just updating the site. If a site signs a WebBundle with SXG, there is no clear way for the signer to indicate “no longer trust this specific bundle.”
• Interactions with Manifest v3: Manifest v3 limits extensions to using URL patterns for blocking; WebBundles makes those URLs meaningless. These two features together will allow sites to completely circumvent blocking.
• Origin Confusion: Loading + SXG allow a browser to fetch content from one server but execute it with the privacy and security properties of another server. The potential for user confusion is enormous, and while we are confident that Googlers are working hard to try and address the UI/UX issues here, the risk to users is enormous and has not been reduced to a manageable form.

Conclusion

Brave works to improve privacy on the Web, in the web browser we build, the tools we build and share, and the advocacy we do in standards bodies. The concerns shared in this post are just one example of the work Brave does to try and make sure Web standards stay focused on privacy, transparency and user control.

We’ve tried to work at length with the WebBundle authors to address these concerns, with no success. We strongly encourage Google and the WebBundle group to pause development on this proposal until the privacy and security issues discussed in this post have been addressed. We also encourage others in the Web privacy and security community to engage in the conversation too, and to not implement the spec until these concerns have been resolved.

One way to join the conversation would be to comment on this issue describing these concerns with WebBundles (both the issue and this blog post were written by the same person). Other options include opening new issues against the spec, or letting your web browser vendor know how important privacy tools are to you and what a risk this proposal poses to those tools.

[1] Particularly in ensuring the integrity of the initial page and its subresources.

[2] There are exceptions, and there is nothing in the web platform that requires this, but it’s nevertheless the case that URLs are generally expected to be semi-permanent. This semi-permanent expectation is reflected across the Web platform, including aspects of cache policy, how libraries instruct people to deploy code, etc.

[3] They can also be references to resources outside the bundle, but doing so defeats the purpose of the bundle in the first place, so it’s not discussed further in this post.

[4] As discussed later, not impossible, but difficult. The point is that WebBundles take evasion methods that are currently difficult and fragile and make them easy and effortless for attackers.
https://brave.com/webbundles-harmful...-the-open-web/





U.S. Court: Mass Surveillance Program Exposed by Snowden was Illegal
Raphael Satter

Seven years after former National Security Agency contractor Edward Snowden blew the whistle on the mass surveillance of Americans’ telephone records, an appeals court has found the program was unlawful - and that the U.S. intelligence leaders who publicly defended it were not telling the truth.

In a ruling handed down on Wednesday, the U.S. Court of Appeals for the Ninth Circuit said the warrantless telephone dragnet that secretly collected millions of Americans’ telephone records violated the Foreign Intelligence Surveillance Act and may well have been unconstitutional.

Snowden, who fled to Russia in the aftermath of the 2013 disclosures and still faces U.S. espionage charges, said on Twitter that the ruling was a vindication of his decision to go public with evidence of the National Security Agency’s domestic eavesdropping operation.

“I never imagined that I would live to see our courts condemn the NSA’s activities as unlawful and in the same ruling credit me for exposing them,” Snowden said in a message posted to Twitter.

Evidence that the NSA was secretly building a vast database of U.S. telephone records - the who, the how, the when, and the where of millions of mobile calls - was the first and arguably the most explosive of the Snowden revelations published by the Guardian newspaper in 2013.

Up until that moment, top intelligence officials publicly insisted the NSA never knowingly collected information on Americans at all. After the program’s exposure, U.S. officials fell back on the argument that the spying had played a crucial role in fighting domestic extremism, citing in particular the case of four San Diego residents who were accused of providing aid to religious fanatics in Somalia.

U.S. officials insisted that the four - Basaaly Saeed Moalin, Ahmed Nasir Taalil Mohamud, Mohamed Mohamud, and Issa Doreh - were convicted in 2013 thanks to the NSA’s telephone record spying, but the Ninth Circuit ruled Wednesday that those claims were “inconsistent with the contents of the classified record.”

The ruling will not affect the convictions of Moalin and his fellow defendants; the court ruled the illegal surveillance did not taint the evidence introduced at their trial. Nevertheless, watchdog groups including the American Civil Liberties Union, which helped bring the case to appeal, welcomed the judges’ verdict on the NSA’s spy program.

“Today’s ruling is a victory for our privacy rights,” the ACLU said in a statement, adding that it “makes plain that the NSA’s bulk collection of Americans’ phone records violated the Constitution.”

Reporting by Raphael Satter; Editing by Tom Brown
https://www.reuters.com/article/us-u...-idUSKBN25T3CK





Trump Administration Plans Expanded Use of Personal Data
Ben Fox

The Trump administration announced plans Tuesday to expand the collection of personal “biometric” information by the agency in charge of immigration enforcement, raising concerns about civil liberties and data protection.

In a statement, the Department of Homeland Security said it would soon issue a formal proposal for a new regulation for expanding “the authorities and methods” for collecting biometrics, which are physical characteristics such as fingerprints used to identify individuals.

U.S. Customs and Border Protection, a component of DHS, already collects biometric data, including iris scans, from people captured trying to enter the country without legal authority.

DHS said in a written statement that the new rule would authorize new techniques, including voice and facial recognition, to verify people’s identity.

The agency did not release the proposed regulation or provide details. BuzzFeed News, which obtained a draft of the policy, reported earlier Tuesday that it included a provision for U.S. Citizenship and Immigration Services, which is also a component of DHS, to collect biometric data from non-citizens legally working and living in the U.S. or seeking to do so.

It would also require U.S. citizens who sponsor relatives coming to the country to provide biometric data, including in some cases their DNA, if needed to verify someone’s identity.

“This is a remarkable expansion of surveillance, especially the idea that immigrants could be called in at any point to give these biometrics,” said Sarah Pierce, an analyst with the Migration Policy Institute.

It typically takes several months for a new regulation to take effect after a public comment period. This measure is likely to prompt legal challenges, as have most immigration measures introduced under President Donald Trump.

Acting Deputy DHS Secretary Ken Cuccinelli characterized the new regulation in a written statement as a way to improve the verification of people’s identities and modernize operations.

“Leveraging readily available technology to verify the identity of an individual we are screening is responsible governing,” Cuccinelli said. “The collection of biometric information also guards against identity theft and thwarts fraudsters who are not who they claim to be.”

DHS is charged with carrying out the strict immigration enforcement policies that have been a hallmark of the Trump administration. But the agency also oversees Citizenship and Immigration Services, which is responsible for enabling people to legally live and work in the United States.

A lawyer with the Electronic Frontier Foundation, a privacy rights watchdog, said there is no justification for expanding biometric data collection and no clear rules for how long the information can be retained, how it can be used, or whether it can be shared with foreign governments or other agencies.

“There doesn’t really seem to be any indication that this will help with combating fraud or anything like that,” said EFF staff attorney Saira Hussain. “Rather, it’s about making it so the government can engage in dragnet surveillance of immigrant communities by being able to access some of their most unique and sensitive biometric information.”

There are also concerns about protecting the data. CBP said last year that photos of travelers and their license plates at a border crossing were compromised in a cyber attack on a government contractor.

“The more data you collect, and the more sensitive it is, the more that opens up the government to potential data breaches,” Hussain said.

Andrea Flores, deputy director of policy for the American Civil Liberties Union, said the new policy would be an invasion of privacy rights and is a part of a broader administration effort to curtail all immigration.

“They really are trying to shut down legal immigration by creating new barriers, in this case asking people to turn over their most personal information and discouraging people from coming forward and using our legal immigration pathways,” said Flores, a policy analyst at DHS during the Obama administration. “It’s saying that immigrants are suspect and not welcome here, and if you’re related to an immigrant we’re also concerned about your presence.”

_____

Associated Press writer David Koenig in Dallas contributed to this report.
https://apnews.com/b417ac4f12cb04f4805e881a91834a11

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

August 29th, August 22nd, August 15th, August 8th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing