P2P-Zone  

Old 13-03-24, 01:30 PM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,017
Default Peer-To-Peer News - The Week In Review - March 16th, ’24

Since 2002



March 16th, 2024




Tor’s New WebTunnel Bridges Mimic HTTPS Traffic to Evade Censorship
Sergiu Gatlan

The Tor Project officially introduced WebTunnel, a new bridge type specifically designed to help bypass censorship targeting the Tor network by hiding connections in plain sight.

Tor bridges are relays not listed in the public Tor directory that keep the users' connections to the network hidden from oppressive regimes. While some countries, like China and Iran, have found ways to detect and block such connections, Tor also provides obfsproxy bridges, which add an extra layer of obfuscation to fight censorship efforts.

WebTunnel, the censorship-resistant pluggable transport inspired by the HTTPT probe-resistant proxy, takes a different approach. It makes it harder to block Tor connections by ensuring that the traffic blends in with HTTPS-encrypted web traffic.

Since blocking HTTPS would also block the vast majority of connections to web servers, the WebTunnel connections will also be permitted, effectively circumventing censorship in network environments with protocol allow lists and deny-by-default policies.

"It works by wrapping the payload connection into a WebSocket-like HTTPS connection, appearing to network observers as an ordinary HTTPS (WebSocket) connection," said the Tor Project.

"So, for an onlooker without the knowledge of the hidden path, it just looks like a regular HTTP connection to a webpage server giving the impression that the user is simply browsing the web."

To use a WebTunnel bridge, you'll first have to get bridge addresses from the Tor Project's bridge distribution site and add them manually to Tor Browser for desktop through the following procedure:

1. Open Tor Browser and go to the Connection preferences window (or click "Configure Connection").
2. Click on "Add a Bridge Manually" and add the bridge addresses.
3. Close the bridge dialog and click on "Connect."
4. Note any issues or unexpected behavior while using WebTunnel.

You can also use WebTunnel with Tor Browser for Android by configuring a new bridge and entering the bridge addresses after clicking "Provide a Bridge I know."
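
For readers running the tor daemon directly rather than Tor Browser, bridges are configured in the torrc file. The snippet below is a hypothetical sketch: the binary path, address, fingerprint and URL are all placeholders, and real bridge lines must come from the Tor Project's distribution site.

[code]
# torrc sketch (placeholder values throughout)
UseBridges 1
ClientTransportPlugin webtunnel exec /usr/local/bin/webtunnel-client
Bridge webtunnel 192.0.2.3:443 <FINGERPRINT> url=https://example.com/secret-path ver=0.0.1
[/code]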

The WebTunnel pluggable transport was first introduced in December 2022 as an integration that could be tested using a Tor Browser test build.

It has also been available for deployment by bridge operators as part of a trial soft launch since June 2023, with the Tor Project asking for more testers in October in "regions or using Internet providers where the Tor network is blocked or partially blocked."

"Right now, there are 60 WebTunnel bridges hosted all over the world, and more than 700 daily active users using WebTunnel on different platforms. However, while WebTunnel works in regions like China and Russia, it does not currently work in some regions in Iran," the Tor Project said.

"Our goal is to ensure that Tor works for everyone. Amid geopolitical conflicts that put millions of people at risk, the internet has become crucial for us to communicate, to witness and share what is happening around the world, to organize, to defend human rights, and to build solidarity."
https://www.bleepingcomputer.com/new...de-censorship/





Decentralized File Sharing, Explained
Guneet Kaur

1 The importance of decentralization in file sharing

Decentralized file sharing revolutionizes data access by eliminating dependence on centralized servers and utilizing P2P technology to distribute files across a network of nodes.

Distributing and accessing data without depending on a centralized server is possible with decentralized file sharing. Rather, files are kept on a network of linked nodes, frequently through the use of peer-to-peer (P2P) technology.

To enable file sharing, each network user can contribute bandwidth and storage space. BitTorrent and the InterPlanetary File System (IPFS) are two well-known examples of decentralized file-sharing protocols.

The decentralization of file sharing has completely transformed the way users access and store digital content. In contrast to conventional centralized file-sharing systems, which store files on a single server, decentralized file-sharing uses a P2P mechanism. Dispersing files among a network of linked nodes promotes a more robust and secure system.

2 Key components of decentralized file sharing

Decentralized file sharing depends on a number of essential elements to allow for a dispersed and safe data exchange.

Firstly, P2P networks, which let users communicate directly without a centralized server, are the backbone of a decentralized file-sharing system. This fosters a robust system in which participants share files directly with one another.

Blockchain technology is essential to maintaining integrity and trust in decentralized file-sharing networks. It improves the overall security of transactions and file transfers by enabling transparent and tamper-resistant record-keeping. Smart contracts are self-executing contracts with pre-established rules that automate tasks like access control and file verification.

Furthermore, files are distributed throughout a network of nodes using decentralized storage systems, which often use protocols like BitTorrent or IPFS. This approach eliminates the need for a central server and enhances the availability and reliability of data due to its redundant nature.

Cryptographic methods also protect the integrity and privacy of data. User confidence in decentralized file-sharing systems is increased by end-to-end encryption, which guarantees that only authorized parties may view the content. Together, these elements essentially provide a safe and dispersed setting for easy file sharing via the decentralized web.

3 How does decentralized file sharing work?

Decentralized file sharing operates on P2P networks by leveraging a distributed architecture rather than relying on a central server.

Peer discovery

Participants in the network (peers) need a way to discover one another, which is accomplished by using distributed hash tables (DHTs) or decentralized protocols. Peers build a network without a central authority by keeping track of other peers with whom they are linked.

DHTs are decentralized systems that enable distributed storage and retrieval of key-value pairs across a network, while decentralized protocols enforce communication rules that enable peer-to-peer interactions without relying on a central authority or server.
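
To make the lookup idea concrete, here is a toy sketch of DHT-style routing, assuming nothing beyond the description above (real systems such as Kademlia add iterative routing, replication and churn handling):

[code]
# Toy DHT: each node stores the keys whose hash is "closest" to its own ID,
# measured by XOR distance as in Kademlia. Illustrative names throughout.
import hashlib

def node_id(name: str) -> int:
    # Derive a 160-bit identifier from a name, as many DHTs do.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

class ToyDHT:
    def __init__(self, node_names):
        self.nodes = {name: {} for name in node_names}

    def _closest(self, key: str) -> str:
        kid = node_id(key)
        return min(self.nodes, key=lambda n: node_id(n) ^ kid)

    def put(self, key: str, value: bytes):
        self.nodes[self._closest(key)][key] = value

    def get(self, key: str):
        return self.nodes[self._closest(key)].get(key)

dht = ToyDHT(["alice", "bob", "carol"])
dht.put("song.mp3", b"...chunk bytes...")
print(dht.get("song.mp3"))  # any peer can route to the same node
[/code]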

File distribution

A file is split into smaller pieces, and each piece is distributed among several network peers. This enhances availability: because the file is not stored in any single location, it remains accessible and reliable even when individual peers leave the network.

Dispersed storage

By distributing file portions over several nodes, decentralized storage systems lessen reliance on a single server. For instance, IPFS employs a content-addressed approach, in which files are recognized by their content as opposed to their physical location.
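
A minimal sketch of these two ideas together, chunking plus content addressing, follows; the hashing here is simplified relative to real IPFS content identifiers (CIDs), which add multihash and multibase encoding:

[code]
# Split a file into pieces and address each piece by the hash of its bytes.
# Any peer holding identical bytes serves the identical ID, so the address
# is independent of location. Simplified relative to real IPFS CIDs.
import hashlib

def chunk(data: bytes, size: int = 256 * 1024):
    return [data[i:i + size] for i in range(0, len(data), size)]

def content_id(piece: bytes) -> str:
    return hashlib.sha256(piece).hexdigest()

data = open(__file__, "rb").read()          # any file will do
ids = [content_id(p) for p in chunk(data)]
print(ids[0])  # request this ID from any peer; the bytes verify themselves
[/code]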

Peer interaction

Peers request and share file portions directly with one another. The coordination of file transfers no longer requires a central server, thanks to this direct connection. Every peer participates in the file distribution process by serving as both a client and a server.

Blockchain and smart contracts

Blockchain technology is incorporated into several decentralized file-sharing systems to increase security and transparency. Smart contracts are self-executing contracts with pre-established rules that can automate tasks such as access restriction and file verification and reward participants with tokens.

Often, decentralized file-sharing systems use cryptographic techniques like end-to-end encryption to provide privacy and security for the shared files. This ensures that the content can only be accessed and deciphered by authorized users.
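
As a minimal sketch of the end-to-end principle, using the third-party cryptography package (symmetric encryption for brevity; real systems typically layer key exchange on top):

[code]
# Only the peers holding the key can read the file; relaying or storing
# nodes only ever see ciphertext. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared between the peers out-of-band
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"file contents")  # what intermediaries hold
assert cipher.decrypt(ciphertext) == b"file contents"
[/code]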

4 Advantages of decentralized file sharing

The benefits of decentralized file sharing include enhanced resilience, improved privacy, scalability and censorship resistance.

By removing a single point of failure, it improves reliability and resilience. In a peer-to-peer network, where files are dispersed among several nodes and peers, the system continues to function even in the event that some nodes go down.

Also, decentralized file sharing, by its very nature, offers enhanced security and privacy. By ensuring that only authorized users can access and decode shared content, cryptographic solutions like end-to-end encryption help lower the danger of unauthorized spying or data breaches.

Better scalability can also be attained as the network expands. In decentralized networks, more users add to the network’s capacity, allowing it to accommodate more demand and traffic without requiring modifications to the centralized infrastructure.

Additionally, decentralized file sharing encourages resistance against censorship. It is harder for any organization to censor or limit access to particular files or information because there isn’t a single entity in charge of the network.

Furthermore, decentralized file sharing frequently incorporates incentive mechanisms through token economies or other reward systems to encourage users to contribute resources like bandwidth and storage, thereby creating a cooperative and self-sufficient environment.

5 Challenges and limitations of decentralized file sharing

Challenges associated with decentralized file sharing involve scalability issues, consistency concerns, user adoption complexities, security risks and regulatory uncertainties.

Firstly, as the network grows, scalability issues become more pressing. Increased participation can mean slower file retrieval times and greater bandwidth demands, which degrades the user experience.

Moreover, in decentralized systems, problems with consistency and coordination could surface. It may be difficult to maintain consistency in file versions throughout the network in the absence of a central authority, which could result in conflicts and inconsistent data.

Complicated interfaces and user acceptance present another difficulty. When compared to centralized options, decentralized file-sharing platforms frequently have a higher learning curve, which may put off consumers who are not familiar with P2P networks or blockchain technology.

Furthermore, security vulnerabilities still exist, especially in the early phases of decentralized file-sharing deployments. As these systems grow more widely used, they are targeted by different types of attacks, which makes the continuous development of strong security measures necessary.

Regulatory uncertainty is another difficulty. The adoption and long-term viability of decentralized file-sharing platforms may be impacted by the changing legal environment surrounding cryptocurrency and decentralized technology.

6 The future landscape of decentralized file sharing

The future of decentralized file sharing involves blockchain technology, P2P networks and tokenization for secure, efficient and collaborative data exchange, which challenge traditional models.

Decentralized file sharing is expected to bring about a more inclusive, secure and productive environment. Distributed ledger and blockchain technology will be essential in guaranteeing tamper-proof and transparent transactions and facilitating file sharing among users without depending on centralized intermediaries.

Decentralized protocols powering peer-to-peer networks will enable direct data transmission between users, cutting down on latency and reliance on centralized servers. Strong encryption techniques will allay privacy concerns and provide consumers with more control over their data.

Furthermore, tokenization could encourage resource sharing among users, resulting in the development of a collaborative ecosystem. Innovative file-sharing services will probably proliferate as decentralization gains pace, upending established paradigms and promoting a more robust and democratic digital environment.
https://cointelegraph.com/explained/...ring-explained





New York Disbars Infamous Copyright Troll

For whom the bell tolls; it tolls for copyright trolls.
Joe Patrice

For years, Richard Liebowitz ran a very successful operation mostly sending threatening letters to companies claiming that they had infringed upon copyrights held by his photographer clients. Under the best of circumstances it’s a niche practice area that’s… kinda shady. But Liebowitz gained a degree of infamy across a number of matters for high-profile missteps in cases that sparked the ire of federal judges. Now, finally, New York has disbarred him.

Liebowitz wasn’t alone in the copyright trolling practice. A number of entities scour the internet looking for photographs that they can claim are “unlicensed” and demanding thousands of dollars to settle the matter knowing that between statutory damages for copyright infringement and the cost of litigation, most companies will just pay it. Many times, the photo in question actually is legally licensed through an agency like Getty Images, but the plaintiff photographer has, for whatever reason, pulled the image since the license was granted.

This runs the risk that some plaintiff might do this on purpose hoping to catch some legal licenseholder unawares and bank on the target just settling to avoid bringing any lawyers into the situation. Which is why, for example, a judge in one case cited by the disbarment opinion ordered Liebowitz to “produce to the defendant records sufficient to show the royalty paid the last three times that the picture at issue was licensed, and the number of times the picture was licensed in the last five years; if the picture was never licensed, the plaintiff was to certify that fact as part of the plaintiff’s production.” In this case, Liebowitz “did not timely produce the required royalty information to the defendant” per the disbarment opinion.

Though most of the opinion describes more fundamental case management problems. From a case brought in 2017:

The respondent stated under penalty of perjury that he did not and had never made a settlement demand in this matter. In fact, the respondent had sent the defendant’s counsel an email in which the respondent proposed settling the matter for the sum of $25,000.

And another case brought in 2017:

On January 13, 2018, the respondent submitted a letter (hereinafter the January 13, 2018 letter) to the District Court, requesting an adjournment of the pretrial conference scheduled for January 19, 2018, and stating that the defendant “had yet to respond to the complaint” and that the plaintiff intended to file a motion for a default judgment. Judge Cote granted the request and ordered the motion for entry of default due on January 26, 2018.

The respondent’s statement in his January 13, 2018 letter that the defendant “had yet to respond to the complaint” was false and misleading, and the respondent knew that it was false and misleading when he made it. The January 13, 2018 letter failed to advise the court of the months-long history of communication between the parties, beginning in July 2017, as mentioned above.


From yet another matter:

The plaintiff admitted in a deposition and in other documents that the Photograph had been previously published on numerous occasions. To prevent the defendants from learning that the plaintiff did not hold a valid registration, the respondent stonewalled the defendants’ requests for documents and information. The respondent also failed to comply with an order by Magistrate Judge Debra Freeman to obtain and produce Copyright Office documents to demonstrate a valid registration. After it came to light that the Photograph was not registered, and despite the record stating otherwise, the respondent argued, without evidence, that the lack of registration was merely a mistake.

If there’s a lesson to take away from these and the many, many more examples included in the opinion, it’s that copyright trolling outfits are largely unprepared for someone to push back on their demands. Firing off demand letters, memorializing boilerplate licensing agreements, and collecting cash is a tidy business model right up until a firm has to juggle hearings and discovery requests and experts and “not committing perjury.”

But perhaps the most bizarre story involves Liebowitz missing an April 12, 2019 hearing, explaining that his grandfather had passed. When Judge Seibel directed Liebowitz under penalty of contempt to furnish evidence or documentation regarding the date of his grandfather’s death, Liebowitz shot back that the order “likely constitutes a usurpation of judicial authority or a breach of judicial decorum.”

On November 7, 2019, the respondent retained counsel to represent him in the contempt proceedings, and on November 11, 2019, the respondent sent a letter to Judge Seibel admitting that he failed to carry out his responsibilities to the District Court and to his adversary. The respondent also admitted that his grandfather died on April 9, 2019, and was buried that same day.

Just. Wow. You know, “my grandfather died this week” is something you can tell a court before a hearing and they’ll probably grant it. It’s not like you’re asking to give birth or anything. But to lie about it to the court and then keep doubling down is… a choice.

And a poor one as it turns out:

ORDERED that pursuant to 22 NYCRR 1240.13, the respondent, Richard P. Liebowitz, a suspended attorney, is disbarred, effective immediately, and his name is stricken from the roll of attorneys and counselors-at-law….

Matter of Liebowitz [New York Courts]
https://abovethelaw.com/2024/03/new-...pyright-troll/





Just Because Your Favorite Singer is Dead Doesn't Mean You Can't See Them 'Live'
Chloe Veltman

Concert experiences headlined by simulations instead of live artists have captured the public imagination. Tupac Shakur's posthumous appearance at California's Coachella Festival back in 2012 is a cultural touchpoint to this day. Audiences have been flocking to the ongoing ABBA Voyage experience in London since it debuted in May 2022. And there's been lots of chatter over the past couple of months about the forthcoming Elvis Evolution.

Scheduled to launch in November, Elvis Evolution will employ the latest in machine-learning technologies to reanimate the late King of Rock 'n' Roll. But this show — like many other so-called "hologram" stage productions featuring dead or absent musical celebrities — also makes use of many longer-standing technologies, including a magic trick that's nearly 200 years old.

Don't say 'hologram'

The first thing to clear up about these types of virtual performers is that, technically speaking, none of them are holograms.

"Holograms are actually three-dimensional still images created by laser technology, like the security icon on a credit card," said Paul Debevec, a research professor in computer graphics at the University of Southern California's Institute for Creative Technologies.

Debevec said we can blame a little sci-fi movie made back in 1977 for the word's misuse.

"If you watch the movie Star Wars, they use 'holograms' to mean a person suspended, floating in space," said Debevec, referencing a well-known moment from the original 1977 film, in which the droid R2-D2 plays a recorded video message of Princess Leia asking for help.

"The whole 'Help me, Obi-Wan Kenobi' effect, that certainly captured the popular imagination," Debevec said.

The making of AI Elvis

So if the theatrical reincarnations of Elvis and his ilk aren't holograms, what are they?

Andrew McGuinness is the CEO and founder of Layered Reality, the London-based company behind Elvis Evolution, as well as two other high-tech immersive experiences currently running in London – The War of the Worlds and The Gunpowder Plot.

According to McGuinness, the technology for putting The King back on the concert stage can essentially be broken down into two parts.

"First of all, how we will create the content," McGuinness said. "And that's where the AI comes into it."

McGuinness said his company acquired the global rights for the creation of an immersive entertainment experience based upon Elvis and his story, and as a result is feeding all sorts of material from the star's official archives — hundreds of hours of video footage, photos, music — into a computer model.

This model effectively learns in minute detail how Elvis sings, talks, dances and walks.

"So, for example, if a performance of Elvis was originally shot from the front, we will be able to show you a camera angle from behind that was never actually shot," McGuinness said.

An old trick

The second part of the process involves delivering this AI Elvis in a way that will make him seem real to live theater audiences. That's where a slew of other technologies, including an old stage trick called Pepper's Ghost, come in.

"I think audiences would be surprised, going to see these 'state of the art' displays, that what they're looking at is something from 1862," said Jim Steinmeyer, a designer of stage illusions who also writes books about magic history, including one about Pepper's Ghost.

"It was used to put a three-dimensional ghost on stage, interacting with actors," Steinmeyer said, adding that the Victorian trick originally involved an actor dressed as a ghost hidden beneath the stage. "And then an angled piece of glass at the front of the stage would let you see the actors on the stage. But it would also work to reflect the actor that was concealed."

British engineer Henry Dircks first came up with the concept.

"It should have been called Dirck's Ghost," Steinmeyer said. "And Dircks was bitter about that for many, many years."

But Steinmeyer said scientist John Henry Pepper, a P.T. Barnum-like figure who created elaborate, crowd-pleasing public lectures in London, turned Pepper's Ghost into a truly workable system.

"It was a scientific novelty, but it was a kind of fantastic visual success," Steinmeyer said. "Since then, it's been used in many different forms."

Outside of concert settings, this old illusion has shown up in all sorts of places.

Among them, the 1931 movie romance Daddy Long Legs, in which a rich old man is haunted by visions of a young orphaned woman; the 1980s arcade video game Asteroids Deluxe; and, perhaps most famously, the Haunted Mansion attraction at Disneyland, where spectral figures play the organ and waltz across the ballroom. A version of the technology has also been used in "heads-up displays" in cars.

Technology isn't the most important thing

Layered Reality's McGuinness said he's excited to bring together many old and new technologies for Elvis Evolution. But ultimately, he said, it all needs to be in service of the audience's emotional journey.

"What I dream about is for people to forget about the technology in its entirety," McGuinness said. "I want them to feel like they've really seen Elvis perform."
https://www.npr.org/1238448991





Neil Young Says His Music Is Returning to Spotify

The singer-songwriter previously had his music removed from the platform over what he deemed the spread of vaccine misinformation on the Joe Rogan Experience.
Chris Eggertsen

Neil Young is bringing his music back to Spotify more than two years after requesting its removal from the platform, the singer-songwriter announced Tuesday (March 12).

In January 2022, Young published an open letter asking Spotify to pull down his catalog, citing what he called the spread of vaccine misinformation on the wildly popular Joe Rogan Experience podcast, which was then hosted exclusively on the streaming platform. Several other artists, including Joni Mitchell, India.Arie and Young’s Crosby, Stills, Nash & Young bandmates David Crosby, Stephen Stills and Graham Nash, subsequently followed suit, though CSN/CSN&Y and Arie’s music have since been restored to the service; Mitchell’s catalog remains absent.

In a new post on his Neil Young Archives website, the legendary artist said the end of Spotify’s exclusive deal with Rogan led to his decision to restore his music to the service. “My decision comes as music services Apple and Amazon have started serving the same disinformation podcast features I had opposed at Spotify,” the post reads – a clear reference to the Joe Rogan Experience, though Young never mentions it by name.

“I cannot just leave Apple and Amazon, like I did Spotify, because my music would have very little streaming outlet to music lovers at all, so I have returned to Spotify, in sincere hopes that Spotify sound quality will improve and people will be able to hear and feel all the music as we made it,” Young continued, before shouting out Qobuz and Tidal, where his catalog also lives, as “High res” streaming options.

Young concludes his post by stating his hope that Spotify “will turn to Hi Res as the answer and serve all the music to everyone. Spotify, you can do it! Really be #1 in all ways. You have the music and listeners!!!! Start with a limited Hi res tier and build from there!”

Spotify announced plans to roll out a HiFi tier in February 2021, though those plans have yet to come to fruition. In June 2023, Bloomberg reported the streaming giant would finally launch the product later in the year, but the company declined comment when reached by Billboard – and the calendar rolled over without the tier materializing.

Young has long been an advocate of high-resolution audio, even launching his own (now-defunct) high-res audio download platform, Pono, in 2015 before shuttering it two years later.

In September, Billboard estimated that the absence of Young’s catalog on Spotify had cost him roughly $300,000 in lost recorded music and publishing royalties to that point.

At press time, Young’s music catalog had yet to be restored to Spotify, which did not immediately respond to Billboard‘s request for comment.
https://www.billboard.com/business/s...fy-1235631717/





Kanye West Says He Won’t Be Releasing Vultures 2 on Streaming Services: “Streaming is Basically Pirating”
Iesha

Over the weekend, leaked DMs between Kanye West and the YEFANATICS fan page hinted that the rapper won’t release Vultures 2 on DSPs, asserting that streaming is a form of piracy.

“I was talking with the team about how to release the next album. Like James Blake said, streaming devalues our music. We sell albums on Yeezy.com,” Kanye tweeted. “I got 20 million Instagram followers. When five percent of my followers buy an album, that’s one million albums sold. That’s 300K more than the biggest album last year.”

He added, “We sold 1 million items on Yeezy.com on Super Bowl Sunday, so we know it’s possible. How do you feel about us not streaming and only selling the album digitally.”

In a subsequent message, the individual behind the YEFANATICS fan page informed the rapper about a survey, indicating that 86% of respondents opposed releasing the next two parts of the Vultures series on Yeezy.com. The YEFANATICS representative recommended opting for streaming platforms for the albums instead.

“It would be nice for our community to support the albums. Streaming is basically pirating,” Ye replied.

He later tweeted that Vultures 2 will be available on Yeezy.com for $20, suggesting that streaming platforms negatively affect the music industry.

Vultures 2 experienced its second delay after the initially scheduled Friday release. The first installment of Vultures faced at least three delays before officially being released on February 8.
https://balleralert.com/profiles/blo...-on-streaming/





Spotify to Test Full Music Videos in Potential YouTube Faceoff

Swedish music streaming company Spotify (SPOT.N) is rolling out full-length music videos in a limited beta launch for premium subscribers, venturing into an arena that YouTube has dominated for nearly two decades.

Music videos will be available to premium users in the UK, Germany, Italy, Netherlands, Poland, Sweden, Brazil, Colombia, Philippines, Indonesia, and Kenya, in beta starting on Wednesday, the company said, as it attempts to grow its user base.

While it aims to reach 1 billion users by 2030, Spotify's new plan faces competition from Apple Music (AAPL.O) and Alphabet's (GOOGL.O) YouTube, which allows users to watch music videos for free.

Spotify's roll-out will include a "limited catalog of music videos, including hits from global artists like Ed Sheeran ... or local favorites like Aluna," it said.

In March last year, Spotify introduced "clips", under-30-second vertical videos that artists upload directly through Spotify for Artists.

The company has also expanded its offerings to include podcasts and audiobooks in a bid to attract more users.

In February, it forecast premium subscribers would reach 239 million in the current quarter, above estimates of 238.3 million.
https://www.reuters.com/technology/s...ff-2024-03-13/





MusicWatch Reports Results of 2023 Annual Music Study: Record Numbers of Music Streamers and Paid Subscribers
Russ Crupnick

MusicWatch Annual Music Study points to continued health of the US music market, a record number of paid subscribers, and increased spend on recorded music.

Over 90% of internet users are streaming music in the US.

MusicWatch today published its 22nd edition of the US Annual Music Study, the most comprehensive overview of audio and music purchasing and listening in the US. MusicWatch shared a “baker’s dozen” of facts and findings from the study.

1. Growing customer base. The US recorded music category continues to show healthy growth, with an increase of 10 million “buyers” in 2023. Buyers include those who purchase or pay for CDs, Vinyl Records, Digital Downloads or Music Streaming Subscriptions.

2. Increased spend. Spending on recorded music increased by 7 percent compared with a year ago, thanks to expenditures on Streaming Subscriptions, Vinyl and CDs.

3. Record numbers of paid subscribers. The number of paid music subscribers in the US hit a record, at 109M; 136M if SiriusXM and Amazon Prime music listeners are included. That means over half the population* are paying for an audio subscription.

4. More “juice to squeeze”. There is potential for additional subscription growth, though 7 of 10 Millennials already pay to subscribe. This places a premium on moving GenZ from free to paid services as well as converting the older, and more resistant, demographic.

5. The Walkman dream realized. The top reason for music streaming is the ability to listen to music anyplace and anywhere. Connectivity resonated as a key theme motivating subscribers.

6. Happy Birthday HipHop. In 2023 HipHop (finally) passed Classic Rock as America’s favorite genre, as measured by what we listen to, follow on social, purchase, interact with on streaming services or spend money on live events.

7. Content opportunities abound. Eighty percent of streamers regularly listen to audio categories besides music. Comedy, current events and podcasts closely compete for the #2 spot.

8. Will the vinyl revolution continue? Nearly nine of ten vinyl buyers plan to buy more or the same number of records in 2024.

9. End of the trend coming? The TikTok juggernaut continued in 2023, with 8 percent more users specifically engaging in music-based activities on the platform. Social video accounted for an increasing share of music listening time.

10. Dialing in. Music on broadcast radio, while not the powerhouse it once was, gained listeners in 2023 and continues to be the #1 in-car listening option, used by 69% of in-car listeners.

11. Pirate ships sinking. Music piracy continues to decline, with fewer overall users getting files from mobile apps, streamripping, file transfers and P2P networks. Sharing of streaming accounts is also in decline.

12. Superfan gap? One in five (about 50 million) consider themselves superfans for their favorite artists, yet only 9% of these elite fans purchased a VIP package created by artists for fans (VIP live experience, exclusive CDs, vinyl or merch). Meaning there’s opportunity to close the gap between fanship and revenue generation. The good news is that half these superfans are active spenders. They bought CDs, vinyl, download, merch or tickets to live events at stadiums or festivals within the past year.

13. Remember the future! Streamers are interested in lots of potential capabilities, from AI to gaming integration and real-time livestreams. One of the top features remains hi-resolution sound quality. One in three streamers strongly agree with all three of the following:

-Obtaining the best quality sound is important.

-They would be interested in streaming or buying more in hi-res or immersive formats.

-They would be willing to pay more for hi-res sound quality.


*The MusicWatch Annual Music Study is compiled from two surveys; the core AMS study is conducted online among 4,000 respondents, weighted and projected to represent the US internet population aged 13 and older. audiocensus is a separate study of 3,000 respondents and is the longest-running US survey of music and audio time spent listening.
https://musicwatchinc.com/blog/music...l-music-study/





From Mono to Mainstream: 20 Years of Bluetooth Audio

A journey through wireless audio history with Bluetooth SIG’s Chuck Sabin.
Billy Steele

“When you think about the history of Bluetooth, and specifically about audio, you really have to go back to the mid-to-late ’90s.”

Chuck Sabin is a Bluetooth expert. As a senior director at Bluetooth Special Interest Group (SIG), he oversees market research and planning as well as business development. He’s also leading the charge for emerging uses of Bluetooth, like Auracast broadcast audio. In other words, he’s an excellent person to speak to about how far Bluetooth has come — from the days of mono headsets solely used for voice communication to today’s devices capable of streaming lossless-quality music.

In the mid ’90s, mobile phones were starting to become a thing, and of course so were regulations about hands-free use of them in cars. Sabin previously worked in the cellular industry, and he remembers how costly and intrusive the early hands-free systems were in vehicles. Bluetooth originated from cell phone companies working together to cut the cord to headphones since using those not-yet-wireless audio accessories in the car was cumbersome. One of the first mobile phones with Bluetooth was from Ericsson in the late ’90s, although an updated model didn’t make it to consumers until 2001. That same year, the IBM ThinkPad A30 became the first laptop with Bluetooth built in. At that time, the primary intent of the short-range radio technology was for voice calls.

“You had a lot of people who ended up with these mono headsets and boom mics,” he explained. You know, the people we all probably made fun of — at least once. Most of these things were massive, and some had obnoxious blinking lights. They’re definitely a far cry from the increasingly inconspicuous wireless earbuds available now.

Bluetooth as a specification continued to evolve, with companies leveraging it for music and streaming audio. To facilitate music listening, there had to be faster communication between headphones and the connected device. Compared with voice calling, continuous streaming required Bluetooth to support higher data speeds along with reduced latency. Where Bluetooth 1.0 was call specific, version 2.0 began to achieve the speeds needed for audio streaming at over 2 Mb/s. However, Sabin says, the 2.1 specification adopted by Bluetooth SIG in 2007 was when all streaming audio capabilities were implemented in automobiles, phones, headphones, headsets and more.

Of course, it would still be a few years before wireless headphones were mainstream. In the early 2000s, headphones were still directly connected to a mobile phone or other source device. Once Bluetooth became a standard feature in all new phone models, as well as its inclusion in laptops and PCs, consumers could count on wireless connectivity being available to them. Even then, music had to be loaded onto a memory card to get it on a phone, as dedicated apps and streaming services wouldn’t be a thing until the 2010s.

“The utility of the device that you carried around with you all the time was evolving,” Sabin said. “Bluetooth was ultimately riding that continued wave of utility, by providing the opportunity to use that phone as a wireless streaming device for audio.”

Around the time wireless headphones became popular, a few companies arrived with a new proposition in 2015: true wireless earbuds. Bluetooth improvements meant reduced power requirements, enabling much smaller devices with smaller batteries that still provided the performance true wireless devices needed. Bragi made a big splash at consecutive CESs with its Dash earbuds. The ambitious product had built-in music storage, fitness tracking and touch controls, all paired with a woefully short three-hour battery life. Perhaps the company was a bit overzealous, in hindsight, but it did set the bar high, and eventually similar technologies would make it into other true wireless products.

“Companies that were building products were really starting to stretch the specification to its limits,” Sabin explained. “There was a certain amount of innovation that was happening [beyond that] on how to manage the demands of two wireless earbuds.” Bluetooth’s role, he said, was more about improving performance of the protocol as a means of inspiring advances in wireless audio devices themselves.

He was quick to point out that, for the first few years, true wireless buds accepted the Bluetooth signal to only one ear and then sent it to the other. That’s why the battery in one would always drain faster than the other. In January 2020, Bluetooth SIG announced LE Audio at CES as part of version 5.2. LE Audio delivered lower battery consumption, standardized audio transmission and the ability to transmit to multiple receivers — or multiple earbuds. LE Audio wouldn’t be completed until July 2022, but it offers a lower minimum latency of 20 to 30 milliseconds versus 100 to 200 milliseconds with Bluetooth Classic.

“All of the processing is now done back on the phone itself and then streamed independently to each of the individual earbuds,” Sabin continued. “That will continue to deliver better performance, better form factors, better battery life and so on because the processing is being done at the source level versus [on] the individual earbuds.”

The increased speed and efficiency of Bluetooth has led to improvements in overall sound quality too. Responding to market demands for better audio, Qualcomm and others have developed various codecs, like aptX, that expand what Bluetooth can do. More specifically, aptX HD provides 48kHz/24-bit audio for wireless high-resolution listening.
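
Some rough arithmetic (my illustration, not from the article) shows why codecs matter here: uncompressed stereo PCM at 48kHz/24-bit outruns what a Bluetooth Classic audio link can sustain, so codecs like aptX HD compress it first.

[code]
# Raw hi-res PCM vs. Bluetooth Classic throughput, back-of-the-envelope.
sample_rate_hz = 48_000
bit_depth = 24
channels = 2

raw_bps = sample_rate_hz * bit_depth * channels
print(f"raw PCM: {raw_bps / 1e6:.3f} Mb/s")        # 2.304 Mb/s
# aptX HD compresses roughly 4:1, landing near 576 kb/s, comfortably
# inside a real-world Bluetooth Classic audio link.
print(f"~4:1 compressed: {raw_bps / 4 / 1e3:.0f} kb/s")
[/code]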

“One of the elements that came into the specification, even on the classic side, was the ability for companies to sideload different codecs,” Sabin explained. “Companies could then market their codec to be available on phones and headphones to provide enhanced audio capabilities.”

LE Audio standardizes Bluetooth connectivity for hearing aids, leading to a larger number of supported devices and interoperability. The use cases range from tuning earbuds to a user’s specific hearing or general hearing assistance needs, with or without the help of active noise cancellation or transparency mode, to simply being able to hear valuable info in public spaces via their earbuds or hearing aid.

“Bluetooth is becoming integral for people with hearing loss,” he explained. “Not only for medical-grade hearing aids, but you’re seeing hearing capabilities built into consumer devices as well.”

Sabin also noted how the development of true wireless earbuds has been key for people with hearing loss and helped reduce the stigma around traditional hearing aids. Indeed, companies like Sennheiser and Sony have introduced assistance-focused earbuds that look no different from the devices they make for listening to music or taking calls. Of course, those devices do that too, it’s just their primary aim is to help with hearing loss. The boom, which has been going on for years, was further facilitated by a 2022 FDA policy change that allowed over-the-counter sale of hearing aids.

One of the major recent developments for Bluetooth is broadcast audio, better known as Auracast. Sabin described the technology as “unmuting your world,” which is exactly what happens when you’re able to hear otherwise silent TVs in public spaces. You simply select an available broadcast audio channel on your phone, like you would a Wi-Fi network, to hear the news or game on the TV during your layover. Auracast can also be used for things like PA and gate announcements in airports, better hearing at conferences and sharing a secure audio stream with a friend. Companies like JBL are building it into their Bluetooth speakers so you can link unlimited additional devices to share the sound at the press of a button.

“You’re seeing it in speakers, you’ll see it in surround sound systems and full home or party-in-a-box type scenarios,” he said. Sabin also noted that applications beyond the home could simplify logistics for events, since Auracast audio comes from the same source before it’s sent to a PA system or connected earbuds and headphones with no latency. Sabin said the near-term goal is for Bluetooth audio to be as common in public spaces as Wi-Fi connectivity, thanks to things like Auracast and the standard’s constant evolution.

Even after 20 years, we’re still relying on Bluetooth to take calls on the go, but both the voice and audio quality have dramatically improved since the days of the headset. Smaller, more comfortable designs can be worn all day, giving us constant access to music, podcasts, calls and voice assistants. As consumer preferences have changed to having earbuds in at all times, the desire to tune into our surroundings rather than block them out has increased. “Unmuting your world” is now of utmost importance, and the advancement of Bluetooth technology, from the late ’90s through LE Audio, continues to adapt to our sonic preferences.
https://www.engadget.com/from-mono-t...153031600.html





FCC Scraps Old Speed Benchmark, Says Broadband Should be at Least 100Mbps

Standard of 100Mbps down and 20Mbps up replaces old 25Mbps/3Mbps benchmark.
Jon Brodkin

The Federal Communications Commission today voted to raise its Internet speed benchmark for the first time since January 2015, concluding that modern broadband service should provide at least 100Mbps download speeds and 20Mbps upload speeds.

An FCC press release after today's 3-2 vote said the 100Mbps/20Mbps benchmark "is based on the standards now used in multiple federal and state programs," such as those used to distribute funding to expand networks. The new benchmark also reflects "consumer usage patterns, and what is actually available from and marketed by Internet service providers," the FCC said.

The previous standard of 25Mbps downstream and 3Mbps upstream lasted through the entire Trump era and most of President Biden's term. There's been a clear partisan divide on the speed standard, with Democrats pushing for a higher benchmark and Republicans arguing that it shouldn't be raised.

The standard is partly symbolic but can indirectly impact potential FCC regulations. The FCC is required under US law to regularly evaluate whether "advanced telecommunications capability is being deployed to all Americans in a reasonable and timely fashion" and to "take immediate action to accelerate deployment" and promote competition if current deployment is not "reasonable and timely."

With a higher speed standard, the FCC is more likely to conclude that broadband providers aren't moving toward universal deployment fast enough and to take regulatory actions in response. During the Trump era, FCC Chairman Ajit Pai's Republican majority ruled that 25Mbps download and 3Mbps upload speeds should still count as "advanced telecommunications capability," and concluded that the telecom industry was doing enough to extend advanced telecom service to all Americans.

2-2 deadlock delayed benchmark increase

Democrat Jessica Rosenworcel has been the FCC chairwoman since 2021 and was calling for a speed increase even before being promoted to the commission's top spot. Rosenworcel formally proposed the 100Mbps/20Mbps standard in July 2022, but the FCC had a 2-2 partisan deadlock at the time and the 25Mbps/3Mbps standard stayed in place a while longer.

Biden's first nominee to fill an empty FCC seat was stonewalled by the Senate, but Democrats finally got a 3-2 majority when Biden's second pick was confirmed in September 2023. Today's 3-2 party-line vote approved the 100Mbps/20Mbps standard and a report concluding "that advanced telecommunications capability is not being deployed in a reasonable and timely fashion," the FCC said in its press release.

That conclusion is "based on the total number of Americans, Americans in rural areas, and people living on Tribal lands who lack access to such capability, and the fact that these gaps in deployment are not closing rapidly enough," the press release said. Based on data from December 2022, the FCC said that fixed broadband service (excluding satellite) "has not been physically deployed to approximately 24 million Americans, including almost 28 percent of Americans in rural areas, and more than 23 percent of people living on Tribal lands."

A draft of the FCC report was released before the meeting. "Based on our evaluation of available data, we can no longer conclude that broadband at speeds of 25/3Mbps—the fixed benchmark established in 2015 and relied on in the last seven reports—supports 'advanced' functions," the report said. "We find that having 'advanced telecommunications capability' for fixed broadband service requires access to download speeds of at least 100Mbps and upload speeds of at least 20Mbps. The record overwhelmingly supports increasing the fixed speed benchmark in this manner."

The report also sets a "long-term speed goal" of 1Gbps download speeds paired with 500Mbps upload speeds. The FCC said it intends to use this speed goal "as a guidepost for evaluating our efforts to encourage deployment."
https://arstechnica.com/tech-policy/...least-100mbps/





SpaceX Gets E-Band Radio Waves to Boost Starlink Broadband
Jason Rainbow

SpaceX has secured conditional approval to use extremely high-frequency E-band radio waves to improve the capacity of its low Earth orbit Starlink broadband constellation.

The Federal Communications Commission said March 8 it is allowing SpaceX to use E-band frequencies between second-generation Starlink satellites and gateways on the ground, alongside already approved spectrum in the Ka and Ku bands.

Specifically, SpaceX is now also permitted to communicate in the 71-76 GHz band from space to Earth, and 81-86 GHz from Earth to space, using the up to 7,500 Gen2 satellites SpaceX is allowed to deploy.

SpaceX has plans for 30,000 Gen2 satellites, on top of the 4,400 Gen1 satellites already authorized by the FCC.

However, the FCC deferred action in December 2022 on whether to allow SpaceX to deploy the other three-quarters of its Gen2 constellation, which includes spacecraft closer to Earth to improve broadband speeds.

The regulator also deferred action at the time on SpaceX’s plans to use E-band frequencies, citing a need to first establish ground rules for using them in space.

In a March 8 regulatory filing, the FCC said it found “SpaceX’s proposed operations in the E-band present no new or increased frequency conflicts with other satellite operations.”

But the order comes with multiple conditions, including potentially forcing SpaceX to modify operations if another satellite operator also seeks to use the radio waves.

Starlink satellites use Ku-band to connect user terminals. In October, the FCC allowed SpaceX to also provide fixed-satellite services from Gen2 spacecraft using V-band spectrum, which like E-band is also extremely high frequency (EHF) and in its commercial infancy.

Higher-frequency spectrum bands promise more bandwidth and throughput, but they are increasingly subject to weather attenuation and other issues.
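
A worked illustration of the bandwidth point (my numbers, not SpaceX's): by the Shannon formula C = B * log2(1 + SNR), capacity scales directly with bandwidth B, so a 5 GHz E-band allocation raises the ceiling accordingly.

[code]
# Shannon capacity ceilings for assumed bandwidths; the 10 dB SNR and the
# Ka-band figure are illustrative assumptions, not Starlink specifications.
from math import log2

snr = 10 ** (10 / 10)  # 10 dB, assumed
for name, bandwidth_hz in [("assumed Ka-band gateway chunk", 2.5e9),
                           ("E-band 71-76 GHz allocation", 5.0e9)]:
    capacity_gbps = bandwidth_hz * log2(1 + snr) / 1e9
    print(f"{name}: ~{capacity_gbps:.0f} Gb/s ceiling")
[/code]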

Last year, SpaceX said using E-band radio waves for backhaul would enable Starlink Gen2 to provide about four times more capacity per satellite than earlier iterations, without elaborating.

There are currently around 1,900 Starlink satellites launched under the Gen2 license in orbit, according to spacecraft tracker and astrophysicist Jonathan McDowell — about two-thirds of these satellites are significantly larger and more powerful than Gen1 but smaller than full-scale versions slated to launch on SpaceX’s Starship vehicle. Around 3,600 separate satellites in orbit are classed as Gen1.

The FCC continues to defer action over whether to allow SpaceX to deploy the other 22,500 satellites in its proposed Gen2 constellation.
https://spacenews.com/spacex-gets-e-...ink-broadband/





FCC Denies Starlink Low-Orbit Bid for Lower Latency

Agency says SpaceX craft could hinder International Space Station
Mark Harris

The FCC has once again rejected a Starlink plan to deploy thousands of internet satellites in very low Earth orbits (VLEO) ranging from 340 to 360 kilometers. In an order published last week, the FCC wrote: “SpaceX may not deploy any satellites designed for operational altitudes below the International Space Station,” whose orbit can range as low as 370 kilometers.

Starlink currently has nearly 6,000 satellites orbiting at around 550 kilometers that provide internet access to over 2.5 million customers around the world. But its service is currently slower than most terrestrial fiber networks, with average latencies (the time for data to travel between origin and destination) over 30 milliseconds at best, and double that at peak times.

“The biggest single goal for Starlink from a technical standpoint is to get the mean latency below 20 milliseconds,” said Elon Musk at a SpaceX event in January. “For the quality of internet experience, this is actually a really big deal. If you play video games like I sometimes do, this is also important, otherwise you lose.”

The easiest way to reduce latency is to simply shorten the distance the data have to travel. So in a February letter, SpaceX pleaded with the FCC to allow its VLEO constellation: “Operating at these lower altitudes will enable SpaceX to provide higher-quality, lower-latency satellite service for consumers, keeping pace with growing demand for real-time applications.” These now include the military use of Starlink for communications in warzones such as Ukraine.
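
Some back-of-the-envelope numbers (mine, not SpaceX's) show the physics: the floor on round-trip time is set by radio propagation at the speed of light, so lowering the shell trims every user-to-satellite-to-gateway hop.

[code]
# Minimum bent-pipe round-trip time, ignoring slant angles, switching and
# terrestrial backhaul (all of which add more). Illustrative only.
C_KM_PER_MS = 299_792.458 / 1000  # speed of light in km per millisecond

for alt_km in (550, 350):
    rtt_ms = 4 * alt_km / C_KM_PER_MS  # user->sat->gateway and back: 4 legs
    print(f"{alt_km} km shell: ~{rtt_ms:.1f} ms minimum RTT")
# 550 km shell: ~7.3 ms minimum RTT
# 350 km shell: ~4.7 ms minimum RTT
[/code]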

Starlink also argued that its VLEO satellites would have collision probabilities ten times lower than those in higher orbits, and be easier to deorbit at the end of their functional lives.

But the FCC was having none of it. The agency had already deferred VLEO operations when it licensed Starlink operations in December 2022, and used very similar language in its order last week: “SpaceX must communicate and collaborate with NASA to ensure that deployment and operation of its satellites does not unduly constrain deployment and operation of NASA assets and missions, supports safety of both SpaceX and NASA assets and missions, and preserves long-term sustainable space-based communications services.”

Neither the FCC nor SpaceX replied to requests for comment, but the agency’s reasoning is probably quite simple, according to Hugh Lewis, professor of astronautics at the University of Southampton in the U.K. “We don’t understand enough about what the risks actually are, especially because the number of satellites that SpaceX is proposing is greater than the number they’ve already launched,” he says.

Although it might seem that having satellites orbiting below the International Space Station (ISS) would be safer than orbiting above, the fast-moving, SUV-sized Starlink craft might restrict when astronauts could reach the ISS—or leave in an emergency. “We are already seeing interruptions in launch windows thanks to Starlink,” says Lewis. “If you fill that region with tens of thousands of satellites, it would put an even bigger squeeze on them and really compromise your ability to service the space station.”

In February 2022, NASA recommended that SpaceX prepare an analysis of launch window availability for the space station and interplanetary missions to ensure that Starlink would not significantly reduce access to space. No such analysis has been made public.

John Crassidis, professor of mechanical and aerospace engineering at the University at Buffalo, isn’t convinced the VLEO satellites would be that disruptive. “I think the FCC might be overreacting. We will know where all the satellites are, we can watch them and avoid them,” he says. “It is the stuff we can’t see that’s the problem.”

While VLEO is almost empty compared to higher orbits, satellites there still risk collisions from satellites transiting up to their operational altitudes—and particularly from objects making uncontrolled descents to Earth. “There’s a persistent stream of things that are coming down, old cubesats and debris,” says Lewis. “It’s like a constant rain coming down.”

New guidelines that are meant to leave fewer dead satellites in space for decades could also mean more transits through lower orbits, according to a paper Lewis wrote last year. He thinks that impacts in VLEO could easily eject high speed fragments up to higher orbits: “So even though you’re below the ISS, the ISS would still be within range of a debris cloud for a collision at 350 kilometers.”

Crassidis disagrees. “You’d have to have a very violent collision to make that happen,” he says. “That’s something I’m not worried about.”

Aside from safety considerations, other internet satellite operators also seem skeptical of SpaceX’s VLEO plans. Amazon asked the FCC for more opportunity to comment, while the Betzdorf, Luxembourg-based satellite telecom company SES sent a letter citing concerns about VLEO Starlinks interfering with its own satellites.

Although SpaceX will have to keep deploying its satellites well above 500 kilometers, the battle for a low-latency VLEO constellation isn’t over. The FCC only deferred its decision on the low-flying satellites, along with 22,488 other satellites from SpaceX’s original application, leaving the door open for future changes.

But for now at least, the astronauts of the ISS have won, and Musk and other online gamers will need to just keep on losing.
https://spectrum.ieee.org/starlink-vleo-below-iss





Undersea Cable Damage Causes Internet Outages Across Africa

• At least three telecommunications cables are offline
• Follows damage to three cables in Red Sea last month

Loni Prinsloo, Mpho Hlakudi, and Yinka Ibukun

Damage to at least three subsea cables off the west coast of Africa is disrupting internet services across the continent.

The West Africa Cable System, MainOne and ACE sea cables — arteries for telecommunications data — were all affected on Thursday, triggering outages and connectivity issues for mobile operators and internet service providers, according to data from internet analysis firms including NetBlocks, Kentik and Cloudflare. The cause of the cable faults has not yet been determined.

Data show a major disruption to connectivity in eight West African countries, with Ivory Coast, Liberia and Benin being the most affected, NetBlocks, an internet watchdog, said in a post on X. Ghana, Nigeria, and Cameroon are among other countries impacted. Several companies have also reported service disruptions in South Africa.

“This is a devastating blow to internet connectivity along the west coast of Africa, which will be operating in a degraded state for weeks to come,” said Doug Madory, director of internet analysis firm Kentik.

The cable faults off the Ivory Coast come less than a month after three telecommunications cables were severed in the Red Sea, highlighting the vulnerability of critical communications infrastructure. The anchor of a cargo ship sunk by Houthi militants was probably responsible, according to assessments by the US and the International Cable Protection Committee, a cable industry group.


The Red Sea is a critical telecommunications route, connecting Europe to Africa and Asia via Egypt. The damaged cables carried about 25% of traffic in the region, according to estimates from Hong Kong-based internet provider HGC Global Communications, which uses the cables. It was re-routed via alternative cables, including via the west coast of Africa.

Together, the problems with cables on either side of the continent create a capacity crunch, with customers of those cables scrambling to find alternative routes.

Africa’s biggest wireless carriers MTN Group and Vodacom Group Ltd. said connectivity issues stemming from undersea cable failures were affecting South African network providers. “Multiple undersea cable failures between South Africa and Europe are currently impacting network providers,” Vodacom said in a text message.

MTN said services in several West African countries were affected and it was working to “reroute traffic through alternative network paths” and “engaging with our partners to speed up the repair process for the damaged cables.”

Microsoft Corp. reported disruptions to its cloud services and Microsoft 365 applications across Africa.

“We have determined that multiple fiber cables on the west coast of Africa have been impacted which reduced total capacity supporting our regions in South Africa,” said Microsoft in a status update, adding that the Red Sea cable cuts are also impacting the east coast. “The combination of incidents has impacted all Africa capacity — including other cloud providers and public internet as well.”

Last year, the West Africa Cable System, along with another cable – the South Atlantic 3 – was damaged at a slightly different location, near the mouth of the Congo River, following an undersea landslide. The loss of the cables knocked out international traffic traveling along the west coast of Africa and took about a month to repair.
https://www.bloomberg.com/news/artic...in-west-africa



Until next week,

- js.



Current Week In Review





Recent WiRs -

March 9th, March 2nd, February 24th, February 17th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black
__________________
Thanks For Sharing