Peer-To-Peer News - The Week In Review - June 9th, '07
JackSpratts

Since 2002

"The major labels these days are like the dinosaurs sitting around discussing the asteroid." – Paul McCartney quoting David Kahne


"This could replace the entire supply chain that has been in existence since Gutenberg." – Jason Epstein


"If you want to lose weight, turn off the television or watch something boring." – Alan Hirsch


"With Barbie, if you want clothes, it costs money. You can do it on the Internet for free." - Presleigh Montemayor, age 9


"If Macmillan's CEO really thinks that's the same medicine, than someone ought to check what medication he's taking." – Mike Masnick


"We’re the antithesis of MySpace. MySpace is about sharing information. We’re all about not being able to share information." – Lane Merrifield


"This poor guy now faces [the] daunting reality of having to litigate this on appeal against Gateway. By winning, he's lost." – Cliff Palefsky


"The science fiction writer’s job is to survey the future and report back to the rest of us." – Brent Staples


"What happened to Ms. Amero shouldn't happen to anyone — nor should it ever happen again." – The Day


"This internet-is-dangerous resolution was passed…unanimously." – David Cassel































June 9th, 2007








Does Digital File Sharing Render Copyright Obsolete?
Victoria Shannon

When the 1970s pop star Robin Gibb writes a song these days, he says he doesn't think about whether it is copyrighted or licensed - he devotes himself to his art and lets his handlers see to its legal and financial well-being.

But when NoobishPineapple, an 18-year-old from Spearfish, South Dakota, uploads his 36-second rap video about fast food onto YouTube, he has no staff of assistants to make sure his creation is protected or paid for - and he probably doesn't care, anyway.

That makes people like David Ferguson, head of the British Academy of Composers and Authors, nervous about how art will be sustained in the future. And it is giving people like Lawrence Lessig, founder of Creative Commons, an opening to promote alternatives to the world's increasingly maligned copyright systems.

The youth craze for making and posting digitized audio and video on the Internet - their own creations and those of others, without regard to ownership or payment - is driving a wedge between the traditional "commercial" economy and the upstart "sharing" market, analysts say. Likewise, it is paralyzing and polarizing the groups that are supposed to make sure writers and composers get the royalties they are due.

At a self-described summit meeting on copyrights in Brussels last week, the world's major groups representing creative authors - the collecting societies at "the bottom of the food chain," griped one executive - vented, fumed and wrung their collective hands about their future. At the end of the event, Italian authors called for a "strike" to suspend licensing any form of public performance for a week in June to call attention to illegal downloading and authors' rights.

In the absence of a wholesale update of royalty systems, billion-dollar court battles - like the Viacom lawsuit against Google, which owns the YouTube video-sharing site - will most likely be the determinant of the value of digital copyrights, analysts say.

"There are an extraordinary number of people who are creating on their own and doing so for a different reason than money," Lessig, a lawyer who allies himself with Google in copyright positions, said during an interview. "Somehow we've got to find a system that ratifies both kinds of creativity and doesn't try to destroy one in order to preserve the other."

Ben Verwaayen, chief executive of BT Group, the British phone company, laid the blame at the feet of the societies, not technology or authors themselves. "The problems are the institutions," he said. "They have to change."

In Europe, collecting societies have so far dodged a bullet aimed at them last year, after the European Commission started antitrust proceedings against their 150-year-old system of coordinating royalty payments and redistributing them to authors.

Gibb, part of the successful Bee Gees band of "Night Fever" fame, testified on behalf of author societies at the hearing last summer on the commission's objections over royalty competitiveness issues. Now, he is adopting a more formal role; on Friday, he took over as president of Cisac, the international collecting-rights umbrella organization that sponsored the meeting.

"I feel strongly that it's a moral right for everybody to get what they deserve if they write a piece of work," Gibb said during an interview, "and they have a right to see that it's not used in a way that they're left out of the loop."

The commission has not closed its investigation. But since Gibb's intervention and other conversations with many of the 217 societies in Cisac, Ferguson said, "they are no longer talking about fining us, and they're not talking about taking money out of the pockets of creators."

But something has to give, most agree. Roger Faxon, chairman and chief executive of EMI Music Publishing, said the rigidity of European licensing had crimped digital music sales in Europe.

"We need to loosen it up," he said. "If we don't, we may well go back to a world in which you need a patron in order to make a living as a songwriter."

Gerd Leonhard, chief executive of a digital music start-up and author of "The End of Control," said he believes the existing structure has outlived its usefulness, and - at a time when royalty-payment functions can be automated - he gives the collecting societies no more than three to five years of life.

Everyone seemed to have their own new way of going forward. After the European Commission's move against the collecting societies, EMI set up Celas, a one-stop shop for pan-European licensing of online and mobile service rights.

"In many ways it is an experiment, an attempt to find a different approach to try to solve the problem," Faxon said.

In Britain, meanwhile, Ferguson and Gibb are starting a cooperative record label called Academy Recordings that is designed from the ground up for music writers. Ferguson said Academy had already struck deals with Apple's iTunes and We7, the ad-supported British music download start-up backed by the rock artist Peter Gabriel.

Its first release will include members of the British music writers group like Gibb, Gabriel and the Pretenders. Like others before him, Ferguson envisions "a brand new digital business model."

The more, the merrier, some say. "We can't rely on knowing which business model is the one that is going to work," said Larry Kenswil, executive vice president of business strategy for Universal Music. "As content owners, we're obligated to try everything."

Joe Mohen, chairman and founder of SpiralFrog, which aims to start its advertising-supported free digital music store by the end of the summer, urged radical action, saying he had to cut deals with 38,000 music publishers in the United States alone.

"For new companies starting up, it is impossible to license country by country," he said. "If legitimate businesses are forced to do that, they're never going to be able to compete with the pirates. There's got to be some sort of pan-European licensing, and frankly global licensing is the preferred way."

Lessig, whose Creative Commons alternative licenses have been almost as abhorrent as online music theft to the societies, has nevertheless gained a grass-roots following as well as limited adoption by companies like Microsoft and the BBC. The licenses let the author determine whether to apply commercial rights and how much. They are available in 34 countries and were applied an estimated 145 million times last year.

In many countries - Australia, Finland, France, Germany, Luxembourg, Spain, Taiwan and the Netherlands among them - collecting societies manage authors' rights on their behalf, so individual authors cannot apply a Creative Commons license. Lessig, a Stanford University law professor who is on a fellowship at the American Academy in Berlin, said he hoped to announce a breakthrough agreement with a collecting society at the time of a Creative Commons conference in Croatia on June 15.

The author groups themselves are obviously conflicted, trying to balance supporting the audio and visual arts and making sure their creators get a portion of the royalty pie, when no one knows what the pie will look like.

Last year, more than half of all music acquired by consumers was unpaid, according to NPD Group, a market research company. Social "sharing" of CDs by friends accounted for 37 percent of all music consumption, NPD said.

"The CD is dying at a rate that is predictable at this point," Kenswil said. "It will someday level off into a niche market the way vinyl has. In five years, it will be of very little consequence.

"The problem is there is no physical medium to replace it. It's digital, but digital is in its infancy."

And many of those who would abolish copyrights in the digital age also are young, influenced by Internet social movements like free software code, blogging and file-sharing.

"A lot of people under 30 are 'can't pay, won't pay,' " Mohen said. "Many of them have never purchased a CD, and many never will. They have more time than they have money."

Ferguson can imagine the music distribution business disrupted so much in a few years that the entire world's catalog of music may be prepackaged and prepaid on some kind of key chain sold at gas stations. But he does not see the end of authors' rights groups.

"We're still going to need to license the hairdresser, the restaurant, the small radio station, national broadcasters," he said, noting that digital downloading may well represent an unsustainable business bubble.

But Alex Callier, songwriter and bass player with Hooverphonic, the Belgian pop band, bemoans the focus on business models and digital sleights of hand around "user-generated content."

"It used to be you had to know how to play the guitar and have some talent to make it in the music business," he said. "Some of the mystery and magic is gone."

Gibb, who said he never thought of his work as "intellectual property" but rather the result of an overwhelming need to write and perform, nonetheless was hopeful. "We're chipping away at the stone," he said.

Peter Jenner, chairman of the International Music Managers Forum, suggested a different approach. "I'd lock all the societies in a room until they get their act together," he said.
http://www.iht.com/articles/2007/06/...s/rights04.php





Is EMI the Chink in the RIAA Armor?
Marc Wagner

Does anyone really believe the RIAA when they tell us that ‘the sky is falling’ from rampant piracy?

Just last year, the recording industry was trying to strong-arm Apple into raising the base price for its entire DRM-protected music library. Apple held firm on its 99-cent pricing model and drew much criticism — not only from the recording industry but also from those who object to Apple's considerable market influence — even when that influence helps keep consumer prices low.

Sure, there are other legitimate music services offering a variety of pricing models which permit some users to enjoy more aggressive per-song pricing on some titles. Still, Apple's combination of iTunes, premium iPod products and interface options, an extensive library from a broad segment of the recording industry, and a consistent pricing model (absent ongoing subscription requirements) has made it the clear market leader. (Much to the chagrin of the EU.)

So what happened this year? EMI announced that it will offer its extensive music library DRM-free through the iTunes music service. The catch? A 30-cent price premium for DRM-free tracks. The hope? That consumers will pay that premium to have DRM-free music. The carrot? The DRM-free music is provided at a higher bit rate than the DRM-protected alternative.

Will this strategy work? We won't know for a while yet, but one has to wonder why EMI would distribute any of its on-line music DRM-free if the situation were as dire as the RIAA would have us believe.

To be sure, piracy is a legitimate concern for the music industry. But what is the extent of that threat? My guess is that bootlegged CDs (distributed for profit) represent a much larger threat to the music industry's bottom line than the 'casual piracy' taking place on college campuses all over America (dare I say, the world?). Chris Dawson's recent article "Can RIAA really make a case against students/schools?" only reinforces my perception.

The premise of the RIAA's claims is that piracy costs the recording industry money through lost music sales, but that premise assumes that those participating in the piracy would otherwise purchase the music if they couldn't 'steal' it. I believe this is a false assumption.

Today, 90% of the music industry's sales still come in the form of CDs; only 10% come from on-line music sales. The assumption is that the year-over-year decline in CD sales is attributable to on-line piracy. Maybe — but then again, maybe not.

As Chris so aptly points out, ripping CDs is the most common source of 'casual piracy'. In my college days, the mode of piracy was limited to blank cassette tapes and LPs, but the result was the same: multiple people shared a single copy of an album. As cash-strapped students, if we couldn't share, we would have had to do without the music. Plain and simple. And today, my peers and I (who were once among those "casual pirates") are consumers who buy CDs because we value the superior quality of the medium. Did the music industry lose any money off of our piracy? Not a penny! (Most of the music I pirated off of LPs in the 1970s has long since been replaced with CDs of the same material.)

This is not to say that students who are targeted by the RIAA should ignore the attention they have drawn. Anyone whose on-line piracy has drawn the attention of the RIAA could find themselves in serious trouble with federal authorities — should the RIAA decide to prosecute (as ill-advised as that might be).

So what does EMI know that the RIAA (representing the recording industry as a whole) does not? It knows that suing customers (or even potential customers) is bad for business!

Will the strategy work? Maybe not! But if it fails, EMI will have cost itself no customers, and it will at least have endeared itself to a few for having the courage to buck the trend and offer DRM-free music.

Ultimately, piracy only becomes a problem for the music industry when a third party makes money off of that piracy instead of the rightful owner of the material. There is no convincing evidence that those being targeted by the RIAA (predominantly college students) are making any money off of on-line piracy.

So who is? The people making money off of on-line piracy are peer-to-peer service providers. Peer-to-peer services have many legitimate uses, but some vendors market to those who would use their services to pirate music. Others provide free software designed to quietly share music without the user's knowledge. These deceptive practices endanger users in far more insidious ways than anything the RIAA is doing.

The actions of the RIAA are heavy-handed and intended to intimidate users, but at least they are direct. These peer-to-peer vendors are preying on the innocent. If the RIAA had the courage of its convictions, it would be pursuing these vendors — not students.
http://education.zdnet.com/?p=1090





Major Webcasters to Face Billions in New Fees?
Anne Broache

We already know that Webcasters small and large are outraged at the prospect of having to pay higher royalty fees to the music industry, particularly when compared with what is required of their satellite and terrestrial radio counterparts.

But the heightened royalty rates enacted by the U.S. Copyright Royalty Board earlier this year and scheduled to take effect July 15 are not the only thing that's firing up leading Internet radio industry companies like RealNetworks, Yahoo, Pandora and Live365.

In letters distributed to various Capitol Hill offices on Thursday morning, the four companies' CEOs argue that the music industry will also be forcing collection of more than $1 billion per year from three services alone--Yahoo, RealNetworks and Pandora--in the name of covering so-called administrative costs.

Here's how they say they derived that figure: When the CRB decided earlier this year to change the rules for Internet broadcasters, it also decided to levy a $500 minimum annual fee per Internet radio "channel." SoundExchange, the non-profit music industry entity that collects the royalty and other fees on behalf of record labels, says that minimum payment is supposed to cover administrative costs.

But since some of the larger Internet radio services potentially offer their listeners hundreds of thousands of unique "channels" (RealNetworks' Rhapsody offered more than 400,000 in 2006 alone, according to a company spokesman), the companies view the ruling as forcing them to multiply that mandatory minimum payment accordingly (for Real, that would amount to $200 million).
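To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Only the $500 per-channel minimum and Rhapsody's 400,000-channel figure come from the article; the Yahoo and Pandora channel counts are hypothetical placeholders, chosen only to show how the combined total could plausibly exceed $1 billion per year.

# Back-of-the-envelope check of the webcasters' claim, as described above.
PER_CHANNEL_MINIMUM = 500  # CRB minimum annual fee per "channel", in USD

channel_counts = {
    "Rhapsody (RealNetworks)": 400_000,  # figure cited by a company spokesman
    "Yahoo (hypothetical)": 1_000_000,   # assumed for illustration only
    "Pandora (hypothetical)": 700_000,   # assumed for illustration only
}

total = 0
for service, channels in channel_counts.items():
    fee = channels * PER_CHANNEL_MINIMUM
    total += fee
    print(f"{service}: {channels:,} channels -> ${fee:,} per year")

print(f"Combined minimum fees: ${total:,} per year")
# Rhapsody alone: 400,000 x $500 = $200,000,000, matching the article's figure.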

Such an amount would far outpace the $20 million in total royalty fees collected by SoundExchange from the Internet radio industry last year, the CEOs note in their letter. And besides, it's not even clear that those payments would go to artists, as royalty payments do, the companies argue.

"While we don't imagine SoundExchange would keep this $1 billion all to itself, this lack of clarity is absurd," RealNetworks spokesman Matt Graves told CNET News.com.

SoundExchange did not respond to requests for comment.

Thursday's letter is just the latest step by the Internet radio industry to combat the CRB ruling. An alliance of commercial Webcasters and National Public Radio has already asked a federal appeals court for an emergency halt to the CRB's decision, which is currently scheduled to take effect July 15. They're hoping politicians will move quickly to enact legislation that would overturn the new requirements and level Internet radio royalty rates with those required of satellite radio providers.
http://news.com.com/8301-10784_3-972...=2547-1_3-0-20





The Internets, They Can Be Cruel
Noam Cohen

THE Internet can sometimes seem overrun with versions of Simon, the alpha male tech guy from the BBC series “The Office.”

When the show’s protagonist, Tim, asks Simon what he is doing to his computer, Simon at first answers, “You don’t need to know,” and then allows that he is installing a firewall for Tim.

“O.K. What’s that?”

“It protects your computer against script kiddies, data collectors, viruses, worms and Trojan horses, and it limits your outbound Internet communications. Any more questions?”

“Yes. How long will it take?”

“Why? Do you want to do it yourself?”

“No, I can’t do it myself. How long will it take you, out of interest?”

“It will take as long as it takes.”

“Right, er, how long did it take last time when — ”

“It’s done.”

Any modern office worker quickly learns that professing complete inadequacy is usually the best move in the presence of such a character. The same rules, however, don’t apply in public life.

The latest example of the danger of revealing technological weakness is Judge Peter Openshaw of the High Court, Queen's Bench Division, in London. While presiding over the trial of three men charged under antiterrorism laws, he was quoted by Reuters last month as saying in open court: "The trouble is, I don't understand the language. I don't really understand what a Web site is."

The pillorying of the judge — and Wikipedia entry — came nearly instantly (en.wikipedia.org/wiki/Peter_Openshaw). Not since Senator Ted Stevens, Republican of Alaska, referred to the Internet as “a bunch of tubes,” or President Bush spoke of the “Internets,” had the Web grinned so widely.

The next day, the Judiciary of England and Wales issued an extraordinary statement asserting that Judge Openshaw’s comments had been taken out of context. (Reuters, while reporting on that statement, said it “stands by its story.”)

“Trial judges always seek to ensure that everyone in court is able to follow all of the proceedings,” the statement said, adding that Judge Openshaw was acting not for himself but “on behalf of all those following a case, in the interests of justice.”

In case anyone was curious, the statement concluded that “Mr. Justice Openshaw is entirely computer literate and indeed has taken notes on his own computer in court for many years.”

Some in the British press parsed that official explanation, wondering why the judge hadn’t said explicitly that he was asking on behalf of the jury, if that is what he meant. Others came to his defense.

Alex Carlile, a member of the House of Lords, wrote in The Independent that he could attest that Judge Openshaw had a “sharp, incisive brain often well ahead of the barristers before him. He is at ease with a computer; I have seen him taking notes on it in court, as habitually he does.”

When I spoke by phone with the judiciary’s press office, I suggested that the judge use e-mail to send me his thoughts on the controversy and let the medium be the message. A spokeswoman there cast doubt on that idea, saying that the judicial e-mail system worked only internally — but she said that she would relay my request.

I suppose she may have sent him an electronic message with the details, but I’d like to think a clerk wrapped the note in a ribbon and rode by carriage to the courthouse. I never heard back, but I haven’t checked with Western Union for several days.

Such tales of technological cluelessness can serve as entertainment, or even provide a common folklore for the online world. But surely there is also an aspect of marking turf against “the technologically ignorant masses,” an instinct Simon the tech guy would surely recognize. Silence, while the master works.

Edward W. Felten, a professor of computer science and public affairs at Princeton, says it is hardly a coincidence that Internet commentators seize on slips by public officials. “There is a widespread belief online that many politicians and policy makers don’t understand the Internet well enough to regulate it,” he said.

Senator Stevens offered his description of the Internet last summer, as chairman of the committee responsible for telecommunications, commenting on the issue known as Net neutrality. “The Internet,” he said, “is not something that you just dump something on. It’s not a big truck. It’s, it’s a series of tubes.”

To the online activists who favor Net neutrality — the principle that high-speed Internet companies not be allowed to charge content providers for priority access — Mr. Stevens is the enemy, and an unworthy one at that. Hardly a day goes by on liberal sites like Daily Kos or Eschaton that “bunch of tubes” won’t appear. On Thursday, it was prominently displayed on both. (A Google search of “bunch of tubes” and “Stevens” brings in 8,310 results.)

Professor Felten, using his authority as a computer expert, defended the senator.

“I felt the criticism had gone too far,” he said, “had gone to the point of unfairness. It seemed to me that talking about the Internet as ‘tubes’ wasn’t too far from what even some of the experts do — talking about ‘pipes.’ ”

Professor Felten describes himself on the Net neutrality issue as believing “there is a problem, but I don’t think government can solve it.” But he said his sympathy for the senator had nothing to do with a shared outlook. Instead, he preached humility.

“The Internet is pretty complicated,” he said. “Nobody understands everything about how the Internet works.” He then spoke about the complexity of “emergent behavior” and some other ideas I hadn’t heard before.

When I told him that, of course the Internet wasn’t a bunch of tubes, it was a “bunch of wires,” he laughed. “Saying the Internet is a ‘bunch of wires’ is like saying your body is a bunch of meat.”

Know-it-all.
http://www.nytimes.com/2007/06/04/te...gy/04link.html





P2P Breaking Internode's Bank
Andrew Colley

INTERNODE says skyrocketing peer-to-peer traffic would have threatened its viability within a year if it hadn't brought in the broadband price hikes announced today.

Internode says customers are abusing its download 'shaping' policy

Internode product manager Jim Kellett said the business could no longer endure massive increases in its subscribers' appetite for bandwidth while decreases in the price of traffic to the US were starting to bottom out.

“It’s not currently threatening the business in any way, but if it had carried on for another 12 months I think it would start to,” Mr Kellett said.

Mr Kellett said peer-to-peer accounted for a “ridiculously high” proportion of bandwidth used by its broadband subscribers.

Internode subscribers on plans with high download quotas were to be hit hardest by the price increases, which ranged from $5 to $40 per month and affected most of its regular commercial services.

Internode customers on home and small business power plans face increases of $40 per month.

Charges for its basic HOME-512-Value service have been retained, but the download quota attached to the plan has been scaled back from 8GB to 5GB per month.

The company said it may also introduce “additional access constraints” against customers who accrue large downloads after exceeding their quotas.

The Adelaide-based internet service provider said customers were abusing its 'shaping' policy, under which connections are throttled once they reach their download allowance, rather than customers being charged for additional bandwidth.

“Again, shaping is intended to be a gentle alternative to excess download fees, for customers who inadvertently exceed their download quota. It is not intended to be used as an additional unpaid download allocation," Internode said in a statement posted on its website.

“Unfortunately a small percentage of customers are exploiting the current shaped excess traffic scheme to download a very large additional quantity of data beyond the quota level in their selected plan."
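For illustration, here is a minimal sketch in Python of how a shaping scheme of this kind can work: once monthly usage passes the plan quota, the link is throttled to a low rate instead of billing excess-download fees. The plan speed and shaped rate below are assumptions for the example, not Internode's published figures.

# Minimal sketch of quota "shaping": throttle instead of charging excess fees.
FULL_SPEED_KBPS = 512    # assumed plan speed (e.g. a 512kbps service)
SHAPED_SPEED_KBPS = 64   # assumed throttled rate once quota is exceeded

def link_speed_kbps(used_gb: float, quota_gb: float) -> int:
    """Return the rate a connection should run at for the rest of the month."""
    if used_gb <= quota_gb:
        return FULL_SPEED_KBPS
    # Over quota: shape the connection rather than bill for extra traffic.
    return SHAPED_SPEED_KBPS

# Example: a customer on a 5GB plan who has already downloaded 7GB this month
print(link_speed_kbps(used_gb=7.0, quota_gb=5.0))  # -> 64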

Mr Kellett said that customers who had signed up to the ISP recently would be able to cancel their service without incurring the $65 cancellation fee the company normally levies against customers who terminate their service within six months.
http://australianit.news.com.au/stor...-15306,00.html





UCLA Disputes Position on Congressional Piracy List
Paul McCloskey

Administrators from the University of California at Los Angeles are disputing the validity of data used by two congressional committees to identify universities that allowed the most illegal downloading of movie and music content on their campuses.

The House Committee on the Judiciary and the House Committee on Education and Labor recently sent letters to 19 universities, asking them to complete a survey about actions they have taken to curb illegal file sharing. The universities had been identified by movie and music industry lobbies based on the number of copyright violation notices they issued to the schools.

But UCLA officials said last week they believe the data used to determine the prevalence of piracy on their campus are misleading.

Kenn Heller, assistant dean of students at UCLA, said the school has records for only 200 Digital Millennium Copyright Act violation notices, instead of the 889 notices claimed by the movie and recording industries.

"Our data is far, far less [than the industry's]," Heller told the Daily Bruin campus newspaper. "[We're] in the process of reconciling the data and [figuring out] why there is such a large gap." He said he believes the information was taken out of context by industry officials because they do not factor in how many students attend the university when looking at the number of offenses.

Heller said UCLA does not block peer-to-peer software because there are legal and academic purposes for file sharing. "It's not an option the university has considered," he said.
http://campustechnology.com/articles/48371/





Congress, RIAA and Universities Prepare for P2P "Arms Race"
Ken Fisher

The heated debate over file sharing and the role that American colleges and universities should take in response to it ratcheted up a notch Tuesday as industry players and college and university reps gathered in DC to discuss the problem and potential solutions in a hearing before the House Committee on Science and Technology.

Universities are clear on their position: they do not see themselves as extensions of either the RIAA or the MPAA and are reluctant to get into the game of chasing down students and disciplining them at the behest of such organizations. Congress, however, is making it clear that they expect schools to do something, and the RIAA is waiting in the wings, promising to increase their use of pre-litigation letters.

House Committee members suggested that technological "solutions" should be tested more extensively at schools across the country. Chairman Bart Gordon (D-TN) said that laws will not be enough to curb piracy. "Technology will be the first line of defense," he said. Thus, network monitoring services and traffic shaping tools are being discussed as the solution to this problem, but there are simultaneous warnings that schools could end up spending hundreds of thousands of dollars on technology that will only be circumvented by determined students.

Dr. Charles Wight, associate VP for academic affairs and undergraduate studies at the University of Utah, said that his institution has benefited greatly from technological measures and has seen a 90% reduction in complaints from the RIAA and MPAA. Not only have students backed off file sharing, but he said that the school has saved more than $1.2 million in bandwidth fees and thousands on personnel costs.

The University of Utah uses Audible Magic's CopySense appliance on its campus residence network to thwart the sharing of files recognized by the service. Because that's not a complete solution and can easily be overcome by encryption, the school also monitors the throughput of students' machines and disables their networking capabilities if they exceed 2GB of outgoing traffic in a single day. When this happens, the university contacts the students affected and informs them about what's going on. Students who were engaging in P2P are then asked to sign a letter promising not to do it again, and Wight says there are very few repeat offenders. However, Wight did admit that their system has cut off users who were doing nothing wrong, such as using VoIP.
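As a rough illustration of the throughput rule described above, here is a minimal sketch in Python. The 2GB-per-day outbound limit comes from the article; the helper functions, and how per-host traffic counters would actually be collected (NetFlow, switch statistics, etc.), are assumptions for the example.

# Sketch of a daily outbound-traffic cutoff like the one Utah describes.
DAILY_OUTBOUND_LIMIT_BYTES = 2 * 1024**3  # 2GB per day, per the article

def disable_network_port(host_id: str) -> None:
    # Hypothetical helper: in practice this would reconfigure the switch.
    print(f"[action] disabling network access for {host_id}")

def notify_student(host_id: str) -> None:
    # Hypothetical helper: contact the machine's registered owner.
    print(f"[action] contacting owner of {host_id}")

def check_host(host_id: str, outbound_bytes_today: int) -> None:
    if outbound_bytes_today > DAILY_OUTBOUND_LIMIT_BYTES:
        disable_network_port(host_id)
        notify_student(host_id)

# Example: a residence-hall machine that pushed 3GB out today gets cut off
check_host("reshall-host-42", outbound_bytes_today=3 * 1024**3)

A blunt byte-count trigger like this is also exactly why, as Wight admitted, legitimate heavy users such as VoIP callers can get caught in the net.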

The entire hearing turned upon the fact that no solution is 100 percent effective and that students can get around "almost any technology solutions to this problem," as Dr. Wight noted. The problem is best addressed by throwing multiple solutions at it, including personalized investigation of issues as they arise.

But others warned of a costly "arms race" with students who are technically savvy and already capable of defeating many of the technological solutions on the market. Dr. Greg Jackson, VP and CIO of the University of Chicago, said that his school focuses on educational approaches to curbing piracy, and they have also seen successes that, when described, sound quite similar to that at the University of Utah.

In Jackson's view, technological solutions are not capable of truly dealing with high-traffic networks like those at major universities, because even if they are effective today, students are already finding ways around them, and P2P itself is becoming an increasingly popular approach to networking for things other than illicit file sharing. Jackson questions how something like Audible Magic will continue to work once DRM-free music sales take off, and networks are full of legitimate music files that lack DRM but look "pirated" to commercial systems.

"The only successful, robust way to address problems that involve personal responsibility and behavior is with social rather than technological tools," he said. "If we instead try and restrict behavior technologically... the only result will be an arms race that nobody wins."

The question boils down to money. How much are schools willing to pay to see this problem go away? Educational programs cost money; technological solutions cost even more. In the latter case, there's an emerging sense that making it difficult for Joe User may be worth the money, even if more savvy users can still step around such costly measures. Congress may not wait for such matters to be weighed, however. Rep. Tom Feeney (R-FL) said that schools may have to adopt antipiracy technology "whether you like it or not" if the situation does not improve. "The Judiciary Committee is not going to be patient for very long," he warned.

Addendum: For those keeping score, here are some of the failed approaches that have led to the current impasse between schools and the RIAA.

• Lawsuits! The original "solution" is still the most common one. It's not enough though, so they've also tried:
• Free music! MIT trials LAMP to get students free access to music in a reasonable manner. "Scratch the itch" was the idea. Music industry flips out, says no way. They need money.
• Free music (not really)! MIT didn't think about the RIAA's revenue concerns. Now music downloads are tested by forcing students into commercial music subscription services. Some schools are still using this, but by and large it's going nowhere. The lesson? There doesn't seem to be an easy way for the RIAA to monetize their problems.
• Propaganda! Scare the bejeebus out of students by telling them that file sharing is a felonious offense. High-profile lawsuits continue, MPAA gets in on the game. This is where we are at now: the threat of legal onslaught.
• A new file sharing overlord! New technological approaches to making file sharing difficult or impossible on school-owned networks. Ohio U made the jump, and others are watching. Schools can point to technological solutions and say, "hey, we're paying big bucks for the AntiPirazyZon2000+++, don't come after us!"
http://arstechnica.com/news.ars/post...arms-race.html





TorrentSpy Ordered By Federal Judge to Become MPAA Spy
Enigmax

TorrentSpy, one of the world’s largest torrent dump sites, has been ordered by a federal judge to monitor its users in order to create detailed logs of their activities which must then be handed over to the MPAA.

On May 29, TorrentSpy - one of the web's most famous .torrent dump sites - was told by federal judge Jacqueline Chooljian in the Central District of California that, despite a privacy policy which states the site will never monitor its visitors without consent, it must start creating logs detailing its users' activities.

Understandably, this is a worrying move by the court - even more so when one considers these logs must then be turned over to the MPAA. This is believed to be the first time a judge has ordered a defendant to log visitor activity and then hand over the information to the plaintiff. The decision - arrived at last month but kept under seal - could force sites that are defendants in a lawsuit to track the actions of their visitors.

The owners have been granted a stay of the order to allow for an appeal, which must be filed by June 12, says Ira Rothken, TorrentSpy's attorney.

“It is likely that TorrentSpy would turn off access to the U.S. before tracking its users,” said Rothken. “If this order were allowed to stand, it would mean that Web sites can be required by discovery judges to track what their users do even if their privacy policy says otherwise.”

This action follows MPAA suits in 2006 against several BitTorrent sites, TorrentSpy included. According to the MPAA, TorrentSpy helps others commit copyright infringement by directing people to sites which enable them to download copyrighted material - an offense, the MPAA claims, of secondary copyright infringement.

At the time, Rothken said “It [TorrentSpy] cannot be held ‘tertiary’ liable for visitors’ conduct that occurs away from its web search engine”. TorrentSpy claims it did nothing illegal and suggested the MPAA should sue Google.

An attorney with the Electronic Frontier Foundation called the order to make a defendant log visitor activity and hand the information over to the plaintiff "unprecedented." He continued: "In general, a defendant is not required to create new records to hand over in discovery. We shouldn't let Web site logging policies be set by litigation."

One way or another, it seems that the MPAA is determined to obtain information about TorrentSpy and its users. A complaint issued by TorrentSpy suggests the MPAA paid a hacker $15,000 to steal e-mail correspondence and trade secrets. The hacker admitted that this was true.
http://torrentfreak.com/torrentspy-o...f-of-the-mpaa/





New Marvell Chipset Gets a BitTorrent Boost

Marvell has announced that its new 88F5182 system-on-chip (SoC) platform will be BitTorrent capable, allowing consumer electronics manufacturers who make products such as set-top boxes, NAS (network-attached storage) units and other devices to use BitTorrent for high-speed content delivery. Manufacturers using the BitTorrent-certified 88F5182 chipset can provide a complete media solution featuring compatibility with BitTorrent. With an installed base of over 150 million users worldwide, the next generation of Internet-connected devices is integrating BitTorrent to ensure a simple and seamless digital entertainment experience for consumers.

This is an important additional step in integrating the BitTorrent protocol into consumer electronic devices, especially those designed for home entertainment and data storage. It's one more step towards consumers being able to watch and retrieve content quickly and seamlessly from the comfort of their living rooms.

"Optimizing BitTorrent to work with Marvell silicon at the component-level not only provides manufacturers high-performance and quick time-to-market, but takes full advantage of the entertainment content and commercial content delivery services offered by BitTorrent." says Ashwin Navin, president of BitTorrent, Inc.
http://www.newlaunches.com/archives/...rent_boost.php





Download Any MP3 From MySpace Bands
Aniscartujo

Forget about complex methods to download MP3s from MySpace: just enter the band name and get a list of MP3s to download. It's 100% free!
http://www.digg.com/music/Download_A..._MySpace_Bands





Anubis P2P 1.2

Downloads: 0
Requirements: Windows 95/98/Me/NT/2000/XP/2003 Server/Vista
Publisher: CitrixWire
License: Free
Date added: 03-JUN-07
File size: 2.58MB

Anubis P2P (peer-to-peer) is a new file sharing program that includes all the recent P2P optimizations, helping users search and download across several networks (including eD2K and Kad). All-in-one features like a file manager, download statistics, chat and IP filters make this P2P client a complete tool for all kinds of users. You can monitor all your activity in the statistics area, viewing download/upload reports gathered by Anubis. Version 1.2 is a bug-fixing release.

Note: This software comes with the Dealio toolbar for Internet Explorer, which can be installed or declined at the user's choice.
http://asia.cnet.com/downloads/pc/sw...265110s,00.htm





FM Radio Waves are Stopped at the Border
Doreen Carvajal

When the music died one gloomy morning in April, residents of Brighton, England, who had been happily listening to illegally transmitted French radio for almost 10 years, were first stunned and then angered.

There was defiant talk of anchoring a clandestine FM transmitter on a boat off the coast of Brighton to bring back France Inter Paris, or FIP, which broadcasts a quirky blend of jazz, pop and rock - Dizzy Gillespie and Jimi Hendrix. Someone started surreptitiously putting up posters along local streets. "Missing FIP," the posters said. "Can you help?"

A wistful Web site quickly appeared online with ardent testimonials to the advertising-free station.

"It's just one of those cool stations," said David Mounfield, a loyal listener and organizer of the ongoing British rebellion who does not speak French. "There wasn't much talking except a sexy female French voice, and it wasn't some inane English DJ yammering on in Britain, where it's all done by demographics and key markets to push and sell music."

The British listeners managed to tune in to the French radio for about 10 years through the aid of a radio engineer who set up illegal FM transmitters in well-placed houses.

In April, the British media regulator, Ofcom, silenced the station by confiscating the transmitters.

In the European Union, the borders for traditional radio stations remain firmly barricaded despite local demand for choice.

The walls remain high in France, too, where in May the government regulator, the Conseil Supérieur de l'Audiovisuel, or CSA, rejected three English-language channels, including the BBC, which sought precious FM frequencies on the crowded band in Paris. There are no English-language FM radio stations in the capital although there are FM stations broadcasting in Armenian, Portuguese, Yiddish, Russian and Serbo-Croatian, among other languages.

But the French regulator granted one open frequency in Paris to Tropique, a Creole-language station, because it argued that it reached a French Antilles audience similar to that of an outgoing station, Média Tropical. Critics remain skeptical of that logic. They note that the French international broadcaster RFI has not been allowed to offer radio programming in Britain.

"The back story is indeed a sort of tit for tat, but both countries are guilty of obtaining FM coverage in other countries and then refusing access to either Paris or London," said Jonathan Marks, a radio consultant in the Netherlands. "Kenya is arguing with the BBC about why they can't get access on FM to Kenyans in the U.K. when the BBC has FM outlets in Nairobi."

The barriers between countries have long existed, with national regulators, rather than the European Union, presiding over the airwaves. But rapidly evolving forms of new media will soon start offering ways to circumvent international barriers, particularly with many radio stations now accessible over the Internet.

"If you're an expat living in Paris, the best way to hear foreign radio in your kitchen is to install a Wi-Fi network at home and use one of the Wi-Fi radios now appearing in U.K. retail outlets," Marks said. He added that, as fixed-fee Internet service becomes more popular for portable telephones, "the alternative to FM for niche channels will be in the palm of your hand."

For now, though, that is small comfort for the British pining for their French radio or for the three English-language applicants that sought FM frequencies in Paris. One of them was Paris Live, which has broadcast on cable and satellite and was founded by Ian de Renzie Duncan, an Australian lawyer.

"It says to me that they have no respect for the million English-speaking people who have houses in France. Or they just don't care," Duncan said.

He said the impact on his own fledgling business has been devastating, since he had counted on access to the more profitable FM band.

"I spent a half million euros and five years of my life working on this. And my family life is completely destroyed," Duncan said. "But I'm going to fight this with an appeal."

That rebellious spirit is also shared in Brighton, where on Thursday about 150 people gathered for a fund-raiser in the Hope Pub to savor music played regularly on the French radio station FIP. The aim was to finance a campaign to bring FIP back.

Mounfield, the listener who helped organize the "Vive le FIP" night, said regular listeners were weighing alternatives. He said they could take the more cumbersome legal approach, seeking a special community license to rebroadcast FIP in Britain. Or they could take a defiant approach by setting up a transmitter system timed to start broadcasting on Bastille Day.
http://www.iht.com/articles/2007/06/...ss/radio04.php





Guitartabs.com down ftm

NMPA Letter

Today I received a certified letter from Moses & Singer LLP, a law firm in New York City which asserts that they are acting as counsel for the National Music Publishers Association and The Music Publishers Association of America. They have stated that guitar tablature hosted on my site violates the copyrights of several of their clients.

I have long been of the understanding that an original, by-ear transcription of a song - one which duplicates no copyrighted work and which generally deviates substantially from the work on which it is based - is the property of its transcriber, and not of the original composer of the song. The NMPA and MPA clearly disagree, and are threatening to send a DMCA letter to my host, as well as pursue other undisclosed legal actions in the event that I fall short of full cooperation with their demands.

I have not yet decided what response is appropriate. This site has been a part of my life for ten years now, and I honestly believe that what I'm doing is neither illegal nor harmful to the music publishing industry. My site generates interest in playing music, which can only lead to more purchases of licensed sheet music. In addition, I have referred tens of thousands of dollars in licensed sheet music sales to my affiliates over the years. The notion that a musician serious enough to spend $30 on a sheet music book would instead settle for a by-ear tablature interpretation seems unlikely to me. While highly paid lawyers may easily be able to use corrupt, recently-manipulated and poorly-tested copyright law to suggest that I am violating the law, the argument that I have actually damaged their industry in the process seems ludicrous.

I have not had a chance to scan the letter yet, but I have typed it out. Please excuse any typos I have made in haste.

Moses & Singer LLP
Via Certified Mail (Return Receipt Requested) and Electronic Mail

Peter J. Allen
(Address withheld for online version)

Re: Guitartabs.com

Dear Mr. Allen:

We are counsel to the National Music Publishers' Association ("NMPA") and The Music Publishers' Association of the United States, Inc. ("MPA"), not-for-profit trade associations of music publishers. Many NMPA and MPA member publishers create and distribute printed sheet music and guitar tablature products for educational, concert and recreational purposes. These products often account for a significant portion of the publishers' revenues, revenues which are shared with composers and songwriters.

It has come to our attention that your website, Guitartabs.com, makes available tablature versions of copyrighted musical compositions owned or controlled by members of the NMPA and MPA, without permission from the publishers. A representative listing of those compositions and the publishers who control the copyrights is attached as Schedule A. Examples of the compositions infringed include "Beautiful Day" written by Clayton/Evans/Mullen/Hewson and administered by Universal Music Publishing, and "I Want To Hold Your Hand" written by Lennon/McCartney and administered by Sony/ATV Tunes LLC.

The versions of these publishers' musical works that you post on your website are not exempt under copyright law. In fact, U.S. copyright law specifically provides that the right to make and distribute arrangements, adaptations, abridgements, or transcriptions of copyrighted musical works, including lyrics, belongs exclusively to the copyright owner of that work. Many, if not all, of the compositions on your website, including the works listed on Schedule A, are protected by copyright. Therefore, you needed, but did not obtain, permission from the copyright owners to make a tablature version of those songs and to post them on your site. Under the circumstances, both the transcriber of the compositions and you as the owner of the website are copyright infringers.

We have been asked by the members of the NMPA and MPA to take all appropriate steps to remove unauthorized sheet music and tablature versions of the publishers' copyrighted works from the Internet.

In so enforcing the rights of the creators and publishers of music, it is our intent to ensure that composers and songwriters will continue to have incentive to create new music for generations to come.

Enclosed herewith is a copy of a notice we intend to send to your service provider, EV1Servers, unless you remove all infringing material from your site voluntarily within ten (10) days from the date of this notice. In accordance with the provisions of the Digital Millennium Copyright Act ("DMCA"), 17 U.S.C. 512, the notice details your infringing activities and demands that EV1Servers take down your website because you have not removed the infringing material, or itself face liability for copyright infringement.

In short, we ask that you promptly remove all unauthorized copyrighted material from your website and confirm its removal to us in writing. We anticipate and expect your cooperation in this matter. However, in the event that you choose to ignore this request, we shall press our demand that EV1Servers take down your site. The NMPA and MPA, and their respective members, also hereby reserve all of their rights and remedies under the copyright law with regard to your infringing activities.

We hope that you will choose to respect the rights of the creators of musical works and that no further action will be necessary. Please do not hesitate to contact me at (number withheld) or my associate, Michelle Zarr, at (number withheld) if you have any questions.

Sincerely,

Ross J. Charap

http://www.guitartabs.com/nmpa.php





RIAA: Trying to 'Herd' Victims
p2pnet.net

Warner Music, EMI, Vivendi Universal and Sony BMG's RIAA has come up with a new tactic that is shameful and derisive even for the Big 4, says Recording Industry vs The People's Ray Beckerman.

They want to 'herd' cases together, and that's exactly what they're trying to do with three of his cases, all of which are completely unrelated.

You can almost hear the whips cracking.

The RIAA (Recording Industry Association of America) is trying to get magistrate judge Robert M. Levy to agree to a "joint settlement conference" in Elektra v Schwartz, Maverick v Chowdhury, and Elektra v Torres, all in Brooklyn, New York, says Beckerman, going on:

I've also learned the RIAA is attempting to stage a massive 'group' settlement conference for many other of its Brooklyn cases.

As the attorney for the defendants in these three cases, I find this latest tactic to be outrageous and offensive.

The RIAA has been improperly designating all of its cases in Brooklyn federal court as "related" cases, in order to remove the random assignment system and make sure that every case winds up being administered by Magistrate Judge Levy. Neither Magistrate Levy nor District Judge Trager has done anything to stop this practice.

As a result, because most of the cases are pro se cases in which the defendant has defaulted altogether or has simply failed to show up in court, the RIAA lawyers have spent an enormous amount of time over the past four years talking to Magistrate Judge Levy alone - with no defendant's lawyer present.

Now they want to herd the defendants together and lead them like sheep to the slaughter, to so-called "settlement conferences", hoping the prestige of the judge's position will browbeat and coerce those defendants not inclined to pay the RIAA's extortion demands into submission.

Also they're hoping to achieve the massive economy of scale which is available to them, but which isn't available to the defendants, to further increase the economic imbalance.

If they can have a herded mass settlement conference like this, they can have one lawyer handling dozens and dozens of cases, while the defendants are for the most part represented by many different lawyers, if they're lucky enough to have a lawyer at all.

The whole tone of the proceeding would be offensive suggesting, as it would, that these greedy, law-breaking, private litigants are some sort of quasi-public agency, and have a special seat at the table in this courthouse - which apparently they do.

What makes matters even worse is the RIAA doesn't even know what the word "settle" means.

It means, basically, compromise. But my experience with the RIAA has been that everything is non-negotiable, and everyone either has to pay them the extorted money, or turn in someone else who may have committed a copyright infringement. It's extortion + investigation and has nothing to do with what lawyers traditionally mean when they say "settlement".
http://p2pnet.net/story/12420





Piracy in China is Smart, Hilarious, Critics Say
Michael Kanellos

"We Dine in Hell!"

If that were a real battle cry uttered in the Persian War, just think how different Western Civilization would be today.

Instead, it's the slogan emblazoned in large letters across a pirated copy of the movie 300 that some guy in downtown Beijing wanted to sell to me. He wanted 20 RMB (China Yuan Renminbi), but the price quickly went down to 10 RMB, or about $1.30. He didn't realize the comic gem he was holding. The four or so other guys who pestered me tried to sell 300, too. Their copies had the same movie poster art on the cover, but with the more appropriate "We Die in Hell."

Although the Chinese government is trying to crack down on piracy, illegal software and movies continue to thrive. An estimated 86 percent of software here is illegal. In fact, in some ways it seems a little worse.

Five years ago when I was last here, you had to go into the store and ask for DVD movies. The clerk would then lazily tilt his thumb toward a cardboard box full of titles.

Now they're offered on the street more than in the past. Plus, the selection of movies is getting closer to the time they are playing in the theaters. It used to be you were mostly offered movies that had just been released on DVD or older films, like The Wild Geese and Planes, Trains and Automobiles.

Not now. The first guy who ran up when I got out of my cab tried to pawn off Spiderman 3 on me. (In fact, every vendor led with Spiderman 3). Then he whipped out Shrek 3. Both are in theaters and neither is on disc yet. Then he started in with the movies that just came out on DVD: Casino Royale, The Queen, and so on. There was hardly anything more than 11 months old.

"Do you have Harry Potter...the new one?" I asked.

No, but his friend standing next to him did--an unreleased movie and for the same $1.30 price as The Queen and Bernie Mac's The Cleaner.

As an experiment, I bought three: Harry Potter and the Order of the Phoenix (not due in theaters for several weeks), Shrek 3 (in theaters) and Children of Men (on DVD).

What were the takeaways from my shopping spree?

1. Piracy is not just a bunch of random individuals. In a mile walk (admittedly, in an area known for touts), I got hit up at least five times for DVDs, once when a policeman was passing by. If everyone has the same movies, and in the same general area, there's some organization involved. And the government, while trying to crack down, seems to see this as a nuisance, at least on the street level.

Russia figures in here. Children of Men is in Russian. Shrek 3, meanwhile, is in English, but the credits are in Russian, so the cross-border trade is running smoothly.

2. On the other hand, we may not be dealing with a group of super criminals--or super consumers. If people were hawking copies of the Order of the Phoenix--a movie that's being kept under tight lock in a vault--on the street, the studios would clearly have some serious security issues. But no. It turns out that the disc is for Bibi Blocksberg, a German rip-off of Sweden's Pippi Longstocking dubbed in Mandarin with English subtitles. It's a multicultural fraud!

Shrek 3, the in-theater movie, was filmed by someone sitting in a theater. It's fuzzy and the light goes in and out, judging from the scenes I looked at. Pirated copies of this sort aren't going to put a huge dent in theater revenues, just as Rolex probably doesn't lose that many likely buyers to street vendors in Juarez, Mexico. In a sense, these guys are the best advertisers for the studios.

3. Still, you can see why DRM (digital rights management) is necessary. By contrast, Children of Men was pristine (except, of course, the Russian part). I also visited a "legitimate" DVD and CD store. They were selling Babel, a movie that came out last year, for 20 RMB. At around $2.60, that's nearly $18 off the normal retail price. Casino Royale cost 30 RMB. Both worked fine in a store demo. The store even had the original packaging.

4. There is a sociopolitical angle to this. An expatriate I was speaking to said that many of his Asian colleagues buy legitimate discs most of the time. But if the censors cut out profanity or sex, they will buy a pirated version copied from a U.S. disc. Sex and the City is bought this way. The desire to see the real versions doesn't justify piracy, but you can see why this makes it tough to eliminate.

5. The packaging on these things is a work of art. A few years ago, you'd get a disc in a sleeve--on one side there was the movie art poster. On the other, credits from another movie. In short, they looked somewhat hillbilly.

The pirates have upped their marketing and are aiming for a classier clientele. The Shrek 3 disc comes in a folding envelope that contains art from the same movie on both sides of the envelope. It even contains a blurb from David Ansen at Newsweek. "Smart and Hilarious," he said.

But if you read closely, you'll see that the director is listed as Joe Ptkya and it stars Michael Jordan. The sleeve of Children of Men lists as a bonus feature "all new deleted scenes" and a short on the making of Carlito's Way. Spencer Breslin is listed as the star. The typos alone make the packaging worth more than the sales price.

Anna Silk is the star of Order of the Phoenix. But the "Order" envelope also includes an ad on one of the inside panels for the pirated version of Spiderman 3 that the guy was selling, an interesting twist on cross-promotion. And the envelope may display an accurate version of what the legitimate movie art will look like. There's an ominous shot of Voldemort and his followers marching off somewhere in a scene that I don't recognize from the last movie.

Maybe they are all off to dine with the Spartans.
http://news.com.com/Piracy+in+China+...3-6187305.html





Florida Defendant Goes After RIAA for Fraud, Conspiracy, and Extortion
Eric Bangeman

As the RIAA has continued its legal assault on file-sharing, defendants are responding with what amount to boilerplate defenses and counterclaims against the RIAA's allegations of copyright infringement. One recent RIAA target, Suzy Del Cid, is fighting back with a counterclaim that accuses the RIAA of all sorts of nefarious misdeeds.

UMG v. Del Cid is being heard in the US District Court for the Middle District of Florida, and in a counterclaim filed late last week, Del Cid accused the RIAA of computer trespass, conspiracy, extortion, and violations of the Fair Debt Collection Practices Act.

We've seen many of these claims before. Tanya Andersen accused the RIAA of violating Oregon's RICO statute in Atlantic v. Andersen, saying that the RIAA "hired MediaSentry to break into private computers to spy, view files, remove information, and copy images." Del Cid's accusations are similar: "these record companies hired unlicensed private investigators—in violation of various state laws—who receive a bounty to invade private computers and private computer networks to obtain information—in the form of Internet Protocol ('IP') addresses—allowing them to identify the computers and computer networks that they invaded."

There are also allegations of a conspiracy in Del Cid's counterclaim. It details how, once John Doe lawsuits have been filed to learn the identity of the person behind an IP address allegedly engaged in file sharing, the Settlement Support Center contacts those fingered by the RIAA. Del Cid says that the Settlement Support Center takes "no account of the merits" of a particular claim, instead relying on the "inherent inequality of resources and litigation power" between the record companies and defendants. It's almost the exact same argument Andersen made in 2005, when she said that "record companies have repeated these unlawful and deceptive actions with many other victims throughout the United States."

In this case, Del Cid accuses the plaintiffs of computer trespass, fraud, and abuse, saying that they "intruded into Del Cid's personal computer to obtain information." If Del Cid was indeed on Kazaa, or any other file-sharing network, she will have a hard time convincing a judge that MediaSentry trespassed. "There's no reasonable expectation of privacy, given [Kazaa's] settings," Rich Vazquez, a partner at Morgan Miller Blair, noted when discussing a similar claim in Atlantic v. Andersen.

Vazquez believes that in order for the RIAA to be vulnerable to claims of malicious prosecution in cases like this, someone involved on the music industry's side would have to flip. "It would likely take someone on the inside testifying that the RIAA pursued people that it knew were innocent," Vazquez told Ars. "Then there would be a serious risk of malicious prosecution. But you've got to have them cold."

Del Cid does appear to break new ground with a couple of her counterclaims. She alleges that the RIAA used private investigators unlicensed by the state of Florida (where she lives) to track her online activities in violation of Florida law. Del Cid also accuses the RIAA of violating the Fair Debt Collection Practices Act by "knowingly collecting an unlawful consumer debt," referring to the Settlement Support Center's attempts to settle the case before the lawsuit was filed.

We are going on four years since the first file-sharing lawsuits were brought by the RIAA, and over the past several months, defendants' responses echo a number of similar themes—no doubt due to attorneys sharing "best practices" with one another. The computer trespass and conspiracy claims outlined above are two examples. Other defenses include arguments that the RIAA is barred from recovering damages from individual defendants due to its $115 million settlement with Kazaa and that the damages of $750 per song requested by the record labels are unconstitutionally excessive.

Aside from the occasional claim under state law—like Del Cid's charges that the RIAA used unlicensed private investigators—we are likely at a point where there is nothing new under the sun when it comes to file-sharing litigation. Any significant developments are going to come in the form of rulings—like the award of attorneys' fees to Debbie Foster in Capitol v. Foster—or even jury trials. The RIAA and its defendants have thrown just about everything in the book at one another, and it's up to the courts to decide what charges are going to stick.
http://arstechnica.com/news.ars/post...extortion.html





RIAA Throws in the Towel in Atlantic v. Andersen
Eric Bangeman

One of the most notorious file-sharing cases is drawing to a close. Both parties in Atlantic v. Andersen have agreed to dismiss the case with prejudice, which means that Tanya Andersen is the prevailing party and can attempt to recover attorneys' fees.

Tanya Andersen was originally sued by the RIAA in 2005. She's a disabled single mother with a nine-year-old daughter living in Oregon; she was targeted by the music industry for allegedly downloading gangster rap over Kazaa under the handle "gotenkito." She denied engaging in piracy, and in October 2005 she filed a countersuit accusing the record industry of racketeering, fraud, and deceptive business practices, among other things.

As we noted earlier today, counterclaims accusing the RIAA of all sorts of wrongdoing have become increasingly common. Late last month, Andersen filed a motion for summary judgment, saying that the plaintiffs have "failed to provide competent evidence sufficient to satisfy summary judgment standards" to show that she engaged in copyright infringement. Most notably, a forensic expert retained by the RIAA failed to locate "any evidence whatsoever" on Andersen's PC that she had engaged in file-sharing.

The RIAA has already taken a beating in the press in this case—accusing a disabled single mother of sharing songs like "Hoes in My Room" over Kazaa and then pressing doggedly ahead with the case despite mounting evidence that it had erred tends to look bad. Faced with the prospect of a case that was all but unwinnable, the RIAA has cut its losses by agreeing to dismiss the case.

What's unusual is that the RIAA has stipulated to a dismissal with prejudice, completely exonerating Andersen. Next to a negative verdict, an exonerated defendant is the last thing the RIAA wants. When faced with an undesirable outcome, the RIAA's tactic has been to move to dismiss without prejudice, a "no harm, no foul" strategy that puts an end to a lawsuit without declaring a winner and a loser. Dismissing a case with prejudice opens the RIAA up to an attorneys' fee award, which happened in the case of another woman caught in the music industry's driftnet, Debbie Foster.

With the original RIAA complaint dismissed, Andersen told Ars Technica in an e-mail that the counterclaim is "now standing on its own," meaning that she will still have the opportunity to argue her counterclaims before the court. Given the allegations she has made, prevailing with the counterclaim could prove even more troubling to the RIAA.

Given the facts of the case and the precedent set by Capitol v. Foster, an attorneys' fee award is not out of the question. Getting the RIAA to actually cut a check may prove to be a bit more difficult, as Foster's attorneys have discovered. You can track the progress of Foster's attempts to recover fees—and many other file-sharing cases—at Recording Industry vs. The People.
http://arstechnica.com/news.ars/post...-andersen.html





Committee Looks at Technology to Limit Illegal Filesharing
Press Release

Members of the House Science and Technology Committee today heard from university officials and a leading technology expert on different methods to reduce illegal filesharing on campus internet systems.

While most colleges and universities provide their students with internet access for educational and research purposes, a growing number of college students have instead come to use the system to illegally download and share copyrighted music and movies through free peer-to-peer (P2P) filesharing programs, such as eMule and LimeWire. In 2006, some 1.3 billion tracks were downloaded illegally in the U.S. by college students, compared with approximately 500 million legal downloads.

“Illegal filesharing isn’t just about royalty fees. It clogs campus networks and interferes with the educational and research mission of universities,” said Chairman Bart Gordon (D-TN). “It wastes resources that could have gone to laboratories, classrooms and equipment. And it is teaching a generation of college students that it’s alright to steal music.”

While other House Committees have examined the regulation of illegal filesharing, adequate technology will be the first line of defense in actually preventing it.

Witnesses at today’s hearing discussed their universities’ experiences with two different types of technological measures to prevent illegal filesharing on their networks: traffic-shaping systems and network-filtering systems.

Traffic-shaping systems control the speed of network transmissions based on where in the network they originate and what computer program sends them. This makes filesharing slower and more difficult by reducing the flow of data to and from computers that tend to transmit or receive copyright-infringing transmissions.

Network-filtering systems specifically identify and block transmissions that contain copyrighted material.
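
Neither system was spelled out in code at the hearing, but the mechanics are easy to sketch. Traffic shaping is typically built on a token-bucket rate limiter applied per traffic class: each class earns "tokens" at its allotted rate, and a packet may pass only if enough tokens are banked. A minimal Python sketch of that idea follows; every rate, subnet name, and classification rule in it is invented for illustration, not taken from the testimony.

    import time

    class TokenBucket:
        """Throttle one traffic class (e.g., P2P flows) to a sustained byte rate."""

        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec       # sustained fill rate
            self.capacity = burst_bytes          # maximum burst allowance
            self.tokens = burst_bytes            # start with a full bucket
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, capped at the burst size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True                      # forward the packet now
            return False                         # over budget: queue or drop it

    # Hypothetical campus policy: P2P traffic from residence-hall subnets is
    # held to 64 KB/s with a 256 KB burst; everything else passes unshaped.
    p2p_bucket = TokenBucket(rate_bytes_per_sec=64 * 1024, burst_bytes=256 * 1024)

    def handle_packet(src_subnet, app_signature, size_bytes):
        if src_subnet == "res-hall" and app_signature == "p2p":
            return p2p_bucket.allow(size_bytes)
        return True

A network-filtering system would sit at the same choke point but inspect the payload instead, comparing a fingerprint of the transferred file against a database of copyrighted works (roughly the approach sold by Audible Magic, whose CEO testified) and blocking matches outright rather than merely slowing them.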

Witnesses testified on the extent to which these technologies reduced illegal filesharing, and also on the technological issues surrounding them–such as privacy and impacts on the speed and reliability of campus networks.

“One of our nation’s greatest strengths is our educational system, and American universities are the envy of the world. Their mission is to educate students, and they should not condone or look the other way when their computer networks are used as a clearinghouse for digital piracy and illegal filesharing,” said Gordon.

Today’s witnesses included Dr. Charles Wight, Associate Vice President for Academic Affairs and Undergraduate Studies, University of Utah; Dr. Adrian Sannier, Vice President and University Technology Officer, Arizona State University, on leave from Iowa State University; Mr. Vance Ikezoye, President and CEO of Audible Magic Corporation; Ms. Cheryl Asper Elzy, Dean of University Libraries, Illinois State University, and member of the management team of ISU’s Digital Citizen Project; and Dr. Greg Jackson, Vice President and Chief Information Officer, University of Chicago.

http://science.house.gov/press/PRArt...px?NewsID=1858





News From The North



The TankGirl Diaries


DC Users and Admins Raided in Finland
Tank Girl

Earlier today, Finnish police raided a number of private homes in three different cities (Turku, Oulu and Espoo), confiscating the computers of suspected users and administrators of various Finnish Direct Connect hubs. The operation was initiated at the request of the Copyright Information and Anti-piracy Centre (CIAPC), known as TTVK in Finland. Immediately after the raids, the international copyright organization IFPI came out with a well-prepared statement (in Finnish), including typical propaganda comments from IFPI chairman John Kennedy and from the European MPAA representative Halli Kristinsson. There is no information on any arrests being made during the raids. All DC hubs in Finland have been invite-only for quite some time now, so it is likely that the police or CIAPC used infiltrators to gain access to the hubs in order to collect IP addresses and other information.

The previous major attack against Finnish filesharers happened in December 2004 when the popular torrent site Finreactor was busted, leading to dozens of charges against site administrators and moderators. Most of the Finreactor cases are still waiting for their turn in the appeals court. Finland and Sweden are among the leading Direct Connect countries in the world in terms of hub and user counts.
http://www.p2pconsortium.com/index.php?showtopic=13196





McAfee's Blubster Deal
Jon Newton

Yesterday, I did a post on the fact that so-called security company McAfee has marked me RED! merely because I have an ad for the p2p music sharing client Blubster on p2pnet.

McAfee claims Blubster is loaded with all kinds of nasty stuff and is using this assertion to warn people away from it.

This McAfee item also carries a weird graphic which, among other things, has me linking to something called yourmercifulgod.co.uk.

I have a great many links, and hundreds, if not thousands, of comment posts include them as well. So it's quite possible there's a link to this sinful site somewhere on p2pnet. But only God knows where it is.

And I link to SpyBot and The Pirate Bay too!!! Horror of Horrors!

So we'll soon see Google and Yahoo and everyone else similarly honoured with unmeritorious mentions on McAfee. Right?

Meanwhile, my mate Pablo Soto over in Spain (he's the guy who created Blubster, among other applications) has a post about this McAfee farce on his blog.

Here's what he says >>>>>>>>>>>>>

During the summer of 2002 McAfee approached me. A nice and pretty marketing manager of its Consumer Division wanted to ink a deal with us to distribute their Security Center software with our application: P2P music sharing client Blubster.

They were so excited. File-sharing users download viruses 24/7, they said. Not in an MP3-only network, I replied. But hey, it was the kind of deal that could hurt neither our finances nor our users. And so we did it.

Literally millions of our users opted in to install McAfee's software, and we received a good amount of revenue.

They were paying much less than what our competitors' bundles paid, but it was a good deal.

We were not alone; days later, more and more press releases started to flow announcing McAfee's deals with almost all the other top-tier P2P software distributors. Including Grokster, a.k.a. the father of spyware. Yup, those users really needed a good antivirus, and I mean a good one.

Make no mistake here: McAfee attracted a bigger audience than they could have imagined. And they were obviously happy with it.

Today, doing my daily check of my friend Jon's site, I couldn't believe it when I read that McAfee is targeting him. And for what, you may ask? For linking to Blubster.com (crowd wows).

Right. They bundled their software with us and now they protect our users from us because we bundle software, they say.

McAfee even removed the press releases announcing the P2P deals.

Not only is our software totally free of viruses, spyware and unwanted programs, it was a major distributor of McAfee's products, and they loved the traffic.

These guys must be kidding.

No, Pablo. They're perfectly serious.

Stay Tuned.
http://p2pnet.net/story/12363





Censorship 'Changes Face of Net'
BBC

Amnesty International has warned that the internet "could change beyond all recognition" unless action is taken against the erosion of online freedoms.

The warning comes ahead of a conference organised by Amnesty, where victims of repression will outline their plights.

The "virus of internet repression" has spread from a handful of countries to dozens of governments, said the group.

Amnesty accused companies such as Google, Microsoft and Yahoo of being complicit in the problem.

Website closures

When challenged on their presence in countries such as China in the past, the companies accused have always maintained that they were simply abiding by local laws.

Amnesty is concerned that censorship is on the increase.

"The Chinese model of an internet that allows economic growth but not free speech or privacy is growing in popularity, from a handful of countries five years ago to dozens of governments today who block sites and arrest bloggers," said Tim Hancock, Amnesty's campaign director.

"Unless we act on this issue, the internet could change beyond all recognition in the years to come.

"More and more governments are realising the utility of controlling what people see online and major internet companies, in an attempt to expand their markets, are colluding in these attempts," he said.

According to the latest Open Net Initiative report on internet filtering, at least 25 countries now apply state-mandated net filtering including Azerbaijan, Bahrain, Burma, Ethiopia, India, Iran, Morocco and Saudi Arabia.

Egyptian blogger

Filtering was only one aspect of internet repression, the group said. It added that increasingly it was seeing "politically motivated" closures of websites and net cafes, as well as threats and imprisonments.

Twenty-two-year-old Egyptian blogger Abdul Kareem Nabeel Suleiman was sentenced in February to four years in prison for insulting Islam and defaming the President of Egypt.

Fellow Egyptian blogger Amr Gharbeia told the BBC that the internet was allowing people to express themselves: "The web is creating a more open society, it is allowing more people to speak out. It's only natural that upsets some people."

The Amnesty conference - Some People Think the Internet is a Bad Thing: The Struggle for Freedom of Expression in Cyberspace - will have some well-known speakers including Wikipedia founder Jimmy Wales.

It marks the first anniversary of Amnesty's website irrepressible.info, which is being relaunched to become an information hub for anyone interested in the future of internet freedom.
http://news.bbc.co.uk/go/pr/fr/-/2/h...gy/6724531.stm





Do Obama and Lieberman Think the Internet is Dangerous?
David Cassel

Does Joe Lieberman hate the internet? Is Barack Obama trying to scare you? Welcome to National Internet Safety Month. Its sole purpose? Reminding America how dangerous the internet is.

I’m not kidding. That’s the gist of an official resolution, quietly signed by 18 U.S. Senators in both parties at the end of May (including Senators Obama and Lieberman). Senate Resolution 207 specified that the month of June provides Americans an opportunity to “learn more about the dangers of the Internet.” Got anything positive to say about the net? Save it for July, pal. June is for commending organizations which “promote awareness of the dangers of the Internet.”

They might as well call it internet-is-dangerous month. But let’s look at some of their examples. What constitutes a danger? If someone puts a filter on your computer to censor it — it’s dangerous to disable it! You can say this about America’s youth — more than 3 out of 10 can de-activate censor-ware, according to the Senators’ own statistics. Congratulations, kids! Whoops, I’m sorry — I mean…danger!!

They’re actually talking about 18-year-olds here, at least in some cases. The Senators cite an age range from 5th grade through high school seniors. I guess we wouldn’t want any of those 18-year-olds thinking for themselves.

Another “danger” is online bullying — although apparently 77% of the students surveyed said that hadn’t happened to them, and that they didn’t even know anyone that it had happened to. And what’s dangerous about your mom knocking on your door asking what you’re doing? Not telling her. Danger! Danger! This calls for a Senate resolution….

Less than a quarter of the teenagers in their sample are even bothering to hide what they’re doing online, according to the Resolution. But that’s good enough for the Senators. And another “dangerous” behavior cited in the anti-internet resolution? Daring to meet someone in real life — ever — after having first met them online. Your virtual friends should never, ever be met. Until you’re 18.

But it’s not just a resolution. A few corporations are actually trying to cash in on this misguided disinformation campaign, including BSafe Online, a Tennessee company which markets PC filtering software. (I wonder if it’s one of the ones that can be disabled by 31% of America’s teenagers…) Their CEO has an encouraging message for parents about safety on the internet. “This is a battle they must fight everyday with their children in order to keep pornographers, sexual predators and cyber-bullies at bay.” And keeping those pornographers and sexual predators away will cost you a mere $70 a year…

The co-founder of another filter company promised parents “a spike in persistence of online predators” this summer. And as an added bonus, PC Pandora has also added the ability to spy on your partner (in case you’ve accidentally married a sexual predator). If you want to start worrying right away, they’ve even published a web page with 29 possible signs that your partner might be cheating on you. (Which include working late, avoiding you, not avoiding you….) Maybe they’re just getting a jump on National Internet Marital Fidelity Month.

So now you know. Your tax dollars paid for a bunch of techno-phobes to pass congratulatory resolutions about mom, Apple Pie, and the need to keep teenagers off MySpace. (BSafe’s press release specifically touts their ability to squelch all social networking sites.) Here’s a list of the Senators that co-sponsored this resolution.

Lisa Murkowski (R - AK) Joe Lieberman (I - CT)
Sheldon Whitehouse (D - RI) Barack Obama (D - IL)
Ted Stevens (R - AK) Mary Landrieu (D - LA)
David Vitter (R - LA) Norm Coleman (R - MN)
Larry Craig (R - ID) Evan Bayh (D - IN)
Kay Hutchison (R - TX) Blanche Lincoln (D - AR)
Mike Crapo (R - ID) Charles Schumer (D - NY)
Max Baucus (D - MT) John Thune (R - SD)
Patrick Leahy (D - VT) Pete Domenici (R - NM)

But if you think your Senator is more enlightened about the internet, remember — those are just the co-sponsors.

This internet-is-dangerous resolution was passed…unanimously.
http://tech.blorge.com/Structure:%20...-is-dangerous/





Saving the Internet

Cyberspace can be made safer from the chaos and crime that threaten to overwhelm it. But most recipes for security and order come at a very steep price: the loss of the Internet’s creative potency.

By Jonathan Zittrain

The famed Warner Bros. cartoon antagonist Wile E. Coyote demonstrates a fundamental principle of cartoon physics. He runs off a cliff unaware of its ledge and continues forward without falling. The Coyote defies gravity until he looks down and sees that there is nothing under him. His mental gears turn as he contemplates his predicament. Then: Splat.

The Internet and the PC are following a similar trajectory. They were designed by people who shared the same love of amateur tinkering as the Coyote and who dealt with problems only as they arose—or left them to individual users to deal with. This “procrastination principle,” together with a design premised on contributions from anyone who cared to pitch in, has caused the Internet and PC to emerge from the realms of researchers and hobbyists and to win out over far more carefully planned and funded networks and information appliances.

The runaway successes of the Internet and PC with the mainstream public have put them in positions of significant stress and danger. Though the Internet’s lack of centralized structure makes it difficult to assess the sturdiness of its foundations, there are strong signals that our network and computers are subject to abuse in ways that have become deeper and more prevalent as their popularity has grown.

The core boon and bane of the combined Internet and PC is its generativity: its accessibility to people all over the world—people without particular credentials or wealth or connections—who can use and share the technologies’ power for various ends, many of which were unanticipated or, if anticipated, would never have been thought to be valuable.

The openness that has catapulted these systems and their evolving uses to prominence has also made them vulnerable. We face a crisis in PC and network security, and it is not merely technical in nature. It is grounded in something far more fundamental: the double-edged ability for members of the public to choose what code they run, which in turn determines what they can see, do, and contribute online.

Poor choices about what code to run—and the consequences of running it—could cause Internet users to ask to be saved from themselves. One model to tempt them is found in today’s “tethered appliances.” These devices, unlike PCs, cannot be readily changed by their owners, or by anyone the owners might know, yet they can be reprogrammed in an instant by their vendors or service providers (think of TiVo, cell phones, iPods, and PDAs). As Steve Jobs said when introducing the Apple iPhone earlier this year, “We define everything that is on the phone. You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone, and then you go to make a call and it doesn’t work anymore. These are more like iPods than they are like computers.”

If enough Internet users begin to prefer PCs and other devices designed along the locked-down lines of tethered appliances, that change will tip the balance in a long-standing tug of war from a generative system open to dramatic change to a more stable, less-interesting system that locks in the status quo. Some parties to the debates over control of the Internet will embrace this shift. Those who wish to monitor and block network content, often for legitimate and even noble ends, will see novel chances for control that have so far eluded them.

To firms with business models that depend on attracting and communicating easily with customers online, the rise of tethered appliances is a threat. It means that a new gatekeeper is in a position to demand tribute before customers and vendors can connect—a discriminating “2” inside “B2C.”

Two Generative Triumphs: Network and PC

Some brief history: The mainstream consumer network environment of the early 1990s looked nothing like today’s Internet, nor did it evolve to become the Internet we have today. As late as 1995, conventional wisdom held that the coalescing global network would be some combination of the proprietary offerings of the time, services like CompuServe, AOL, and Prodigy. Yet those companies went extinct or transformed into entirely different businesses. They were crushed by a baling-wire-and-twine network built by government researchers and computer scientists, one that had no CEO and no master business plan.

The leaders of the proprietary networks can be forgiven for not anticipating the Internet’s rise. Not only was there no plan for the provision of content on the Internet, there was an outright hostility toward many forms of it. The Internet’s backbone, operated by the U.S. National Science Foundation, had an acceptable-use policy prohibiting commercial endeavors. For years the Internet remained a backwater, a series of ad hoc connections among universities and research laboratories whose goal was to experiment with networking. Yet what the developers made was a generative system, open to unanticipated change by large and varied audiences. It is this generativity that has caused its great—and unanticipated—success.

Consumer applications were originally nowhere to be found on the Internet, but that changed in 1991, after the Internet’s government patrons began permitting personal and commercial interconnections without network research pretexts, and then ceased any pretense of regulating the network at all. Developers of Internet applications and destinations now had access to a broad, commercially driven audience. Proprietary network service providers who had seen themselves as offering a complete bundle of content and access became mere on-ramps to the Internet, from which their users branched out to quickly thriving Internet destinations for their programs and services. For example, CompuServe’s Electronic Mall, an e-commerce service intended to be the exclusive means by which outside vendors could sell products to CompuServe subscribers, disappeared under the avalanche of individual Web sites selling directly to anyone with Internet access.

PCs likewise started off slowly in the business world (even the name “personal computer” evokes a mismatch). Businesses first drew upon custom-programmed mainframes—the sort of complete package IBM offered in the 1960s, for which software was an afterthought—or relied on information appliances like smart typewriters. Some businesses obtained custom-programmed minicomputers, and employees accessed the shared machines through dumb workstations using small, rudimentary local-area networks. The minicomputers typically ran a handful of designated applications—payroll, accounts receivable, accounts payable, and company-specific programs, such as case-management systems for hospitals or course-registration programs for universities. There was not much opportunity for skilled users to develop and share innovative new applications.

Through the 1980s, the PC steadily gained traction. Its ability to support a variety of programs from a variety of makers meant that its utility soon outpaced that of specialized appliances like word processors. Dedicated word processors were built to function the same way over their entire product lifetimes, whereas PC word-processing software could be upgraded or replaced with an application from a competitor without having to replace the PC itself. This IT ecosystem, comprising fixed hardware and flexible software, soon proved its worth.

PCs had some drawbacks for businesses—documents and other important information ended up stored across different PCs, and enterprise-wide backup could be a real headache. But the price was right, and people entering the workforce soon could be counted on to have skills in word processing and other basic PC tools. As a round of mature applications emerged, there was reason for most every white-collar worker to be assigned a PC, and for an ever broader swath of people to want one at home. These machines might have been bought for one purpose, but their flexible architecture meant that they could quickly be redeployed for many others. A person who bought a PC for word processing might then discover the joys of e-mail, gaming, or the Web.

Four Elements of Generativity (Located at the end of this article)

Bill Gates used to describe Microsoft’s vision as “a computer on every desk.” That may have reflected a simple desire to move units—nearly every PC sold meant more money for Microsoft—but as the vision came true in the developed world, the implications went beyond Microsoft’s profitability. Whether running Mac or Windows, an installed base of tens of millions of PCs meant that there was tilled soil in which new software could take root. A developer writing an application would not need to convince people that it was worth buying new hardware to run it. He or she would need only persuade them to buy the software itself. With the advent of PCs connected to the Internet, people would need only click on the right link and new software could be installed. The fulfillment of Gates’s vision significantly boosted the generative potential of the Internet and PC, opening the floodgates to innovation.

Benefits of Generativity: Innovation and Participation

Generativity is a system’s capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences. As such, generativity produces two main benefits: The first is innovative output—new things that improve people’s lives. The second is participatory input—the opportunity to connect to other people, to work with them, and to express one’s own individuality through creative endeavors.

Nongenerative systems can grow and evolve, but such growth is channeled through their makers: Sunbeam releases a new toaster in response to anticipated customer demand, or an old proprietary network like CompuServe adds a new form of instant messaging by programming it itself. When users pay for products or services, they can exert market pressure on the companies to develop the desired improvements or changes. This is an indirect path to innovation, and there is a growing body of literature about its chief limitation: a persistent bottleneck that prevents large incumbent firms from developing and cultivating certain new uses, despite the benefits they could enjoy with a breakthrough.

For example, Columbia Law School professor Tim Wu has shown that when wireless telephone carriers control what kinds of mobile phones their subscribers may use, those phones often have undesirable features and are difficult for third parties to improve. Some carriers have forced telephone makers to limit the mobile phones’ Web browsers to certain carrier-approved sites. They have eliminated call timers on the phones, even though these would be trivial to implement and are much desired by users, who would like to monitor whether they have exceeded the allotted minutes for their monthly plan. These limitations persist despite competition among several carriers.

The reason big firms exhibit such innovative inertia, according to a theoretical framework by Clayton Christensen, is twofold: Big firms have ongoing investments in their existing markets and in established ways of doing business, and disruptive innovations often capture only minor or less-profitable markets—at first. By the time the big firms recognize the threat, they are not able to adapt. They lose, but the public wins.

For disruptive innovation to come about, newcomers need to be able to reach people with their offerings. Generative systems make this possible. Indeed, they allow users to try their hands at implementing and distributing new ideas and technologies, filling a crucial gap that is created when innovation is undertaken only in a profit-making model, especially one in which large firms dominate.

Consider novel forms of commercial and social interaction that have bubbled up from unexpected sources in recent years. Online auctions might have been ripe for the plucking by Christie’s or Sotheby’s, but upstart eBay got there first and stayed. Craigslist, initiated as a dot-org by a single person, dominates the market for classified advertising online. Ideas like free Web-based e-mail, hosting services for personal Web pages, instant-messaging software, social networking sites, and next-generation search engines emerged from individuals or small groups wanting to solve their own problems or try something neat, rather than from firms realizing there were profits to be gleaned.

Eric von Hippel, head of MIT’s Innovation and Entrepreneurship Group, has written extensively about how rarely firms welcome improvements to their products by outsiders, including their customers, even when they could stand to benefit from them (see “Customers as Innovators: A New Way to Create Value,” by Stefan Thomke and Eric von Hippel, HBR April 2002). In his work, von Hippel makes the case to otherwise-rational firms that the users of their products can and often do serve as disruptive innovators, improving products and sometimes adapting them to entirely new purposes. They come up with ideas before there is widespread demand and vindicate them sufficiently to get others interested. These users are commonly delighted to see their improvements shared. When interest gets big enough, companies can step in and fully commercialize the innovation.

We have thus settled into a landscape in which both amateur and professional, small- and large-scale ventures contribute to major innovations. Consumers can become enraptured by a sophisticated “first-person shooter” video game designed by a large firm in one moment and by a simple animation featuring a dancing hamster in the next. So it is unsurprising that the Internet and PC today comprise a fascinating juxtaposition of sweepingly ambitious software designed and built like a modern aircraft carrier by a large contractor, alongside killer applets that can fit on a single floppy diskette. OS/2, an operating system created as a joint venture between IBM and Microsoft, absorbed more than $2 billion of research and development investment before its plug was pulled, whereas Mosaic, the first graphical PC Web browser, was written by a pair of students during a university break.

Generative growth can blend well with traditional market models. Big firms can produce software where market structure and demand call for such enterprise; smaller firms can fill in niches; and amateurs, working alone and in groups, can design both inspirational applets and more labor-intensive software that increases the volume and diversity of the technological ecosystem. Once an eccentric and unlikely invention from outsiders has gained traction, traditional means of raising and spending capital to improve a technology can shore it up and ensure its exposure to as wide an audience as possible. An information technology ecosystem comprising only the products of the free software movement would be much less usable by the public at large than one in which big firms help sand off rough edges. GNU/Linux has become user friendly thanks to firms that package and sell copies, even if they cannot claim proprietary ownership of the software itself. Tedious tasks that improve ease of mastery for the uninitiated are probably best done through corporate models: creating smooth installation engines, extensive help guides, and other forms of hand-holding to help users embrace what otherwise might be an off-putting technical software program or Web service.

For the individual, there is a unique joy to be had in building something—even if one is not the best craftsperson. (This is a value best appreciated by experiencing it; those who demand proof may not be easy to convince.) The joy of being helpful to others—to answer a question simply because it is asked and one knows a useful answer, to be part of a team driving toward a worthwhile goal—is among the best aspects of being human. Our information technology architecture has stumbled into a zone where helpfulness and teamwork can be elicited among and affirmed for tens of millions of people. Novel invention by engineers at the technical layer allows artists to contribute at the content layer. The feeling is captured fleetingly when strangers are thrown together in adverse situations and unite to overcome them—an elevator that breaks down, a blizzard or blackout that temporarily paralyzes the normal cadences of life but that leads to wonder and camaraderie rather than fear. The Internet of the early twenty-first century has distilled some of these values, promoting them without the kind of adversity or physical danger that could make a blizzard fun for the first day but divisive and lawless after the first week without structured relief.

The Generative Stall

Generative technologies need not produce forward progress, if by progress one means something like enhancing social welfare. Rather, they foment change. Generative systems are by their nature unfinished, awaiting further elaboration from users and firms alike. As such, they can be threatened as soon as their popularity causes abusive business models to pop up. The very openness and user-adaptability that make the Internet a creative wellspring also allow for the propagation of assorted evils—viruses, spam, porn, predation, fraud, vandalism, privacy violations, and potentially ruinous attacks on Web sites and on the integrity of the Internet itself. This is becoming an existential threat to the generative IT ecosystem.

The benefit of the generative PC is that it may be repurposed by a neophyte user at the click of a mouse. That is also a huge problem, for two main reasons. First, the PC user who clicks on bad code in effect hands over control of the PC to a total stranger. Second, the threat presented by bad code has been steadily increasing. The most well-known viruses have so far had completely innocuous payloads. The 2004 Mydoom worm spread like wildfire and affected connectivity in millions of computers around the world. Though it cost billions of dollars in lost productivity, Mydoom did not tamper with data, and it was programmed to stop spreading at a set time. Viruses like Mydoom are more like the crime of graffiti, with no economic incentive, than like the sale of illegal drugs, with its large markets and sophisticated criminal syndicates.

There is now a business model for bad code—one that gives many viruses and worms payloads for purposes other than simple reproduction. What seemed truly remarkable when it was first discovered is now commonplace: viruses that compromise PCs to create large “botnets” open to later instructions. Such instructions have included directing the PC to become the botnet’s own e-mail server, sending spam by the millions to e-mail addresses harvested from the hard disk of the machine itself or from Web searches, all in a process typically unnoticeable to the PC’s owner. One estimate pegs the number of PCs involved in such botnets at 100 million to 150 million—one quarter of all the computers on the Internet as of early 2007. Such zombie computers were responsible for more than 80% of the world’s spam in June 2006, and spam in turn accounted for an estimated 80% of the world’s total e-mail that month.

Because the current computing and networking environment is so sprawling and dynamic, and its ever-more-powerful building blocks are owned and managed by regular citizens rather than technical experts, its vulnerability has increased substantially. The public will not and cannot maintain their PCs to the level that professional network administrators do, despite the fact that their machines are significantly more powerful than the minicomputers of the 1970s and 1980s. That vulnerability is exacerbated by people’s increased dependence on the Internet. Well-crafted worms and viruses routinely infect vast swaths of Internet-connected personal computers. In 2004, for example, the Sasser worm infected more than half a million computers in three days. The Sapphire/Slammer worm in January 2003 went after a particular kind of Microsoft server and infected 90% of them—120,000 machines—within ten minutes. These hijacked machines together were performing 55 million searches per second for new targets just three minutes after the first computer fell victim. If any of these pieces of malware had truly “mal” or nefarious purposes—for example, to erase hard drives or randomly transpose numbers in spreadsheets—nothing would stand in the way.

The fundamental tension is that the point of a PC is to be easy for users to reconfigure to run new software, but when users make poor decisions about what new software to run, the results can be devastating to their machines and, if they are connected to the Internet, to countless others. Simply choosing a more secure platform does not solve the problem. To be sure, Microsoft Windows has been the target of malware infections for years, but this in part reflects Microsoft’s dominant market share. As more users switch to other platforms, those platforms will become appealing targets as well. And the most enduring way to subvert security measures may be through the front door—by simply asking a user’s permission to add some malware disguised as new functionality—rather than trying to steal in through the back to silently exploit an operating system flaw.

PC and Internet security vulnerabilities are a legitimate menace, and people are right to be concerned. However, the most likely reactions if they are not forestalled will be at least as unfortunate as the security problems themselves. Users will choose PCs that operate more like appliances, forfeiting the ability to easily install new code themselves. Instead they will use their machines as mere dumb terminals linked to Web sites that offer added interactivity. Many of these Web sites are themselves amenable to appliance-like behavior. Indeed, what some have applauded as Web 2.0—a new frontier of peer-to-peer networks and collective, collaborative content production—is an architecture that can be tightly controlled and maintained by a central source, which may choose to operate in a generative way but is able to curtail those capabilities at any time.

Consider Google’s terrific map service. It is not only highly useful to end users, it also has an open application-programming interface to its map data. Thanks to the open API, a third-party Web site creator can start with a mere list of street addresses and immediately produce on her site a Google map with a digital pushpin at each address. This allows any number of “mashups” to be made, combining Google maps with third-party geographic data sets. Web developers are using the Google Maps API to create Web sites that find and map the nearest Starbucks; create and measure running, hiking, or biking routes; pinpoint the locations of traffic-light cameras; and collate prospective partners on Internet dating sites to produce instant displays showing where one’s best matches are located.

In allowing coders access to its map data, Google’s mapping service is generative. But its generativity is contingent: Google assigns each Web developer a key and reserves the right to revoke it at any time, for any reason—or to terminate the whole service. It is certainly understandable that Google, in choosing to make a generative service out of something in which it has invested heavily, would want to control it. But this puts within the control of Google, and anyone who can regulate Google, all downstream uses of Google Maps—and maps in general, to the extent that Google Maps’ excellence means other mapping services will fail or never be built.
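
The shape of that control is easy to see in a toy sketch of a revocable-key regime, written from the provider's side. To be clear, none of this is Google's actual code; every name below is invented to show the pattern.

    # The key registry lives with the provider; key -> the site registered to it.
    issued_keys = {"dev-abc123": "starbucks-finder.example.com"}

    def revoke(key):
        """The provider can cut off a downstream mashup at any moment, for any reason."""
        issued_keys.pop(key, None)

    def serve_map_data(key, referring_site):
        """Every request is checked against the registry before any data flows."""
        if issued_keys.get(key) != referring_site:
            raise PermissionError("key invalid or revoked")
        return "<map tiles and pushpin overlays>"

    # The mashup works today...
    print(serve_map_data("dev-abc123", "starbucks-finder.example.com"))

    # ...and can be dead tomorrow, along with every site built on top of it.
    revoke("dev-abc123")
    try:
        serve_map_data("dev-abc123", "starbucks-finder.example.com")
    except PermissionError as err:
        print(err)

The open protocols of the Internet itself have no such registry to consult, which is precisely why no one can flip an equivalent switch on them.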

What’s Generative and What’s Not? (Located at the end of this article)

The business models of other next-generation Internet appliances and services are neither enduringly generative nor, in some instances, as unambiguously generative as the open Internet and PC. For example, Microsoft’s Xbox is a video game console that has as much computing power as a PC and is networked to other users with Xboxes. Microsoft loses money on every Xbox it sells but makes it back by selling its own games and other software to run on it. Third-party developers can write Xbox games, but they must obtain a license from Microsoft (which includes giving Microsoft a share of the profits) before they can distribute them.

Most mobile phones are similarly constrained: They are smart, and many can access the Internet, but the access is channeled through browsers provided and controlled by the phone-service vendor. Many PDAs come with software provided through special arrangements between device and software vendors, as Sony’s Mylo does when it offers Skype. Without first inking deals with device makers, software developers cannot have their code run on the devices even if users desire it. In 2006, AMD introduced the Internet Box, a device that looks just like a PC but cannot run any new software without AMD’s permission. What’s more, AMD can install on the machines any software it chooses—even after they have been purchased.

The growing profusion of tethered appliances takes many Internet innovations and wraps them up neatly and compellingly, which is good—but only if the Internet and PC can remain sufficiently in the center of the digital ecosystem to produce the next round of innovations and to provide competition for the locked-down appliances. The balance between the two spheres is precarious, and it is slipping toward the appliances. People buy these devices for their convenience or functionality, and some may appreciate the fact that they limit the damage users can do through ignorance or carelessness. But appliances also circumscribe the beneficial applications users can create or receive from others—applications they may not realize are important to them when they purchase the device. The risk, then, is that users will unwittingly trade away the future benefits of generativity, a loss that may go unappreciated even as innovation tapers off.

Eliminate the PC from many dens and living rooms, and we eliminate the test bed and distribution point for new software. We also eliminate the safety valve that keeps information appliances honest: If TiVo makes a digital video recorder that too-strictly limits what people can do with their recorded video, customers will turn to DVR software like MythTV, which records and plays TV shows on PCs; if mobile phones are too expensive, people will use Skype.

Of course, people don’t buy PCs as insurance policies against appliances that limit their freedom (even though they serve this vital function); they buy them to perform certain preconceived tasks. But if Internet security breaches and other sorts of anarchy threaten the PC’s ability to perform those tasks reliably, most consumers will not see the PC’s merit, and the safety valve will be lost. If the PC ceases to be at the center of the information technology ecosystem, the most restrictive aspects of information appliances will become commonplace.

Information Appliances and Regulation

When information appliances stay connected to their makers, those companies can be asked to implement changes to the way they work long after they have been purchased for a specific use. Consider the case of TiVo v. EchoStar. TiVo introduced the first digital video recorder in 1998, allowing consumers to record and time-shift TV shows. In 2004, TiVo sued satellite TV distributor EchoStar for infringing TiVo’s patents by building DVR functionality into some of EchoStar’s dish systems. TiVo won and was awarded $90 million in damages and interest—but that was not all. In August 2006 the court issued an order directing EchoStar to disable the DVR functionality in most of the infringing units then in operation.

In other words, the court ordered EchoStar to kill DVRs in the living rooms of people around the world who had bought them and who might be watching programs recorded on them at that very instant. Imagine sitting down to watch a much-anticipated TV show or sportscast and instead finding that all your recordings have been zapped, along with the DVR functionality itself—killed by a remote signal traceable to the stroke of a judge’s quill. The logic is plain: If an article infringes intellectual property rights, under certain circumstances it can be impounded and destroyed. It is typically impractical to go around impounding every item that falls under this category (police officers don’t go door-to-door looking for Rolex and Louis Vuitton knockoffs), so plaintiffs and prosecutors traditionally go after only those selling the contraband goods. But the tethered functionality of a DVR means that EchoStar can easily effect the remote modification or even destruction of its units. (The case is currently on appeal.)

Remote modification can also allow makers to repurpose their appliances, sometimes in ways that are undesirable to their owners. General Motors and BMW offer onboard systems like OnStar to provide car owners with a variety of useful services and functions, including hands-free calling, turn-by-turn driving directions, tire pressure monitoring, and emergency roadside assistance. Because the systems are networked and remotely upgradeable, the U.S. Federal Bureau of Investigation sought to use the technology to eavesdrop on conversations occurring in a vehicle by remotely reprogramming the onboard system to function as a roving bug. The bureau obtained secret orders requiring one carmaker to carry out that modification, and the company complied under protest. A U.S. federal appellate court found in The Company v. the United States that the anonymous carmaker could theoretically be ordered to perform the modifications but that the FBI’s surveillance interfered with the computer system’s normal use. A car with a secret open line to the FBI could not simultaneously connect to the automaker. If the occupants tried to use the system to summon emergency help, it would not function. (Presumably, the FBI would not come to the aid of the motorist the way the automaker promises to do.) The implication of the ruling was that secret FBI surveillance of this sort would be legally permissible if the system were redesigned to simultaneously process emergency requests.

A shift to smarter appliances, ones that can be updated by—and only by—their makers, is fundamentally changing the ways in which we experience our technologies. They become contingent: Even if you pay up front for them, such appliances are rented instead of owned, subject to revision by the maker at any moment.

What price will that control exact? It is difficult to sketch a picture of all the innovative changes that will not happen in a future dominated by appliances, but history offers a guide. Before the generative PC and Internet entered the mainstream around 1995, the IT landscape saw comparatively few innovations. In the dozen years since then, the Internet and PC have combined to inspire accelerated technical innovation outside the traditional firm-based R&D process: new Web-powered forms of business value, new social networks and communities of interest, and experiments in collaborative, collective intelligence. They are crucibles for new forms of culture, political action, and participation, and they will lose their power if the Internet and its end points migrate toward more reliable but less changeable configurations.

Saving the Generative Internet

If the Internet status quo is untenable, and the solution of tethered appliances creates too many undesirable consequences, we must look for other solutions. The central challenge facing today’s information technology ecosystem is to maintain a generative openness to experiments that can be embraced by the mainstream with as few barriers as possible, in the face of potentially overwhelming problems that arise precisely because it is so flexible and powerful. We may draw useful general guidelines from some of the success stories of generative models that have shown staying power. Here is a brief sampling:

Netizenship.

One solution to the generative problem deploys tools for people to use, usually in small groups, to prevent what they see as abuse. For example, Wikipedia offers easy-to-master tools that make it possible for self-identified editors to combat vandalism that arises from allowing anyone to edit entries. It is a system at once naïve and powerful compared with the more traditional levers of regulation and control designed to stop outliers from doing bad things. It is the opposite of the client-service model in which a customer calls a help line. Rather, it is like a volunteer fire department or a neighborhood watch. Not everyone will be able to fight fires or watch the neighborhood—to be sure, some will be setting the fires!—but even a small subset can become a critical mass.

The propagation of bad code is a social problem as well as a technical one, and people can enter into a social configuration to attack it. A small application could run unobtrusively on PCs of participating users and report either to a central source, or perhaps only to each other, information about the vital signs and running code of that PC, which would help other PCs understand whether the code is risky or not. With that information, one PC could use other unidentified PCs’ experiences to empower the user: At the moment the user is deciding whether to run some new software, the application’s connections to other machines could show, say, how many of the other machines were running the code, whether the machines of self-described experts were running it, whether those experts had been moved to vouch for it, and how long the code had been available. It could also signal the amount of unintended network traffic, pop-up ads, or crashes the code appears to cause. These sorts of data could be viewed on a simple dashboard, letting PC users make quick judgments in light of their own risk preferences.
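The essay describes this dashboard only in prose. As a rough sketch of how little machinery the aggregation step needs, here is a minimal Python illustration; the report fields, function names, and example values are invented for the purpose, not a description of any deployed system.

from dataclasses import dataclass

@dataclass
class PeerReport:
    running: bool   # is this peer currently running the code?
    expert: bool    # does the peer self-identify as an expert?
    vouched: bool   # has the peer explicitly vouched for the code?
    crashes: int    # crashes the peer attributes to the code

def summarize(code_id: str, days_available: int, reports: list[PeerReport]) -> dict:
    """Reduce anonymous peer reports to the dashboard figures the
    essay describes: adoption, expert adoption, vouches, instability."""
    runners = [r for r in reports if r.running]
    experts = [r for r in runners if r.expert]
    return {
        "code": code_id,
        "days_available": days_available,
        "installs": len(runners),
        "expert_installs": len(experts),
        "vouches": sum(r.vouched for r in runners),
        "avg_crashes": sum(r.crashes for r in runners) / len(runners) if runners else 0.0,
    }

# Example: three peers report on a newly downloaded program.
reports = [PeerReport(True, True, True, 0),
           PeerReport(True, False, False, 2),
           PeerReport(False, False, False, 0)]
print(summarize("newapp-1.0", days_available=14, reports=reports))

A real deployment would need authentication and privacy safeguards for the reports; the point of the sketch is only that the dashboard itself reduces to summary statistics over peer observations.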

Virtual machines.

For those people who simply want their PCs to operate reliably, a medium-term solution may lie in technologies that allow mission-critical work to be isolated from the whimsical, experimental activities that might be dangerous—or might become the next key use of the Internet. Computer scientist Butler Lampson and others are developing promising architectures that allow single PCs to have multiple zones, two or more virtual machines running within one box. A meltdown in the red, experimental zone cannot affect the more-secure green zone, and thus the consumer is spared having to choose between a generative box and an appliance. Tax returns and important documents go in green; Skype starts out in red and then moves over only when it seems ready for prime time.
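The essay leaves the promotion rule from red to green unspecified. As a toy illustration only (the thirty-day crash-free trial below is an invented criterion, not part of Lampson's architecture), a zone-assignment policy might look like this:

GREEN, RED = "green", "red"

def assign_zone(days_installed: int, observed_crashes: int,
                trial_days: int = 30) -> str:
    """Toy policy: software starts in the experimental red zone and is
    promoted to the trusted green zone only after a crash-free trial."""
    if days_installed >= trial_days and observed_crashes == 0:
        return GREEN
    return RED

print(assign_zone(days_installed=90, observed_crashes=0))  # green: proven stable
print(assign_zone(days_installed=5, observed_crashes=0))   # red: still on trial

The interesting design question is the policy itself: promote too eagerly and the green zone inherits the red zone's risks; promote too slowly and experimental software never gets its shot at prime time.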

More help from ISPs.

Maintaining security of a generative system is by its nature an ongoing process, one requiring the continuing ingenuity of those who want it to work well and the broader participation of others to counter the actions of a determined minority to abuse it. If the network is completely open, the end points can come under assault. If the end points remain free as the network becomes slightly more ordered, they act as safety valves should network filtering begin to block more than bad code. Today ISPs turn a blind eye to zombie computers on their networks, so they do not have to spend time working with their subscribers to fix them. Whether through new industry best practices or through a rearrangement of liability requiring ISPs to take action in the most flagrant and egregious of zombie situations, we can buy another measure of time in the continuing cat-and-mouse game of security.

Network neutrality for mashups.

Those who provide content and services over the Internet have lined up in favor of “network neutrality,” by which ISPs would not be permitted to disfavor certain legitimate content that passes through their servers. Similarly, those who offer open APIs on the Internet ought to be application neutral, so all those who want to build on top of their interfaces can rely on certain basic functionality.

Generative systems offer extraordinary benefits. As they go mainstream, the people using them can share some sense of the experimentalist spirit that drives them. The solutions above are sketched in the most basic of terms, but what they share is the idea that for the generative Internet to save itself, it must generate its own solutions. The more we can maintain the Internet as a work in progress, the more progress we can make.

Four Elements of Generativity

Four main features define generativity: (1) how strongly a system or technology leverages a set of possible tasks; (2) its adaptability to a range of tasks; (3) its ease of mastery; and (4) its accessibility. The greater the extent to which these features are represented in a system, the more readily it can be changed in unanticipated ways—and the more generative it is. For example, many tools can be leveraging and adaptable but are difficult to master—thus decreasing generativity.

Leverage.

Generative systems make difficult jobs easier. The more effort they save, and the greater the number of instances in which their use can make a difference to someone, the more generative they are. Leverage is not exclusively a feature of generative systems; nongenerative specialized technologies (a plowshare, for instance) can provide great leverage for the tasks they’ve been designed to perform.

Adaptability.

Adaptability applies to both the breadth of a system’s uses without change and the ease with which it can be modified to broaden its range of uses. Adaptability is a spectrum—a technology that offers hundreds of different kinds of uses is more adaptable, and thus more generative, than a technology that offers fewer.

Ease of mastery.

How easy is it for broad audiences to both adopt and adapt a technology? An airplane is neither easy to fly nor simple to modify for new purposes. Paper, on the other hand, can be readily mastered and adapted—whether to draw on or to fold into airplanes. The skills needed to use many otherwise-generative technologies may be hard to absorb, requiring apprenticeship, formal training, or long practice.

Accessibility.

The easier it is to obtain the technology, tools, and information necessary to achieve mastery—and convey changes to others—the more generative a system is. Barriers to access include the sheer expense of producing (and therefore consuming) the technology; taxes and regulations surrounding its adoption or use; and secrecy or obfuscation that its producers wield in order to maintain scarcity or control.
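To see how the four elements interact, consider a toy scoring model; the geometric mean below is an invented illustration, not a formula from the essay. Its one virtue is that it captures the point above: a technology strong on leverage and adaptability but weak on ease of mastery or accessibility scores low overall.

from math import prod

def generativity(leverage: float, adaptability: float,
                 mastery: float, accessibility: float) -> float:
    """Each input is a judgment call in [0, 1]. A geometric mean
    rewards balance: one near-zero axis sinks the whole score."""
    return prod((leverage, adaptability, mastery, accessibility)) ** 0.25

print(round(generativity(0.9, 0.9, 0.8, 0.9), 2))  # PC-like profile: ~0.87
print(round(generativity(0.9, 0.3, 0.1, 0.2), 2))  # airplane-like profile: ~0.27

The comparisons that follow can be read as informal versions of exactly this exercise.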

What’s Generative and What’s Not?

Legos and a dollhouse.

Legos are highly adaptable, accessible, and easy to master. They can be built, deconstructed, and rebuilt into whatever form the user wishes, and third parties can publish “recipes” for new forms. (To be sure, Legos are still usually only toys.) The less-generative dollhouse supports imaginative play but is itself unmodifiable.

Hammer and jackhammer.

A hammer is accessible, easy to master, and useful in any number of household tasks. A jackhammer is less broadly accessible, harder to master, and good only for breaking up asphalt, concrete, and stone.

PC and TiVo.

A PC is an adaptable multipurpose tool whose leverage extends through networked access to new software and other users. TiVo is an inflexible tethered appliance. Though based on the same technology as a PC, it can be modified only by its maker, restricting its uses to those that TiVo invents.

Bicycle and airplane.

Bicycles are accessible (there’s no license to pedal), relatively easy to master, and adaptable by large communities of avid users and accessorizing firms. Airplanes are highly useful for long-distance travel but not very accessible, adaptable, or easy to master.
http://harvardbusinessonline.hbsp.ha...requestid=6603





Doll Web Sites Drive Girls to Stay Home and Play
Matt Richtel and Brad Stone

Presleigh Montemayor often gets home after a long day and spends some time with her family. Then she logs onto the Internet, leaving the real world and joining a virtual one. But the digital utopia of Second Life is not for her. Presleigh, who is 9 years old, prefers a Web site called Cartoon Doll Emporium.

The site lets her chat with her friends and dress up virtual dolls, by placing blouses, hair styles and accessories on them. It beats playing with regular Barbies, said Presleigh, who lives near Dallas.

“With Barbie, if you want clothes, it costs money,” she said. “You can do it on the Internet for free.”

Presleigh is part of a booming phenomenon, the growth of a new wave of interactive play sites for a young generation of Internet users, in particular girls.

Millions of children and adolescents are spending hours on these sites, which offer virtual versions of traditional play activities and cute animated worlds that encourage self-expression and safe communication. They are, in effect, like Facebook or MySpace with training wheels, aimed at an audience that may be getting its first exposure to the Web.

While some of the sites charge subscription fees, others are supported by advertising. As is the case with children’s television, some critics wonder about the broader social cost of exposing children to marketing messages, and the amount of time spent on the sites makes some child advocates nervous.

Regardless, the sites are growing in number and popularity, and they are doing so thanks to the word of mouth of babes, said Josh Bernoff, a social media and marketing industry analyst with Forrester Research.

“They’re spreading rapidly among kids,” Mr. Bernoff said, comparing the enthusiasm to a virus. “It’s like catching a runny nose that everyone in the classroom gets.”

Hitwise, a traffic measurement firm, says visits to a group of seven virtual-world sites aimed at children and teenagers grew 68 percent in the year ended April 28. Visits to the sites surge during summer vacation and other times when school is out. Gartner Research estimates that virtual-world sites have attracted 20 million users, with those aimed at younger people growing especially quickly.

Even as the children are having fun, the adults running the sites are engaged in a cutthroat competition to be the destination of choice for a generation of Americans who are growing up on computers from Day 1.

These sites, with names like Club Penguin, Cyworld, Habbo Hotel, Webkinz, WeeWorld and Stardoll, run the gamut from simple interactive games and chat to fantasy lands with mountains and caves.

Evan Bailyn, chief executive of Cartoon Doll Emporium, said that when he created the site, “I thought it would be a fun, whimsical thing.” Now, he says, “it’s turned into such a competitive thing,” adding that “people think they are going to make a killing.”

Even Barbie herself is getting into the online act. Mattel is introducing BarbieGirls.com, another dress-up site with chat features.

In recent months, with the traffic for these sites growing into the tens of millions of visitors, the entrepreneurs behind them have started to refine their business models.

Cartoon Doll Emporium, which draws three million visitors a month, is free for many activities but now charges $8 a month for access to more dolls to dress up and other premium services. WeeWorld, a site aimed at letting 13- to 25-year-olds dress up and chat through animated characters, recently signed a deal to permit the online characters to carry bags of Skittles candy, and it is considering other advertisers.

On Stardoll, which has some advertising, users can augment the wardrobe they use to dress up their virtual dolls by buying credits over their cellphones. At Club Penguin, a virtual world with more than four million visitors a month, a $5.95-a-month subscription lets users adopt more pets for their penguin avatars (animated representations of users), which can roam, chat and play games like ice fishing and team hockey.

Lane Merrifield, chief executive of Club Penguin, which is based in Kelowna, British Columbia, said that he decided on a subscription fee because he believed advertising to young people was a dangerous proposition. Clicking on ads, he said, could bring children out into the broader Web, where they could run into offensive material.

Mr. Merrifield also bristles at any comparison to MySpace, which he said is a wide-open environment and one that poses all kinds of possible threats to young people.

To make Club Penguin safe for children, the site uses a powerful filter that limits the kinds of messages users can type to one another. It is not possible, Mr. Merrifield said, to slip in a phone number or geographic location, or to use phrases or words that would be explicit or suggestive. Other sites are also set up to minimize the threat of troublesome interactions or limit what users can say to one another.
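The article describes what the filter blocks but not how. A common design for child-safe chat rejects digits outright and restricts every word to an approved vocabulary; the Python sketch below follows that general approach with an invented word list and is not a description of Club Penguin's actual filter.

import re

# Hypothetical whitelist; a real deployment would curate a large vocabulary.
ALLOWED_WORDS = {"hi", "hello", "lets", "play", "ice", "fishing", "hockey", "fun", "bye"}

def is_sendable(message: str) -> bool:
    """Allow a message only if it contains no digits and every word
    is on the approved list."""
    if any(ch.isdigit() for ch in message):
        return False  # blocks phone numbers, addresses, ages
    words = re.findall(r"[a-z']+", message.lower())
    return bool(words) and all(w in ALLOWED_WORDS for w in words)

print(is_sendable("lets play hockey"))     # True
print(is_sendable("call me at 555 0199"))  # False: digits rejected
print(is_sendable("i live in dallas"))     # False: words not on the list

A whitelist this strict is what makes slipping a phone number or location past the filter so hard: anything not affirmatively approved simply will not send.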

“We’re the antithesis of MySpace,” Mr. Merrifield said. “MySpace is about sharing information. We’re all about not being able to share information.”

Other sites are more open, like WeeWorld, which permits people to create avatars, dress them up and then collect groups of friends who type short messages to one another. The characters tend to be cute and cartoonish, as do the home pages where they reside, but the chatter is typical teenager.

“There’s a lot of teasing and flirting,” said Lauren Bigelow, general manager of WeeWorld. She said that the site had around 900,000 users in April and is growing around 20 percent a month.

Ms. Bigelow said that 60 percent of WeeWorld users are girls and young women, a proportion that is higher on some other sites. Stardoll said that its users are 93 percent female, typically ages 7 to 17, while Cartoon Doll Emporium said that it is 96 percent female, ages 8 to 14.

Some of the companies are aiming even younger. The Ontario company Ganz has a hit with Webkinz, plush toys that are sold in regular stores and are aimed at children as young as 6. Buyers enter secret codes from their toy’s tag at webkinz.com and control a virtual replica of their animal in games. They also earn KinzCash that they can spend to design its home. The site draws more than 3.8 million visitors a month.
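The article implies a simple redemption flow: a one-time secret code on the toy's tag unlocks the matching virtual pet and some starting currency. The sketch below is a generic illustration with invented codes and amounts, not Webkinz's actual system.

# Hypothetical one-time codes; real codes would live in a server-side database.
UNREDEEMED = {"ABCD-1234": "golden retriever", "WXYZ-9876": "panda"}
STARTING_CASH = 2000  # invented starting balance of virtual currency

accounts: dict[str, dict] = {}

def redeem(user: str, code: str) -> str:
    """Bind the pet to the account, grant starting currency,
    and invalidate the code so it cannot be reused."""
    pet = UNREDEEMED.pop(code, None)
    if pet is None:
        return "invalid or already-used code"
    account = accounts.setdefault(user, {"pets": [], "cash": 0})
    account["pets"].append(pet)
    account["cash"] += STARTING_CASH
    return f"{user} adopted a {pet}"

print(redeem("presleigh", "ABCD-1234"))  # adopts the pet
print(redeem("copycat", "ABCD-1234"))    # code no longer valid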

Sherry Turkle, a professor at the Massachusetts Institute of Technology who studies the social aspects of technology, said that the participants on these sites are slipping into virtual worlds more easily than their parents or older siblings.

“For young people, there is rather a kind of fluid boundary between the real and virtual world, and they can easily pass through it,” she said.

For some children, the allure of these sites is the chance to participate and guide the action on screen, something that is not possible with movies and television.

“The ability to express themselves is really appealing to the millennial generation,” said Michael Streefland, the manager of Cyworld, a virtual world that started in South Korea and now attracts a million users a month in the United States, according to comScore, a research firm. “This audience wants to be on stage. They want to have a say in the script.”

But Professor Turkle expressed concern about some of the sites. She said that their commercial efforts, particularly the advertising aimed at children, could be crass. And she said that she advocates an old-fashioned alternative to the sites.

“If you’re lucky enough to have a kid next door,” she said, “I’d have a play date instead of letting your kid sit at the computer.”
http://www.nytimes.com/2007/06/06/te...gy/06doll.html





Kids Socialize in a Virtual World as Avatars
Yinka Adegoke

Children have always enjoyed make-believe. Now, some new Web sites are letting them live out their fantasies in virtual worlds using self-designed avatars.

Unlike the often-violent world of videogames, virtual sites such as Stardoll, Doppelganger, Club Penguin and Gaia Online hark back to a more innocent time of tea parties and playing outdoors -- and they are winning young users in droves.

The success of Second Life, one of the most popular virtual lifestyle sites for adults, with even its own banks and real estate agents, has helped to raise interest in the genre.

At Stardoll, young girls can create their own online 'MeDoll' identities from a template that allows the user to choose everything from skin tone to eyebrow shapes.

Most important, it allows the user to dress up their avatar in the latest teen fashion.

Stockholm-based Stardoll, which only started four years ago, has been a huge hit with girls aged seven to 17 years.

The founders say they have over 7 million users in dozens of countries, even though Stardoll only very recently became available in four languages other than English.

"Role-playing is a hugely important part of growing up, especially for girls," said Matteus Miksche, chief executive of Stardoll, whose backers include venture capital firms Sequoia Capital and Index Ventures.

As with many new-generation Web sites popular with youngsters, the attraction is as much in the ease of use as in the ability to interact with others.

Stardoll users can measure their popularity by the number of "friends" they accumulate on their page, just as they might do on MySpace or Facebook -- except the photos are not real.

And as in real life, popularity has its benefits.

One of the most obvious on Stardoll is that you can be a "cover girl" of the Stardoll fashion magazine by getting the most votes for your MeDoll.

Fashion is important for young girls who buy the latest clothes and accessories from the various virtual stores in Stardoll with made-up fashion brands.

Virtual Hollywood

Another virtual life site, Doppelganger, was built to support music, media and fashion.

Aimed at a slightly older crowd than Stardoll, Doppelganger lets users throw parties and attend live shows and recordings of talk shows.

Doppelganger has tied up with major fashion and entertainment names in the real world, including ex-model and talk show host Tyra Banks and youth fashion brands like Rocawear and Kitson, the Los Angeles brand made famous by Paris Hilton.

Acts like Maroon 5 and the Pussycat Dolls have also performed on Doppelganger and given virtual interviews.

"The fashion element is a big part of Doppelganger," said founder Tim Stevens. "It's like a virtual Hollywood."

Most of these sites also have virtual economies with their own currencies, which users can earn or buy with their parents' credit cards, other online payment systems or premium text messages widely available in Europe.

On Gaia Online, another rapidly growing site, users earn 'gold' to buy virtual goods -- usually clothes for their avatar. "Up to 99 percent of the experience online is free at Gaia," said Gaia Online Chief Executive Craig Sherman.

"In a world where teens are constantly packaging and branding themselves, whether it's on MySpace or in their high school, Gaia is a place for them to get away from it all to just hang out and be yourself," Sherman said.

Stardoll's Miksche agrees, saying the young visitors to his site are not in as much of a hurry to grow up as adults might think. For his users, he says, it's more important to be who you want to be.

"Part of our success is that some users are maybe getting tired of having pages where they feel forced to look sexy or cool or write some outrageous stuff in order to stand out."
http://www.reuters.com/article/techn...18618120070601





A Jazzman So Cool You Want Him Frozen at His Peak
Terrence Rafferty

AT the very end of Bruce Weber’s seductive, unsettling “Let’s Get Lost,” the movie’s subject, the semi-legendary cool-jazz trumpeter and singer Chet Baker, looks back on the shooting of the film and says, in a quavery, almost tearful voice, “It was a dream.” Although in the preceding two hours Mr. Baker has delivered a fair number of dubiously reliable utterances, you’re inclined to believe him on this one because that’s what the movie feels like to the viewer too. It’s nominally a documentary (Oscar-nominated in that category in 1989), but it documents something that only faintly resembles waking reality. And Mr. Baker, who wanders through “Let’s Get Lost” with the eerie deliberateness of a somnambulist, appears to be a man who knows a thing or two about dreams.

Film Forum, which gave the movie its New York premiere 18 years ago, is reviving it for a three-week run (beginning Friday) in a restored 35-millimeter print, and Mr. Weber’s black-and-white hipster fantasia is as beautiful, and as nutty, as ever. Now, as in 1989, the filmmaker seems bent on stopping time in its tracks, preserving the illusion that nothing important has changed since the early 1950s, when Mr. Baker was a handsome young man with a sweet-toned horn, the great white hope of West Coast jazz.

He doesn’t look the same of course; actually he looks like hell. When “Let’s Get Lost” was shot, Mr. Baker was in his late 50s, and after 30-plus years of dedicated substance abuse (he wasn’t picky about the substance, though heroin was generally his first choice) his face is ravaged, cadaverous — groovy in entirely the wrong way. He often appears to be having some difficulty remaining awake, even while he’s performing, whispering standards like “My Funny Valentine” at tempos so languid that the songs kind of swirl and hover in the air like cigarette smoke until they finally just drift away.

The really peculiar thing about “Let’s Get Lost” is that its subject’s physical decrepitude and narcoleptic performance style seem not to bother Mr. Weber at all. This isn’t one of those documentaries that poignantly contrast the beauty and energy of youth with the sad debilities of age. Far from it. The picture cuts almost randomly between archival clips and 1987 footage to create a sort of perverse continuum, a frantic insistence that the essence of Chetness is unvarying, eternal. And you can’t always see much difference between the young Mr. Baker and the old. Even in his prime his cool was so extreme that he often looked oddly spectral, like someone trapped in a block of ice.

Chet Baker’s brand of frosty hipness was, in the ’50s, considered a sexy alternative to that era’s prevailing ethos of earnest, striving respectability (at least until rock ’n’ roll, which was more fun, came along). Maybe you had to have grown up in that nervous decade, as Mr. Weber did, to find Mr. Baker’s ostentatious laid-backness subversive, to imbue it with so much bad-boy allure. Mr. Weber, who is also a fashion photographer, is a glamorizer both by trade and by nature, and when something imprints itself as strongly on his fantasy life as the image of the young Chet Baker clearly did, he holds onto it tightly — cherishes it, embellishes it, uses it to transport himself back to his own hard-dreaming youth. “Let’s Get Lost” is his “Remembrance of Things Past,” with this strung-out trumpeter as his madeleine.

And what Mr. Weber winds up doing in this original, deeply eccentric movie is giving Mr. Baker a luxurious fantasy world to live in, a holiday condo of the imagination, where age and time are utterly irrelevant. The filmmaker supplies his weary but grateful subject with a ready-made entourage of shockingly good-looking young people (Chris Isaak and Lisa Marie among them) who, in shifting combinations, drink with him, ride in snazzy convertibles with him, giggle with him on a bumper-car ride and gaze on him reverently while he croons a breathy tune in the recording studio. Mr. Weber takes him to the Cannes Film Festival, where paparazzi surround him and he sings “Almost Blue” for celebrities at a glittery party. No wonder Mr. Baker gets misty-eyed at the end of the movie. The dream he’s been dreaming, courtesy of his devoted director, is the sweetest one a performer could ask for in his declining years: the dream that he still matters.

Chet Baker hadn’t mattered for a while when Mr. Weber was filming him. The movie’s release rekindled a bit of interest in his music, partly because he was dead by the time it came out: In 1988, at the age of 58, he was found on the street in Amsterdam, having apparently fallen from the window of his hotel. He’s practically forgotten now.

Jazz history hasn’t been kind to him; his talent, though real, was thin. Unlike his rival Miles Davis, he persisted, with a stubbornness that suggests a fairly serious failure of imagination, in playing the cool style long past the point at which it had begun to sound mannered and even a little silly. When you hear Mr. Baker’s stuff, you can’t help picturing his ideal listener as one of those lupine swingers of the Playboy era, decked out in a velvet smoking jacket and loading smooth platters onto the hi-fi to get a hot chick in the mood for love. The ’50s die before your eyes in “Let’s Get Lost.” It feels like the last stand of something that may not have been worth fighting for in the first place.

In a funny way, the movie gives the lie to the nostalgic illusions it seems to want to embody, just because the construction of this fragile, faded jazzman as the epitome of cool is so elaborate and so obviously effortful. It’s killing work to be this cool. When Mr. Weber starts interviewing people who loved the musician not from afar, as he did, but from too close — his bitter wife, a few girlfriends, three of his neglected kids — you see how tough it’s been: how many drugs it took, how much willful indifference, how much hollowing out of whatever self may once have inhabited the pale frame of Chet Baker.

The enduring fascination of “Let’s Get Lost,” the reason it remains powerful even now, when every value it represents is gone, is that it’s among the few movies that deal with the mysterious, complicated emotional transactions involved in the creation of pop culture — and with the ambiguous process by which performers generate desire. Mr. Baker isn’t so much the subject of this picture as its pretext: He’s the front man for Mr. Weber’s meditations on image making and its discontents.

If you want the true story of Chet Baker, you’d do better to look up James Gavin’s superb, harrowing 2002 biography, “Deep in a Dream: The Long Night of Chet Baker,” where you can also find, in the words of a pianist named Hal Galper, perhaps the most perceptive review of Mr. Weber’s slippery movie. “I thought it was great,” Mr. Galper says, “because it was so jive. Everybody’s lying, including Chet. You couldn’t have wanted a more honest reflection of him.” That’s “Let’s Get Lost,” to the life: the greatest jive movie, or maybe the jivest great movie, ever made.
http://www.nytimes.com/2007/06/03/movies/03raff.html





Still Needing, Still Feeding the Muse at 64
Allan Kozinn

THE first video from Paul McCartney’s new album, “Memory Almost Full,” is an otherworldly fantasy, directed by the French filmmaker Michel Gondry, in which a postman brings this former Beatle a box with an old mandolin and, it turns out, an assembly of mischievous ghosts. As Mr. McCartney plays “Dance Tonight,” with its simple percussion and bright pop melody, the ghosts — including one played by Natalie Portman — leap around him, throw sparkling fireballs and scare off the postman. Mr. McCartney later follows them into the box, and as the clip ends, he is seen jamming with them, playing the drums.

Surreal as the video is, it says a lot about what Mr. McCartney is up to on “Memory Almost Full,” to be released on Tuesday on the Hear Music label, a joint venture between Starbucks and the Concord Music Group. The ghosts may terrify the postman, but Mr. McCartney happily cavorts with them. And while the ghosts don’t seem to be from Mr. McCartney’s past, his comfort with them suggests the ease with which his history informs many of the songs on the album, including a suite that moves from childhood memories to thoughts of death. He is describing “Memory Almost Full” as a “rather personal” album.

It almost wasn’t an album at all. Mr. McCartney began recording it at the end of 2003 with his touring band but abruptly shelved the project. It wasn’t that he was dissatisfied with the music, he said in a telephone interview from his recording studio in Sussex, England; but he had wanted to work with Nigel Godrich, Radiohead’s longtime producer. When Mr. Godrich became available, Mr. McCartney decided to start fresh and to play all the instruments himself. That collaboration yielded “Chaos and Creation in the Backyard” in 2005.

The tapes for “Memory Almost Full” languished as he moved on to other things, including his divorce from Heather Mills and the latest in his growing series of classical scores, “Ecce Cor Meum,” a large work for chorus and orchestra dedicated to the memory of his first wife, Linda, who died in 1998.

Then he remembered the recordings he had filed away.

“I realized that I didn’t want to have any unfinished work lying around,” he said.

His first move was to summon David Kahne, who produced his “Driving Rain” CD (2001) and the early recordings for the shelved album. It wasn’t Mr. McCartney’s plan to record the rest of the album without his band, but with a studio at his house it was hard to resist wandering out to finish tracks he was working on whenever the mood took him, and in the end he played all the instruments on about half the tracks.

“Memory Almost Full” is a change for Mr. McCartney, although not primarily in musical ways. It has, after all, hints of everything from the sound of his 1970s band, Wings, to echoes of relatively recent work like “Flaming Pie,” from 1997, and Mr. McCartney seems to have steadfastly avoided hopping on current pop music trends.

Still, he wanted to shake up his approach to releasing an album. The video made its debut on YouTube. And having been an EMI artist since the Beatles signed with the company in 1962 (apart from a series of American releases on Columbia in the 1980s), he moved to Hear Music, hoping to draw on the eagerness and energy of an upstart label.

“Am I feeling like I’ve left the family home?” Mr. McCartney said, when asked if switching labels was traumatic. “I have left the family home, but it doesn’t feel bad. I hate to tell you — the people at EMI sort of understood. The major record labels are having major problems. They’re a little puzzled as to what’s happening. And I sympathize with them. But as David Kahne said to me about a year ago, the major labels these days are like the dinosaurs sitting around discussing the asteroid.”

Although Hear Music has collaborated with other labels on projects ranging from Ray Charles’s “Genius Loves Company” to a recent compilation of John Lennon tracks, Mr. McCartney is the first artist signed to it directly. To celebrate his album’s release Starbucks is having what it is calling a global listening event: The album will be played around the clock on Tuesday in more than 10,000 Starbucks stores in 29 countries. Based on its high-volume traffic — some 44 million customers a week — the company expects about six million people to hear the music that day. Starbucks’ channel on XM satellite radio will also be promoting the record heavily, and XM will devote another channel exclusively to Mr. McCartney’s music on the release day.

“We got a call saying that Paul McCartney was interested in talking to us,” said Ken Lombard, president of Starbucks Entertainment, “and after we picked ourselves up off the floor, we met with him in London and had a pretty in-depth conversation about who we are as a company, and about our commitment to music.” He added that the company told Mr. McCartney it could “bring more exposure to this album than to other projects he’s done.”

Hear Music is releasing “Memory Almost Full” simultaneously on CD and as digital downloads, through iTunes. Mr. McCartney has already made several songs available online, both through iTunes (his Live8 performances, for example) and on his own Web site, paulmccartney.com. But this is his first full album available digitally, and iTunes is offering a version with bonus tracks, and, for preorders, the “Dance Tonight” video.

But for all that, his defection from EMI is less than it seems: these days he works album by album and can take each project wherever he chooses. He also controls his recordings, including his back catalog, so if he returns to EMI, he could take “Memory Almost Full” with him. On the other hand, if his success with Hear Music is such that he decides to stay with the label, he could bring his back catalog to it. For the moment he is leaving the older recordings in the custody of EMI, which has just announced a plan to reissue his complete back catalog through iTunes and on remastered CDs.

Mr. McCartney said the Beatles catalog would make its way to iTunes also, but he would not say when, other than to quote the early Beatles song “It Won’t Be Long.”

With his past work about to flood into the latest music distribution pipeline, it is probably fitting that his new album has a nostalgic quality. The title, “Memory Almost Full,” touches both aspects: a computer term, it also hints at the well of remembered experience that Mr. McCartney draws on here.

“There is quite a bit of retrospective stuff,” he acknowledged, “and looking at that, I thought, ‘Whoa, I wonder if there’s any particular reason?’ But then I thought, when I was writing ‘Penny Lane,’ that was me in my early 20s writing about when I was 15, 16. That’s retrospective. It’s a natural thing, I think, for a creative artist. Because the past, in a way, is all you have.”

“I didn’t sit down to write a personal album,” he added, “but sometimes you can’t help what comes out. I’m a great believer in that.”

By personal he means that several of the songs — particularly those in a suite near the end of the 13-song disc — look back at earlier times in his life, starting with his childhood in “That Was Me,” a tour of a family photo album. But personal has limits for him. He’s not singing about his marital woes, so don’t expect “Memory Almost Full” to be Mr. McCartney’s version of “Across a Crowded Room,” Richard Thompson’s searing 1985 divorce album, an example of the genre at its venomous best.

Mr. McCartney’s specialty has been the opposite: love songs, of which there are several on the new album. When advance copies leaked on the Internet last month, some listeners interpreted songs that mix love and nostalgia — “Vintage Clothes,” “See Your Sunshine” and “Gratitude” — as hymns to Linda, who has attained sainted status in the McCartney myth, much as John Lennon has in the Beatles’ legend.

“Funny that, isn’t it?” Mr. McCartney said. “ ‘Gratitude’ is just me being grateful for the good stuff in my life, past and present. That’s the thing about me, when I talk about love, it’s often general, it’s not always specific. If people think these songs are specific to Linda, that wouldn’t be true. But they’re pertaining to Linda, or my children, or other things in life for which I feel grateful. So she’s certainly in there.

“I don’t really mind how people interpret my songs. But I don’t want to have to say, ‘Yes, you’re right.’ I’d more gladly say, ‘Yeah, you’re partially on the button, but it means a whole bunch of other things.’ ”

At 64 (he turns 65 on June 18) Mr. McCartney is reportedly a billionaire and could easily settle into retirement at his Sussex estate. If anything, he seems to be expanding his one-man entertainment franchise.

In recent years he has published a poetry book, “Blackbird Singing,” and written an illustrated children’s book, “High in the Clouds,” with Geoff Dunbar and Philip Ardagh. He has mounted shows of his paintings and continued the series of classical works that began with “The Liverpool Oratorio” in 1991. And he is working on a guitar concerto.

Even so, he has been thinking about mortality. In “The End of the End” he imagines his death and sings, “on the day that I die, I’d like jokes to be told, and stories of old to be rolled out like carpets.” The idea, he said, came to him after he read a quotation that began, “On the day that I die”; he thought that taking on mortality so directly was brave.

Oddly, given that the album was recorded over four years with a long hiatus, partly with a band and partly by Mr. McCartney on his own, “Memory Almost Full” has a consistent sound and feel. It has a simplicity that gives it a rougher, rockier, more homespun sound than most of his recent albums.

That he overdubbed all the instruments on seven of the songs might have something to do with it; but it doesn’t explain why the six songs recorded with his band sound of a piece with them. That was something Mr. McCartney hoped for, but he isn’t sure how it worked out that way.

“I’m not that analytical,” he said. “I just do what feels right. And there’s a lot of crossing fingers: ‘I hope this works.’ ”

What does he mean when he says he hopes it works?

“That it sounds absolutely wonderful, and that I’m thrilled to listen to it. And that the feedback I get is great. Occasionally the feedback isn’t great.

“I’ve worked quite a lot in the past with Richard Rodney Bennett, who’s a great composer and orchestrator, and he once said to me that his greatest fear is ‘being found out.’ So many artists I know have that essential lack of confidence. Perhaps it’s what drives them. So for me it’s always a pleasant surprise when it works.”
http://www.nytimes.com/2007/06/03/ar...ic/03kozi.html





Why They Booed Her in Mexico
Marc Lacey

NOWHERE in the United States Constitution is there any mention of Miss U.S.A. She has no authority to declare war. She does not build border walls or round up undocumented immigrants. Those things are left to others, none of whom wear a sash.

But that fact seemed to get lost during the recent Miss Universe pageant, when Mexicans greeted Rachel Smith, Miss U.S.A., with one chorus after another of boos. Pageant officials said Ms. Smith, 22, was rattled by the denunciations, which echoed other booing she had received during her monthlong stay in Mexico, notably when she showed off a sleek, white Elvis outfit as her national costume on a runway on one of Mexico City’s grand avenues.

On pageant night, the wrath continued. As Ms. Smith was chosen for the final five, despite an awful fall in her evening dress, the crowd grew more boisterous, especially because Miss Mexico, Rosa María Ojeda Cuen, had been eliminated. Donald Trump, who owns the pageant, said he was nervous the audience might storm the stage. “The level of hostility was amazing,” he said, comparing it to the fury on display at the end of a disputed prizefight.

Mario López, the TV actor who was the show’s host, did his best to calm the crowd during a commercial break. “I said in Spanish: ‘Hey, listen, Mexico, the world is watching. Let’s show the world we’re really good hosts,’ ” he recalled.

The problem was that this was no simple matter of bad manners toward a guest, but an upwelling of national angst, many Mexicans will tell you.

Mexicans admire the United States and loathe it in the very next breath. Well-heeled Mexicans struggle to get their little ones into American schools. Down-and-out Mexicans risk their lives to cross the border. Yet all still refer to those from El Norte as “gringos,” a term that dates back to the days when American troops were on Mexican soil.

“This is a symptom of Mexico’s schizophrenia when it comes to the United States,” said Jorge G. Castañeda, a former foreign minister of Mexico who is now a professor at New York University. “We are on the one hand more linked than ever to the United States — sometimes for better and sometimes for worse — and at the same time we are now more irreverent, discourteous and inhospitable, which is an un-Mexican sentiment.”

It is not easy to live life attached to a behemoth, Mexicans chronically complain. One’s culture is often eclipsed. One has to stand by as people who live in the former Mexican territories of Texas, California, New Mexico and Arizona speak ill of Mexico.

So Mexicans miss no chance to stick it to the States.

The last time they hosted the Miss Universe pageant, in 1993, the same thing occurred. Miss Mexico did not make the semifinals. Mexicans took out their anger by booing Kenya Moore, that pageant’s Miss U.S.A.

Three years ago, Mexican soccer fans began shouting “Osama! Osama!” when the United States soccer team faced Mexico in an Olympic qualifying match. When Mexico won, the revelry was intense.

So, Mexicans say, the booing at this pageant was never about Miss U.S.A. herself. It was those letters on her sash.

“This was about immigration and so many things,” said Nicolas Corte, 23, a student who was in the crowd when Ms. Smith was booed in her Elvis outfit. “She represented the United States and many people are thinking negative things about the country right now.”

Mr. Corte and others said the complaints included arrogance by the Bush administration and frustration over American immigration policy, the war in Iraq and the historical grievances Mexico harbors against its neighbor.

The Woodrow Wilson Center in Washington will hold a conference this week titled, “The United States and Mexico: Strategic Partners or Distant Neighbors?” It will bring together officials from both countries, who will no doubt agree that the answer to the question is both.

Perhaps they ought to invite Ms. Smith, an aspiring journalist, who is a bit down on Mexico right now. She said in an interview that she had vacationed in Mexico several times before the pageant but would wait a good while before going back.

“I knew it wasn’t about me, a 22-year-old girl from a small town in Tennessee who just wants to help the world,” she said by phone. “But you can’t help but take it personally.”

She may have missed that there was applause mixed in with the booing when she picked herself back up from her fall, which some Mexicans pointed to as a reflection perhaps of the other side of the story — the admiration and respect that many Mexicans had for Ms. Smith and, alongside their frustration, for her country.

“I was embarrassed that my countrymen were booing,” said Javier Razo, 57, a businessman who was in the rowdy auditorium. “If it was a speech by a politician, I could understand it. But this was a pageant. I hope she knows it wasn’t about her at all.”
http://www.nytimes.com/2007/06/03/we...w/03lacey.html





"Pirates," "Knocked Up" Lead Box Office
Dean Goodman

"Pirates of the Caribbean: At World's End" led the North American box office for a second weekend, while the new pregnancy comedy "Knocked Up" delivered a surprisingly large bundle of joy.

According to studio estimates issued on Sunday, the third installment in Walt Disney Co.'s buccaneering franchise sold $43.2 million worth of tickets in the three days beginning June 1. Its 10-day total stands at $216.5 million. By comparison, the second Pirates film, "Dead Man's Chest," had earned $258 million at the same point in its run last year.

Worldwide, the Johnny Depp adventure has earned $625.3 million, and will soon pass the $653 million total of the first Pirates film -- 2003's "The Curse of the Black Pearl." "Dead Man's Chest" topped out at $1.1 billion.

"Knocked Up" squeezed out $29.3 million, equivalent to its production budget. The film's distributor, Universal Pictures, had hoped the acclaimed comedy would open in the same $21 million range as director Judd Apatow's previous film, 2005's "The 40-Year-Old Virgin." The Steve Carell comedy went on to make $109 million domestically.

The new film stars Katherine Heigl ("Grey's Anatomy") as an entertainment journalist impregnated during a drunken one-night-stand with a slacker, played by Seth Rogen.

Exit surveys provided by Universal indicated that 57 percent of the audience was female, and 44 percent were under 30. It is also the best-reviewed wide release so far this year with raves from 92 percent of critics, according to Rotten Tomatoes (http://www.rottentomatoes.com), a Web site that tabulates reviews.

Amid a proliferation of family-friendly sequels, the film was "absolutely a breath of fresh air for the target audience," said Nikki Rocco, Universal's president of domestic distribution.

Universal, a unit of General Electric Co.'s NBC Universal Inc., has largely been missing in box office action this year. The studio and Carell will return on June 22 with "Evan Almighty," whose reported $175 million budget makes it the costliest comedy of all time.

DreamWorks Animation SKG Inc.'s "Shrek the Third" slipped one spot to No. 3 with $26.7 million, taking its three-week haul to $254.6 million -- about $100 million off the pace of 2004's "Shrek 2."

Also new was the Kevin Costner thriller "Mr. Brooks," which opened at No. 4 with $10 million, in line with the modest expectations of its closely held distributor, Metro-Goldwyn-Mayer.

Costner plays a man held in high esteem by his community who is secretly a serial killer and is in turn pursued by a stalker. Critics were scathing, and exit surveys were not good.

"Spider-Man 3" fell two places to No. 5 with $7.5 million. The superhero franchise has earned $318.3 million after five weeks. The worldwide total stands at $844 million, surpassing the $821 million haul of 2002's "Spider-Man," the previous record-holder in the franchise. "Spider-Man 2," released in 2004, finished with $784 million worldwide. The series was released by Columbia Pictures, a unit of Sony Corp..
http://www.reuters.com/article/enter...34343720070603





Russia Demands Halt to Unlicensed Production of its Weapons

The production of Russian weapons in eastern Europe without licenses causes direct damage to the country's economy and interests, a first deputy prime minister said Friday.

"It is not a secret that such production is carried out in a number of eastern European countries, including NATO members," Sergei Ivanov said adding that the illegal production should be stopped.

He said this is "intellectual piracy," adding that these countries supply Russian weapons to world markets at dumping prices.

"Despite our appeals to these countries, we have received no reasonable answer," he said.

Russia says it suffers major losses from the counterfeit manufacture of Kalashnikov assault rifles in Bulgaria. The armies of 47 countries use the AK-47 assault rifle, known as the Kalashnikov after its designer, Mikhail Kalashnikov.

About 100 million AK-47s and modified versions are believed to circulate around the world, but many of them are produced illegally.

Bulgaria's Arsenal, whose license to produce Kalashnikov rifles expired a long time ago, displayed a wide range of counterfeit rifles at the DSA 2006 arms show in Malaysia.

Ivanov said earlier that the annual sales of unlicensed small arms on the international market totaled about $2 billion, with counterfeit Kalashnikov assault rifles accounting for 80-90% of the volume.

Kalashnikov producer Izhmash said that Russia accounts for only 10-12% of the million Kalashnikov rifles sold globally every year, with the rest being unlicensed copies.

It said there is no single licensing agreement conforming to international legal norms that specifically protects Russia's intellectual property rights in small arms and light weapons production.

Almost half of all NATO member countries have yet to sign intellectual property rights protection agreements with Russia, including Lithuania, Latvia, Canada, Iceland, Luxembourg, the Netherlands and Norway.
http://en.rian.ru/russia/20070316/62123005.html





Harvard Is Licensing More Than 50 Patents to a Nanotechnology Start-Up
Barnaby J. Feder

George M. Whitesides, a Harvard University chemist, is a renowned specialist in nanotechnology, a field built on the behavior of materials as small as one molecule thick. But there is nothing tiny about the patent portfolio that Harvard has amassed over the last 25 years based on work in his lab.

Today, Harvard and Nano-Terra Inc., a company co-founded by Professor Whitesides, plan to announce the exclusive licensing for more than 50 current and pending Harvard patents to Nano-Terra. The deal could transform the little-known Nano-Terra into one of nanotechnology’s most closely watched start-ups.

“It’s the largest patent portfolio I remember, and it may be our largest ever,” said Isaac T. Kohlberg, who has overseen the commercialization of Harvard’s patent portfolio since 2005. Nano-Terra, based in Cambridge, Mass., said that the patent filing and maintenance costs alone top $2 million.

Terms of the deal were not disclosed, but Harvard said that it would receive a significant equity stake in Nano-Terra in addition to royalties.

The patents cover methods of manipulating matter at the nanometer and micron scales to create novel surfaces and combinations of materials.

A nanometer is a billionth of a meter (proteins and the smallest elements in many microprocessor designs are measured in nanometers); a micron is 1,000 times larger (pollen and many single-celled animals are measured in microns). Such technology could lead to products to make better paints and windows, safer and cleaner chemicals, and more-efficient solar panels.

The patents cover virtually all nonbiological applications of work performed by Professor Whitesides and dozens of doctoral students over the last decade. The biology-related research — mostly in health care — had previously been licensed to other companies with ties to Professor Whitesides, including Genzyme, GelTex (sold to Genzyme for $1.2 billion in 1993), Theravance, and two privately held start-ups, Surface Logix and WMR Biomedical.

Nano-Terra, though, is selling no products. It is just offering manufacturing and design skills in realms where flexibility and low cost are crucial.

The best known patents cover soft lithography, Professor Whitesides’s method of depositing extremely thin layers of material onto a surface in carefully controlled patterns. It can work over larger surfaces than photolithography, which is widely used to make microchips. Perhaps even more intriguing, soft lithography can work on highly irregular or rounded surfaces where photolithography is all but impossible.

But while nanotechnology’s promise remains immense — the potential advances in energy, medicine and information technology have attracted billions of dollars in government and private investment in recent years — it is not yet clear which patents will prove valuable.

“You can’t just go to market with a huge patent portfolio and a promising pipeline but no revenues,” said Stephen B. Maebius, a patent lawyer in Washington and a nanotechnology expert. “That was the lesson of Nanosys,” he said, referring to the aborted 2004 public offering of a company based in Palo Alto, Calif., that was the highest-profile nanotechnology start-up backed by venture capital.

Nano-Terra was founded in 2005 with the goal of creating a home for the Whitesides patents. Its management team includes the vice chairman, Carmichael Roberts, a former student of Professor Whitesides’s and a partner with him in two other companies; the chief executive, Myer Berlow, a former AOL Time Warner marketing executive; and the president, Ueli Morant, a former marketing executive at I.B.M. and Philips Consumer Electronics.

Nano-Terra is part of a growing segment of nanotechnology start-ups. Other prominent academic researchers who have started nanotech companies include Chad A. Mirkin of Northwestern University (Nanosphere and NanoInk) and the late Richard E. Smalley of Rice University (Carbon Nanotechnologies). Other leading Harvard professors whose research has led them and the Harvard patent office into entrepreneurial nanotechnology include Thomas Rueckes (Nantero) and Charles M. Lieber and Hongkun Park (Nanosys).
http://www.nytimes.com/2007/06/04/te...gy/04nano.html





Patent Ruling Strikes a Blow at Qualcomm
Matt Richtel

Millions of new mobile phones containing certain Qualcomm semiconductors could be barred from import into the United States under a ruling issued Thursday by a federal government agency in a patent dispute.

Qualcomm said the ruling by the United States International Trade Commission, if it withstands an appeal, could prevent the importation into the United States of tens of millions of new mobile handsets designed for the Verizon, Sprint and AT&T Wireless networks.

The agency ruled that Qualcomm, a semiconductor company based in San Diego, had infringed on a key patent belonging to Broadcom, a competing chip company based in Irvine, Calif., that is used in the design of chips made for advanced 3G, or third-generation, smart cellphones. Qualcomm said that it planned to appeal immediately to the federal court to block the ruling. The company also said that it planned to appeal to President Bush, whose trade representative, Susan C. Schwab, has 60 days within which to veto the ruling. The company said it sought “to avoid irreparable harm to U.S. consumers” and injury to the economy.

Nancy Stark, a spokeswoman for Verizon Wireless, said that company would ask the White House to void the ruling and the federal appeals court to stay it. “It’s bad for the industry and bad for the wireless consumer,” she said. “It’s going to freeze innovation.”

Tim Luke, a telecommunications industry analyst with Lehman Brothers, said that while the I.T.C. ruling is “big,” it probably does not mean any disruption in handset supplies. Investors “will be looking for a settlement between Qualcomm and Broadcom,” he said, adding that Qualcomm also might be able to find a technological fix so that it does not use the technology covered by the Broadcom patent.

Qualcomm’s “fundamental business is really strong and they’ll have to think of a way to work around this,” Mr. Luke said.

The I.T.C. ruled that Qualcomm violated a Broadcom patent governing power management in cellphone chips. Broadcom asserts that Qualcomm is using the power management technology without paying licensing royalties.

“This is tremendously significant,” said David Rosmann, vice president of intellectual property litigation for Broadcom. “Qualcomm is either going to need to take a license or they will not be able to provide the next generation of handsets.”

The I.T.C. ruling affects only new models of handsets. Under the ruling, Qualcomm would still be permitted to deliver models that already are on the market as of June 7, whether or not they use the patented technology.

Qualcomm officials said in a conference call with investors late Thursday that the company had been negotiating with Broadcom to establish royalty rates. But the officials said the rates currently under discussion were so prohibitive that, if met, they would undermine Qualcomm’s business model.

Paul E. Jacobs, chief executive of Qualcomm, said the I.T.C. overstepped its statutory authority and that the decision had the potential to disrupt the supply of handsets in a way that could hurt wireless carriers and consumers.

The issue ruled on by the I.T.C. is part of a broader dispute between Qualcomm, one of the biggest makers in the world of chips for mobile phones, and Broadcom, which makes chips for many digital devices and is trying to gain more business in the mobile phone market. Last week, Broadcom prevailed in a jury trial in United States District Court in Santa Ana, Calif., in which it claimed Qualcomm had infringed three additional patents. The technology covered in those patents includes methods for transmission of high-speed data over mobile phones.

Broadcom has also sued Qualcomm over the patent covered in the I.T.C. ruling, but a federal court delayed ruling on the case until the I.T.C. acted. Four commissioners supported the decision, while two others recommended a more limited penalty be imposed on Qualcomm. Their opinions will be made public after both companies remove any confidential business information from them.
http://www.nytimes.com/2007/06/08/business/08phone.html





New Firm Eager to Slap Patents on Security Patches

Security researchers, are you tired of handing your vulnerability discoveries over to your employer, as if that were what you're paid to do? Helping vendors secure their products—for free—so that their users won't be endangered by new vulnerabilities? Showing your hacking prowess off to your friends, groveling for security jobs or selling your raw discoveries to middlemen for a fraction—a pittance—of their real value?

Take heart, underappreciated, unremunerated vassals, for a new firm is offering to work with you on a vulnerability patch that they will then patent and go to court to defend. You'll split the profits with the firm, Intellectual Weapons, if they manage to sell the patch to the vendor. The firm may also try to patent any adaptations to an intrusion detection system or any other third-party software aimed at dealing with the vulnerability, so rest assured, there are many parties from which to potentially squeeze a payoff.

Intellectual Weapons is offering to accept vulnerabilities you've discovered, as long as you haven't told anyone else, haven't discovered the vulnerability through illegal means and don't have any legal responsibility to tell a vendor about it.

Also, the vulnerability has to be profitable—the product must be "highly valuable," according to the firm's site, "especially as a percentage of the vendor's revenue." The product can't be slated for an upcoming phaseout—after all, the patent system takes, on average, seven years to churn out a new patent. The vendor has to have deep pockets so it can pay damages, and your solution has to be simple enough to be explained to a jury.

Because goodness, you will be looking at juries and lawyers, you can count on that. Intellectual Weapons says this isn't for everybody. The firm says it "fully [anticipates] major battles."

"We need people who have the emotional stability and the tenacity to persevere with each project—from describing the vulnerability, and helping develop the fix, through to generating and enforcing the IP," the firm states on its site.

Patenting may be a new twist, but the idea of profiteering from vulnerabilities is nothing new. iDefense Labs has its Vulnerability Contributor Program, and TippingPoint has its Zero Day Initiative. Even the Mozilla Foundation tried it, although of course the open-source software project dedicated funds only to bugs found in its own code.

The blogosphere is frothing.

"Nice. The race to the bottom started by [TippingPoint parent company] 3Com and [iDefense] is now complete. I for one hope that Matasano is able to use this idea in regards to a TippingPoint vulnerability," wrote Chris_BJune in a response to a blog from security firm Matasano's Thomas Ptacek.

According to Ptacek, the reasons nobody should care about Intellectual Weapons include the fact that the time required to complete a patent filing is over seven years. Add to that the years it will take to "initiate, litigate and prevail in a patent claim, especially against an established software vendor," Ptacek said. "Presuming you do prevail; you likely won't."

Intellectual Weapons has plans to deal with these inconveniences, however. The company says that it may try to use a Petition to Make Special in order to speed up the examination process when filing a U.S. patent. Another strategy the firm proposes using is to go after a utility model rather than a patent—a utility model being similar to a patent but easier to obtain and of shorter duration—typically six to 10 years.

"In most countries where utility model protection is available, patent offices do not examine applications as to substance prior to registration," the company says. "This means that the registration process is often significantly simpler, cheaper and faster. The requirements for acquiring a utility model are less stringent than for patents."

Ptacek calls utility models "patents-lite." Other nicknames are "petty patent," "minor patent" and "small patent." Such patent workarounds are available in some EU countries and other countries including Argentina, China, Malaysia, Mexico, Morocco, Philippines, Poland, Russia, South Korea and Uzbekistan.

"Would it be [possible] for an outfit like 'Intellectual Weapons,' exploiting the services of contingency-fee lawyers, to get an injunction against a Microsoft security fix in the Republic of Moldova? Anything's possible," Ptacek said.

He doesn't believe it will happen, however, given that international patents have to be fought jurisdiction by jurisdiction. "In this case, you'd be slogging through those fights for a shot at a tiny sliver of the revenue generated by the products you're targeting. This is nothing like NTP vs. RIM, where NTP's claims enabled RIM's entire product."
http://securitywatch.eweek.com/patch...ilities.html





Administration Seeks Overhaul of Patent System
Steve Lohr

The Bush administration wants to reform the nation’s patent system by requiring better information from inventors and allowing public scrutiny of applications, according to the director of the government’s patent office.

The goal, said Jon W. Dudas, director of the United States Patent and Trademark Office, is to improve the quality of patents, which should curb the rising wave of patent disputes and lawsuits. The legal wrangling is often over broad descriptions of ideas or activities, so-called business methods, or software that contains only incremental changes over prior work.

“There ought to be a shared responsibility for patent quality among the patent office, the applicants and the public,” Mr. Dudas said in an interview yesterday. “If everything is done right at the front end, we’ll have to worry a lot less about litigation later.”

Some steps to improve patent quality will require changes in the law, said Mr. Dudas, who will present his views to the Senate Judiciary Committee today. Both the Senate and the House have introduced patent-reform legislation this year, amid concerns that the current overburdened, litigation-choked system is hampering innovation rather than encouraging it.

One key change, Mr. Dudas said, would be a legal clarification of what is required of patent applicants. Under current law, an inventor is required to explain why a new product is sufficiently original to deserve the exclusive rights that patent protection conveys. But the applicants have a lot of discretion. The supporting information, Mr. Dudas said, ranges from “almost nothing” to what he called “malicious compliance,” which he described as boxes and boxes of background information intended mainly to obscure the nugget of an invention in the patent application.

Reform legislation, he said, should require the applicants to conduct a thorough search of related patents and technical journals, and then explain why the patent being sought represents a significant innovation beyond previous ideas in the field.

Mr. Dudas said the reform legislation should also make sure the search and information disclosures do not put an unfair burden on inventors who are not wealthy. Personal income, number of patents filed and other measures, he said, could be used to determine who would be exempt from certain requirements. “For the truly small inventor, we might do the search for them,” he said.

The patent office is experimenting with the concept of opening the examination process to outsiders, inviting public peer reviews. On June 15, Mr. Dudas said, the patent office will begin a pilot project for open reviews of software patents. The patents in the pilot program will be posted on a Web site, and members of the public with software expertise will be allowed to send the patent office technical references relevant to the patent claims.

But the pilot project applies only to patent applications in the field of information technology, and only with the approval of patent applicants. Legislative changes would be required to allow public peer reviews without an applicant’s approval, and thus to extend the concept to other fields.

The patent office has been putting its own quality initiatives in place in recent years. And it has hired more patent examiners, adding 1,200 examiners last year to bring its staff to more than 5,000. The percentage of patent applications approved in the first quarter of this year was 49 percent, down from 72 percent in 2000. “We’ve taken steps, and the result has been that a lot more patents are rejected,” Mr. Dudas said. “But those numbers also tell you there are a lot of bad patent applications.”

The patent office, said Josh Lerner, a professor at the Harvard Business School, has made a real effort to improve patent quality in the last few years. But Mr. Lerner questioned whether Mr. Dudas’s current proposals amounted to relying too much on getting better information from applicants.

Mr. Lerner said inventors are instinctive optimists who tend to believe that what they are doing is unique. Yet even discounting any self-serving bias, he said, the growing complexity of technology makes it more difficult for a single person — applicant or examiner — to assess the innovative merit of a patent claim.

“That’s why I think really opening the examination process to public peer review is so important,” Mr. Lerner said. “While the patent office has shown a willingness to experiment with openness, I would put that at the center.”
http://www.nytimes.com/2007/06/06/bu...nd-patent.html





Inside the Black Box
Saul Hansell

THESE days, Google seems to be doing everything, everywhere. It takes pictures of your house from outer space, copies rare Sanskrit books in India, charms its way onto Madison Avenue, picks fights with Hollywood and tries to undercut Microsoft’s software dominance.

But at its core, Google remains a search engine. And its search pages, blue hyperlinks set against a bland, white background, have made it the most visited, most profitable and arguably the most powerful company on the Internet. Google is the homework helper, navigator and yellow pages for half a billion users, able to find the most improbable needles in the world’s largest haystack of information in just the blink of an eye.

Yet however easy it is to wax poetic about the modern-day miracle of Google, the site is also among the world’s biggest teases. Millions of times a day, users click away from Google, disappointed that they couldn’t find the hotel, the recipe or the background of that hot guy. Google often finds what users want, but it doesn’t always.

That’s why Amit Singhal and hundreds of other Google engineers are constantly tweaking the company’s search engine in an elusive quest to close the gap between often and always.

Mr. Singhal is the master of what Google calls its “ranking algorithm” — the formulas that decide which Web pages best answer each user’s question. It is a crucial part of Google’s inner sanctum, a department called “search quality” that the company treats like a state secret. Google rarely allows outsiders to visit the unit, and it has been cautious about allowing Mr. Singhal to speak with the news media about the magical, mathematical brew inside the millions of black boxes that power its search engine.

Google values Mr. Singhal and his team so highly for the most basic of competitive reasons. It believes that its ability to decrease the number of times it leaves searchers disappointed is crucial to fending off ever fiercer attacks from the likes of Yahoo and Microsoft and preserving the tidy advertising gold mine that search represents.

“The fundamental value created by Google is the ranking,” says John Battelle, the chief executive of Federated Media, a blog ad network, and author of “The Search,” a book about Google.

Online stores, he notes, find that a quarter to a half of their visitors, and most of their new customers, come from search engines. And media sites are discovering that many people are ignoring their home pages — where ad rates are typically highest — and using Google to jump to the specific pages they want.

“Google has become the lifeblood of the Internet,” Mr. Battelle says. “You have to be in it.”

Users, of course, don’t see the science and the artistry that makes Google’s black boxes hum, but the search-quality team makes about a half-dozen major and minor changes a week to the vast nest of mathematical formulas that power the search engine.

These formulas have grown better at reading the minds of users to interpret a very short query. Are the users looking for a job, a purchase or a fact? The formulas can tell that people who type “apples” are likely to be thinking about fruit, while those who type “Apple” are mulling computers or iPods. They can even compensate for vaguely worded queries or outright mistakes.

“Search over the last few years has moved from ‘Give me what I typed’ to ‘Give me what I want,’ ” says Mr. Singhal, a 39-year-old native of India who joined Google in 2000 and is now a Google Fellow, the designation the company reserves for its elite engineers.

Google recently allowed a reporter from The New York Times to spend a day with Mr. Singhal and others in the search-quality team, observing some internal meetings and talking to several top engineers. There were many questions that Google wouldn’t answer. But the engineers still explained more than they ever have before in the news media about how their search system works.

As Google constantly fine-tunes its search engine, one challenge it faces is sheer scale. It is now the most popular Web site in the world, offering its services in 112 languages, indexing tens of billions of Web pages and handling hundreds of millions of queries a day.

Even more daunting, many of those pages are shams created by hucksters trying to lure Web surfers to their sites filled with ads, pornography or financial scams. At the same time, users have come to expect that Google can sift through all that data and find what they are seeking, with just a few words as clues.

“Expectations are higher now,” said Udi Manber, who oversees Google’s entire search-quality group. “When search first started, if you searched for something and you found it, it was a miracle. Now, if you don’t get exactly what you want in the first three results, something is wrong.”

Google’s approach to search reflects its unconventional management practices. It has hundreds of engineers, including leading experts in search lured from academia, loosely organized and working on projects that interest them. But when it comes to the search engine — which has many thousands of interlocking equations — it has to double-check the engineers’ independent work with objective, quantitative rigor to ensure that new formulas don’t do more harm than good.

As always, tweaking and quality control involve a balancing act. “You make a change, and it affects some queries positively and others negatively,” Mr. Manber says. “You can’t only launch things that are 100 percent positive.”

THE epicenter of Google’s frantic quest for perfect links is Building 43 in the heart of the company’s headquarters here, known as the Googleplex. In a nod to the space-travel fascination of Larry Page, the Google co-founder, a full-scale replica of SpaceShipOne, the first privately financed spacecraft, dominates the building’s lobby. The spaceship is also a tangible reminder that despite its pedestrian uses — finding the dry cleaner’s address or checking out a prospective boyfriend — what Google does is akin to rocket science.

At the top of a bright chartreuse staircase in Building 43 is the office that Mr. Singhal shares with three other top engineers. It is littered with plastic light sabers, foam swords and Nerf guns. A big white board near Mr. Singhal’s desk is scrawled with graphs, queries and bits of multicolored mathematical algorithms. Complaints from users about searches gone awry are also scrawled on the board.

Any of Google’s 10,000 employees can use its “Buganizer” system to report a search problem, and about 100 times a day they do — listing Mr. Singhal as the person responsible for squashing them.

“Someone brings a query that is broken to Amit, and he treasures it and cherishes it and tries to figure out how to fix the algorithm,” says Matt Cutts, one of Mr. Singhal’s officemates and the head of Google’s efforts to fight Web spam, the term for advertising-filled pages that somehow keep maneuvering to the top of search listings.

Some complaints involve simple flaws that need to be fixed right away. Recently, a search for “French Revolution” returned too many sites about the recent French presidential election campaign — in which candidates opined on various policy revolutions — rather than the ouster of King Louis XVI. A search-engine tweak gave more weight to pages with phrases like “French Revolution” rather than pages that simply had both words.

At other times, complaints highlight more complex problems. In 2005, Bill Brougher, a Google product manager, complained that typing the phrase “teak patio Palo Alto” didn’t return a local store called the Teak Patio.

So Mr. Singhal fired up one of Google’s prized and closely guarded internal programs, called Debug, which shows how its computers evaluate each query and each Web page. He discovered that Theteakpatio.com did not show up because Google’s formulas were not giving enough importance to links from other sites about Palo Alto.

It was also a clue to a bigger problem. Finding local businesses is important to users, but Google often has to rely on only a handful of sites for clues about which businesses are best. Within two months of Mr. Brougher’s complaint, Mr. Singhal’s group had written a new mathematical formula to handle queries for hometown shops.

But Mr. Singhal often doesn’t rush to fix everything he hears about, because each change can affect the rankings of many sites. “You can’t just react on the first complaint,” he says. “You let things simmer.”

So he monitors complaints on his white board, prioritizing them if they keep coming back. For much of the second half of last year, one of the recurring items was “freshness.”

Freshness, which describes how many recently created or changed pages are included in a search result, is at the center of a constant debate in search: Is it better to provide new information or to display pages that have stood the test of time and are more likely to be of higher quality? Until now, Google has preferred pages old enough to attract others to link to them.

But last year, Mr. Singhal started to worry that Google’s balance was off. When the company introduced its new stock quotation service, a search for “Google Finance” couldn’t find it. After monitoring similar problems, he assembled a team of three engineers to figure out what to do about them.

Earlier this spring, he brought his squad’s findings to Mr. Manber’s weekly gathering of top search-quality engineers who review major projects. At the meeting, a dozen people sat around a large table, another dozen sprawled on red couches, and two more beamed in from New York via video conference, their images projected on a large screen. Most were men, and many were tapping away on laptops. One of the New Yorkers munched on cake.

Mr. Singhal introduced the freshness problem, explaining that simply changing formulas to display more new pages results in lower-quality searches much of the time. He then unveiled his team’s solution: a mathematical model that tries to determine when users want new information and when they don’t. (And yes, like all Google initiatives, it had a name: QDF, for “query deserves freshness.”)

Mr. Manber’s group questioned QDF’s formula and how it could be deployed. At the end of the meeting, Mr. Singhal said he expected to begin testing it on Google users in one of the company’s data centers within two weeks. An engineer wondered whether that was too ambitious.

“What do you take us for, slackers?” Mr. Singhal responded with a rebellious smile.

THE QDF solution revolves around determining whether a topic is “hot.” If news sites or blog posts are actively writing about a topic, the model figures that it is one for which users are more likely to want current information. The model also examines Google’s own stream of billions of search queries, which Mr. Singhal believes is an even better monitor of global enthusiasm about a particular subject.

As an example, he points out what happens when cities suffer power failures. “When there is a blackout in New York, the first articles appear in 15 minutes; we get queries in two seconds,” he says.
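
The article does not reveal QDF's actual formula. But the core intuition, flagging a topic as "hot" when its current query rate spikes far above its long-run baseline, can be sketched in a few lines. Everything below (the function names, the smoothing constant, the threshold) is an invented illustration, not Google's method:

# Hypothetical sketch of a QDF-style "hotness" signal. The real model
# is secret; these names and numbers are invented for illustration.
def hotness(current_rate, baseline_rate, smoothing=1.0):
    # Ratio of recent query volume to the topic's historical baseline.
    # Smoothing keeps rarely queried topics from dividing by zero.
    return (current_rate + smoothing) / (baseline_rate + smoothing)

def deserves_freshness(current_rate, baseline_rate, threshold=5.0):
    # A topic whose query rate spikes well above its baseline is one
    # where users probably want new pages, not established ones.
    return hotness(current_rate, baseline_rate) >= threshold

# A blackout: queries jump from roughly 2 per second to 400 per second.
print(deserves_freshness(400.0, 2.0))  # True: favor fresh news pages
print(deserves_freshness(2.2, 2.0))    # False: favor time-tested pages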

Mr. Singhal says he tested QDF for a simple application: deciding whether to include a few news headlines among regular results when people do searches for topics with high QDF scores. Although Google already has a different system for including headlines on some search pages, QDF offered more sophisticated results, putting the headlines at the top of the page for some queries, and putting them in the middle or at the bottom for others.

GOOGLE’S breakneck pace contrasts with the more leisurely style of the universities and corporate research labs from which many of its leaders hail. Google recruited Mr. Singhal from AT&T Labs. Mr. Manber, a native of Israel, was an early examiner of Internet searches while teaching computer science at the University of Arizona. He jumped into the corporate fray early, first as Yahoo’s chief scientist and then running an Amazon.com search unit.

Google lured Mr. Manber from Amazon last year. When he arrived and began to look inside the company’s black boxes, he says, he was surprised that Google’s methods were so far ahead of those of academic researchers and corporate rivals.

“I spent the first three months saying, ‘I have an idea,’ ” he recalls. “And they’d say, ‘We’ve thought of that and it’s already in there,’ or ‘It doesn’t work.’ ”

The reticent Mr. Manber (he declines to give his age) would discuss his search-quality group only in the vaguest of terms. It operates in small teams of engineers. Some, like Mr. Singhal’s, focus on systems that process queries after users type them in. Others work on features that improve the display of results, like extracting snippets — the short, descriptive text that gives users a hint about a site’s content.

Other members of Mr. Manber’s team work on what happens before users can even start a search: maintaining a giant index of all the world’s Web pages. Google has hundreds of thousands of customized computers scouring the Web to serve that purpose. In its early years, Google built a new index every six to eight weeks. Now it rechecks many pages every few days.

And Google does more than simply build an outsized, digital table of contents for the Web. Instead, it actually makes a copy of the entire Internet — every word on every page — that it stores in each of its huge customized data centers so it can comb through the information faster. Google recently developed a new system that can hold far more data and search through it far faster than the company could before.

As Google compiles its index, it calculates a number it calls PageRank for each page it finds. This was the key invention of Google’s founders, Mr. Page and Sergey Brin. PageRank tallies how many times other sites link to a given page. Sites that are more popular, especially with sites that have high PageRanks themselves, are considered likely to be of higher quality.
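
Unlike most of Google's signals, PageRank's basic arithmetic is public; Page and Brin described it in their original Stanford paper. Here is a toy version over an invented four-page link graph, using the published damping factor of 0.85:

# Toy PageRank by power iteration. The 0.85 damping factor follows the
# published Page/Brin formulation; the four-page link graph is made up.
links = {
    "a": ["b", "c"],  # page "a" links to pages "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # 50 rounds is ample for scores to settle here
    new_rank = {}
    for p in pages:
        # Rank flows into p from every page q that links to it,
        # split evenly among q's outbound links.
        inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

print(rank)  # "c", with the most inbound links, scores highest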

Mr. Singhal has developed a far more elaborate system for ranking pages, which involves more than 200 types of information, or what Google calls “signals.” PageRank is but one signal. Some signals are on Web pages — like words, links, images and so on. Some are drawn from the history of how pages have changed over time. Some signals are data patterns uncovered in the trillions of searches that Google has handled over the years.

“The data we have is pushing the state of the art,” Mr. Singhal says. “We see all the links going to a page, how the content is changing on the page over time.”

Increasingly, Google is using signals that come from its history of what individual users have searched for in the past, in order to offer results that reflect each person’s interests. For example, a search for “dolphins” will return different results for a user who is a Miami football fan than for a user who is a marine biologist. This works only for users who sign into one of Google’s services, like Gmail.

(Google says it goes out of its way to prevent access to its growing store of individual user preferences and patterns. But the vast breadth and detail of such records is prompting lust among the nosey and fears among privacy advocates.)

Once Google corrals its myriad signals, it feeds them into formulas it calls classifiers that try to infer useful information about the type of search, in order to send the user to the most helpful pages. Classifiers can tell, for example, whether someone is searching for a product to buy, or for information about a place, a company or a person. Google recently developed a new classifier to identify names of people who aren’t famous. Another identifies brand names.

These signals and classifiers calculate several key measures of a page’s relevance, including one it calls “topicality” — a measure of how the topic of a page relates to the broad category of the user’s query. A page about President Bush’s speech about Darfur last week at the White House, for example, would rank high in topicality for “Darfur,” less so for “George Bush” and even less for “White House.” Google combines all these measures into a final relevancy score.
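
How the 200-plus signals actually combine is exactly what Google keeps secret. Purely to fix the shape of the idea, here is a minimal sketch in which each signal gets a weight and the weighted sum becomes the page's score; the signal names and weights are invented:

# Illustration only: a weighted combination of per-page signals into a
# single relevance score. The signal names and weights are invented;
# Google's real function and its 200+ signals are not public.
def relevance(signals, weights):
    return sum(weights.get(name, 0.0) * value
               for name, value in signals.items())

weights = {"pagerank": 0.4, "topicality": 0.5, "freshness": 0.1}

page = {"pagerank": 0.8, "topicality": 0.9, "freshness": 0.2}
print(relevance(page, weights))  # about 0.79 on this invented scale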

The sites with the 10 highest scores win the coveted spots on the first search page, unless a final check shows that there is not enough “diversity” in the results. “If you have a lot of different perspectives on one page, often that is more helpful than if the page is dominated by one perspective,” Mr. Cutts says. “If someone types a product, for example, maybe you want a blog review of it, a manufacturer’s page, a place to buy it or a comparison shopping site.”

If this weren’t excruciating enough, Google’s engineers must compensate for users who are not only fickle, but also vague about what they want; often, they type in ambiguous phrases or misspelled words.

Long ago, Google figured out that users who type “Brittany Speers,” for example, are really searching for “Britney Spears.” To tackle such a problem, it built a system that understands variations of words. So elegant and powerful is that model that it can look for pages when only an abbreviation or synonym is typed in.
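
The article does not say how Google's speller works internally. A standard textbook ingredient for this kind of correction, offered here only as a generic sketch, is edit distance: a rare query that sits a few single-character edits away from a vastly more common one is a good misspelling candidate:

# Classic Levenshtein edit distance: the minimum number of single-
# character insertions, deletions and substitutions turning one string
# into another. This is a generic technique, not Google's disclosed
# method; a small distance from a far more popular query suggests a typo.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

print(edit_distance("brittany speers", "britney spears"))  # 4: close enough to flag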

Mr. Singhal boasts that the query “Brenda Lee bio” returns the official home page of the singer, even though the home page itself uses the term “biography” — not “bio.”

But words that seem related sometimes are not related. “We know ‘bio’ is the same as ‘biography,’ ” Mr. Singhal says. “My grandmother says: ‘Oh, come on. Isn’t that obvious?’ It’s hard to explain to her that bio means the same as biography, but ‘apples’ doesn’t mean the same as ‘Apple.’ ”

In the end, it’s hard to gauge exactly how advanced Google’s techniques are, because so much of what it and its search rivals do is veiled in secrecy. In a look at the results, the differences between the leading search engines are subtle, although Danny Sullivan, a veteran search specialist and blogger who runs Searchengineland.com, says Google continues to outpace its competitors.

Yahoo is now developing special search formulas for specific areas of knowledge, like health. Microsoft has bet on ranking pages with a mathematical technique known as neural networks, which tries to mimic the way human brains learn information.

Google’s use of signals and classifiers, by contrast, is more rooted in current academic literature, in part because its leaders come from academia and research labs. Still, Google has been able to refine and advance those ideas by using computer and programming resources that no university can afford.

“People still think that Google is the gold standard of search,” Mr. Battelle says. “Their secret sauce is how these guys are doing it all in aggregate. There are 1,000 little tunings they do.”
http://www.nytimes.com/2007/06/03/bu.../03google.html





The Superfabulous World of Rufus Wainwright
Melena Ryzik

A few weeks ago, on his way to an appearance at the Union Square Barnes & Noble to promote his new album, “Release the Stars,” Rufus Wainwright decided his all-black outfit was a little dour for a meet-and-greet. “Fashion emergency!” he said. So he dashed down to the antique-jewelry shop below his apartment and picked up a 1920s Czech glass necklace, which sparkled atop his black T-shirt.

Good thing too: Mr. Wainwright’s fans expect a little flash. Around 800 of them came, some lining up as early as noon for a 7 p.m. appearance, to see him perform and autograph their CDs. Afterward they lingered, snapping photos of him and discussing their devotion.

“I’ve learned a lot about myself listening to his music,” one man gushed.

Does Rufus Wainwright know he’s fabulous?

“I do feel like I live a fabulous life,” he said over afternoon dumplings at a Japanese teahouse near the Gramercy Park home he shares with his new boyfriend, Jörn Weisbrodt, an arts administrator. “And I know that’s why a lot of the critics get so mad at me sometimes, because they’re just really jealous.”

He must be expecting an onslaught, because Mr. Wainwright, 33, the singer-songwriter-rhinestone-lover, has been superfabulous lately. His re-creation of Judy Garland’s 1961 Carnegie Hall concert garnered praise and awe, and he received a commission from the Metropolitan Opera. “Release the Stars” immediately became a best seller in Britain when it arrived last month and has been a critical hit in the United States. He did five sold-out shows at the Old Vic in London, and tomorrow he will begin a run of four sold-out nights at the Blender Theater at Gramercy in Manhattan.

In New York he was in the midst of a publicity — and fashion — blitz: The night after Barnes & Noble he appeared on “Late Show With David Letterman” wearing lederhosen he had custom-made in Austria by a 25th-generation artisan who also fitted the Porsche family.

But the glam life is not without pitfalls. Watching that show with a few friends from a private dining room at the boutique Hotel on Rivington, Mr. Wainwright had an epiphany: “It’s all commercials. It’s horrifying how many commercials there are and how it just ruins the experience.”

But, he was quick to add, “I’d still love to be on ‘Oprah’ and, you know, have them visit my crib or whatever.”

Scale has always been a tricky issue for Mr. Wainwright. Though he casually refers to himself as a superstar — in a tone that’s a few notes short of irony — his last few albums were more like cult hits. In the United States, “Release the Stars,” which made its debut at No. 23 on the Billboard chart with just over 24,000 copies sold, gave him his highest ranking yet; by contrast, the new album from another indie favorite, Wilco, came in at No. 4 that week. And it was only recently that he became too well known to have a profile posted on a gay cruising site. (The administrators took it down, thinking it was fake.)

In the United States and Britain his most loyal audience tends to be gay men, teenagers and mother-daughter fans. (Several sets turned up at Barnes & Noble.) “There’s a tinge of sadness to their devotion,” he said. “It relates with the alienation that I bring up. So I still feel somewhat subversive, which is nice.”

To promote that approach “Release the Stars” was meant to have an underground feel, recorded in Brooklyn with a Berlin detour to “go totally electroclash, get a weird haircut and maybe take up drug addiction again or something,” Mr. Wainwright said. Instead, when he got to Germany, “this kind of wave of, like, romanticism and grandiosity and sort of high culture really took hold of me,” he said.

His critics — the jealous ones — might suggest he has always been in thrall to ostentation, so Mr. Wainwright enlisted Neil Tennant of the Pet Shop Boys to rein him in. It worked, sort of. “Remarkably, Mr. Wainwright infuses ‘Release the Stars’ with enough honest emotion to overcome the grandiosity, or at least undercut it a bit,” the critic Nate Chinen wrote in The New York Times.

Mr. Wainwright would like to make a solo piano record, and several albums with his musical family. (His parents are the folkies Kate McGarrigle and Loudon Wainwright III; his sisters, Martha Wainwright and Lucy Wainwright Roche, often join his tours.) But he talks most excitedly — and most often — about his opera, a tale of a (fabulous) day in the life of a diva.

“I really believe that opera’s a language,” he said. “I think it’s a whole parallel, separate world where all those characters exist. And once a composer of opera realizes that or discovers who those people are, as I have with this character who I’m writing about, it’s your mission to breathe life into this other being.”

So Mr. Wainwright is in no danger of deflating. What would make his life more fabulous? “I’d love to play Madison Square Garden,” he said, “and get hounded and lose all sense of dignity.”
http://www.nytimes.com/2007/06/04/ar...ic/04wain.html





Saatchi Gets the Boot Over the Use of a Dead Rock Star in Advertising
Eric Pfanner

Dead celebrities are hot. From Einstein to Elvis and Gene Kelly to Orville Redenbacher, they keep popping up as posthumous pitchmen for everything from cars to cola. But when Saatchi & Saatchi London recently featured images of Kurt Cobain and other dead rock stars in ads for Dr. Martens footwear, the agency and its client got burned.

Courtney Love, Cobain's widow, went ballistic when she heard about the ads, which ran in a small British music magazine called Fact. One of the ads shows Cobain, who was the lead singer of the band Nirvana, sitting on a cloud in the sky, draped in robes and shod in Dr. Martens boots.

"She thinks it's outrageous that a company is allowed to commercially gain from such a despicable use of her husband's picture," a spokeswoman for Love told People magazine.

It was hardly the first time that Cobain's image had been used for commercial gain. According to Forbes magazine, which compiles a list of the earning power of dead celebrities, Cobain came in first last year, just ahead of Elvis Presley, reeling in $50 million for his estate.

In this case, however, what started out as a legitimate use of the photo of Cobain, along with images of Sid Vicious of the Sex Pistols, Joey Ramone of the Ramones, and Joe Strummer of the Clash, went wrong because of a series of missteps and the border-hopping power of the Internet.

Saatchi said it found the images in the Corbis photo library and obtained copyright clearance to use them in Britain.

The trouble began when an employee - disobeying instructions, Saatchi insisted - submitted the images to www.adcritic.com, a U.S.-based ad industry Web site. In the United States, the estates of dead celebrities are allowed to control the use of their images, unlike in Britain, where, lawyers say, no approval is needed.

A spokeswoman for Saatchi, Eleanor Conroy, said the employee who was responsible for the breach had been dismissed.

"While we believe the creative is a beautiful tribute to four legendary musicians, the individual broke both agency and client protocol in this situation by placing the ads on a U.S. advertising Web site and acting as an unauthorized spokesperson for the company," Kate Stanners, executive creative director at Saatchi & Saatchi London, said in a statement.

Sending ads to sites like AdCritic is common, particularly when an agency or ad executive is trying to "seed" them so that they can spread "virally" on the Internet. Creative types like to do this in order to generate chatter about their ads, which is helpful when awards season rolls around. Clients rarely complain, because they get free advertising.

In this case, however, Airwair International, the British company that makes Dr. Martens, was not impressed. It canceled its contract with Saatchi & Saatchi, reportedly worth £5 million, or $9.9 million, over three years.

David Suddens, chief executive of Airwair, said Saatchi first approached his company's marketing department in February with sketches outlining the idea for the ads.

"We said firmly, 'no way,' " he said.

But dead celebrities are trendy in advertising at the moment. Gene Kelly appeared in a recent British ad for Volkswagen; a digitally generated likeness of Orville Redenbacher starred in U.S. ads for the eponymous popcorn brand.

Suddens said Saatchi wanted to give the idea another try, developing the sketches into proposed ads, and showed them to Airwair in April. Agency executives said they wanted to have the images published at least once so that they could be submitted for awards, he added. An Airwair executive finally agreed to allow the ads to be used in Britain only, and only in Fact magazine, Suddens said.

The company has apologized to Love, and Suddens said he was unmoved by Saatchi's argument that the use of the ads was legitimate in Britain. "Enough people said it was offensive for us to consider it offensive," he said.

Airwair has focused its marketing on a campaign called "Freedm," featuring a Web site that invites would-be rock stars and other artists to post their creations.

Many marketing executives have been wary of user-generated content, fearing the loss of the editing function that ad agencies provide. In this case, however, it was the pros who got their clients in trouble.
http://www.iht.com/articles/2007/06/...iness/ad04.php





Networks Try New Ways to Keep Eyes on Ads
Louise Story

To many people, the commercial break is when you use the restroom, change the channel or grab a bite to eat.

But television networks, under pressure from advertisers who want their commercials to be seen, are trying to change that.

Fixing the commercial break is the most pressing topic of discussions this spring between the U.S. networks and advertisers as they negotiate television ad sales deals for the next year. Pressure to keep viewers tuned in is greater because of the rising number of digital video recorders, which allow people to fast-forward through commercials.

All of the networks say they are experimenting with ways to hold viewers' attention during commercials. But Viacom, the media conglomerate that owns cable networks like MTV, VH1, Comedy Central and Spike, is describing more specifics than most. Viacom says it has come up with several formats that could help keep people tuned in, and its creative teams have been involved in some of the production.

The changes include creating commercials that last for an entire break, integrating stars from programs into the ads and developing story lines that run through the ads. MTV and VH1 have decided to change from two commercial breaks per half-hour to three, so that each one can be shorter.

Viacom executives said that their networks' youthful audiences have made it acutely important for them to innovate.

"They're younger, they're more quickly engaged and more quickly dismissive if they're not engaged," said Judy McGrath, chairman and chief executive of MTV Networks, which includes MTV, Logo and Comedy Central.

In one example from February, a Verizon Wireless ad ran during a live telecast of a concert on MTV, but the camera never stopped showing the concert in the background. As the band Fall Out Boy continued to play, an image of a Verizon Wireless phone appeared in the center of the screen; viewers did not have to miss a beat of the concert and could still hear the crowd cheering.

"The whole goal here is to blur the line between content and advertising message," said Hank Close, executive vice president for ad sales at MTV Networks.

Logo, the lesbian and gay network, is currently showing a two-minute commercial for Subaru that tells the story of a couple of women who like to take extreme athletic trips together. Only at the end of the ad does the Subaru logo appear.

MTV Networks has been involved in creating these ads, and in many cases the advertisers are able to run the ads elsewhere if they choose, Close said. The beverage company Dr Pepper, for instance, liked an ad that VH1 created for it so much that it posted the commercial on its Web site.

One of the problems the networks face is letting viewers know about the new content during commercials, Close said. Viacom has been promoting some commercials before it shows them.

Last November, for example, six Viacom networks showed the "Spider-Man 3" trailer at the same time. It was the first time that the trailer had appeared anywhere, and leading up to its airing, actors promoted it during content time on the networks.

Kirsten Dunst, an actress starring in the movie, did a spot on MTV encouraging viewers to tune in to see the premiere of the movie trailer.

Similarly, MTV was the first to air an extended version of the first Nintendo Wii commercial during a block of programming in November called Gamer's Week 2.0. After the ad, a host of Gamer's Week 2.0 told the audience that they had just seen a world premiere of the Wii commercial.

"The commercial was so integrated into the context of the messaging of the program that it felt very synonymous," said John Shea, executive vice president of integrated marketing for MTV Networks Music Group and for the Logo Group. "Whenever we can turn what has been thought of as a break into an entertaining moment, we're doing that."

On a new Logo show this month, "The Big Gay Comedy Sketch Show," Amp'd Mobile, a U.S. cellphone company, sponsored an ad that stars an actor from the show "The Gay Werewolf." In the commercial, which filled the entire two-and-a-half-minute commercial break, the actor played his character from the show, a man who turns gay whenever there is a full moon.

Not coincidentally, the werewolf in the ad likes to call his mom on his Amp'd Mobile phone.

MTV last summer created a new character, a woman named Parker, to turn commercials for Herbal Essences hair care products into more entertaining content. There were five serial ads that led up to the network's Video Music Awards. First Parker sneaked into the theater where the awards show would be held, and then she was mistaken for a presenter and pushed onto the stage.

Finally, for all the television viewers who had been following the Herbal Essences ad story line, Parker appeared on the red carpet during the awards show - and waved to viewers as if she were a real celebrity rather than the star of a few television commercials.
http://www.iht.com/articles/2007/05/...ess/adco22.php





Nielsen: DVRs Behind Viewer Drop
Paul J. Gough

Nielsen Media Research said Thursday that the impact of digital video recorders is a leading cause of the precipitous drop in television viewing this year.

Many of the top shows -- from ABC's "Grey's Anatomy" to Fox's "American Idol" to CBS' "CSI" -- saw their ratings drop in the spring.

Prompted this month by questions from NBC, Nielsen began an investigation into the factors that could have led to the slide. Nielsen's probe is almost complete, but in the meantime the company has discovered several things.

"DVRs appear to be the largest factor in that," said Pat McDonough, Nielsen senior vp planning policy and analysis.

But there are other factors in Nielsen's early findings, among them the difference between an Olympic year (2006) and a non-Olympic year (2007) as well as a higher number of repeat programs this spring than in previous years.

Nielsen also said that, ahead of this year's upfront negotiations, it would offer average commercial-minute ratings in an electronic file that includes shows from April 30 forward, across six streams: live, live plus same-day DVR playback, and live plus 1, 2, 3 and 7 days of DVR playback.

The ratings measure the audience for commercials during a given program, excluding the program itself as well as any public service announcements, promotions or other noncommercial time. TV networks and ad agencies traditionally use program ratings to approximate commercial ratings, but now with the weekly data they'll be able to come closer than ever before, said Sara Erichson, executive vp client services at Nielsen.

Nielsen also released data that said NBC's "The Office" topped the week of April 30 for commercial viewing on DVRs compared with live viewing. Also scoring higher were Fox's "Family Guy" and "Bones," the CW's "Smallville" and "Grey's."

Nielsen found that 58% of broadcast prime viewing during that week among adults 18-49 in DVR households was live, compared with 85% for cable primetime and 84% for syndicated. The majority (95%-99%) of viewing of all types was done within the first 75 hours of original air, which is covered by the live-plus-3 system.

The Hollywood Reporter and Nielsen Media Research are both owned by the Nielsen Company.
http://www.hollywoodreporter.com/hr/...ed3e283c6b581b





TV Advertising Sound Levels

Since assuming responsibility for TV advertising regulation, the ASA has received hundreds of complaints from viewers objecting to what they have found to be “noisy ads”. In many of those instances, the ASA has found that the ads have been broadcast at levels that are acceptable according to the current stipulations of the BCAP TV Advertising Standards Code. Concerned at the apparent incongruity between viewer perception and the realities of TV ad broadcasting, BCAP has been monitoring the ad sound levels issue and now proposes that the current rule on sound levels be changed. But what does it all mean for the viewer?

Sound level standards – the technicalities (you may wish to skip to the next section!)
Sound levels in TV broadcasting have traditionally been measured using Peak Programme Meters (PPMs), which indicate the peak sound levels (the loudest part or parts) of ads’ soundtracks. Currently, the BCAP TV Advertising Standards Code states that: “ads must not be excessively noisy or strident. Studio transmission power must not be increased from normal levels during advertising.”

To comply with the rule, broadcasters are told that the peak sound level at the studio output should not exceed a level of 6 on the PPM; because a lot of advertisers today use sound compression techniques, a lower maximum level of 4 on the PPM is imposed on ads with a highly compressed soundtrack. Compressing an ad’s soundtrack is akin to levelling out the ‘peaks and troughs’ of the sound waves: the peak levels of the compressed version are the same as those of the “natural” version, but the ‘troughs’ are raised.

The overall effect is that the ad sounds subjectively louder. An ad break may also occur during a quiet moment in a programme, further increasing the perceived loudness of the ads that follow.
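
A minimal numeric sketch of that trick, with invented sample values: the peaks stay where they were, so a peak meter reads the same, while everything below a threshold is pushed upward and the ad's average level rises:

# Sketch of the soundtrack compression described above: samples below a
# threshold are raised, peaks are untouched, so the PPM reading is
# unchanged while the ad sounds louder. All numbers are illustrative.
def compress(samples, ratio=3.0, threshold=0.5):
    out = []
    for s in samples:
        level, sign = abs(s), (1 if s >= 0 else -1)
        if level < threshold:
            # Quiet samples are pushed up toward the threshold.
            level = threshold * (level / threshold) ** (1.0 / ratio)
        out.append(sign * level)
    return out

soundtrack = [0.05, 0.1, 0.2, 0.9]             # peak sample is 0.9
squashed = compress(soundtrack)
print(max(squashed))                           # still 0.9: same peak reading
print(sum(squashed) / 4, sum(soundtrack) / 4)  # but a higher average level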

What is to be done?
BCAP’s proposed rule encourages broadcasters to measure the subjective loudness levels of ads, rather than simply measuring peak sound levels. Subjective loudness is based on:

*the peak levels of sounds
*the length of time sound levels are maintained
*the different frequencies, or ‘pitches’, contained in the soundtrack.

Under the proposed rule, broadcasters will be told that “A consistent subjective loudness must be maintained between individual advertisements and between the advertisements and programme and other junction material.”

The proposed new rule gives broadcasters much more technical guidance on how they can ensure greater consistency between ad and programme sound levels. The International Telecommunications Union (ITU) has drawn up recommendations on measuring subjective loudness levels; those recommendations have been incorporated into the proposed rule. Broadcasters will now have the option to carry out their testing in accordance with the following ITU recommendations:

Algorithms to measure audio programme loudness and true-peak audio level (ITU-R BS.1770)

and

Requirements for loudness and true-peak indicating meter (ITU-R BS.1771)
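
The BS.1770 algorithm itself involves a "K-weighting" filter and gating, far more than fits here. As a heavily simplified stand-in, the sketch below shows only the underlying shift the proposal asks for: judging a signal by its average energy over time instead of by its instantaneous peak:

import math

# Heavily simplified stand-in for a loudness measure. Real ITU-R BS.1770
# metering applies K-weighting and gating before averaging; this sketch
# keeps only the core contrast between peak level and average energy.
def peak_db(samples):
    return 20 * math.log10(max(abs(s) for s in samples))

def mean_square_db(samples):
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

# Two signals with identical peaks: a brief click vs. a sustained tone.
click = [0.0] * 99 + [0.8]
tone = [0.8] * 100
print(peak_db(click), peak_db(tone))                # same peak reading
print(mean_square_db(click), mean_square_db(tone))  # about 20 dB apart in loudness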

What should it all mean?
If the proposal is accepted, BCAP considers that broadcasters should be better able to match the sound levels of ads with the sound output of the whole channel. That means there should be less of a perceived imbalance between ad and programme sound levels, leading to less viewer irritation and fewer complaints to the ASA.

What now?
BCAP is consulting on its proposed rule. The purpose of the consultation is not simply to ask if TV ads are too loud: BCAP already acknowledges that some viewers perceive that they are. Instead, the purpose of the consultation is to encourage technically informed responses as to whether the proposed rule will give broadcasters enough guidance to ensure that there will be less of a perceived disparity between the sound levels of TV ads and programmes.

The consultation will close on Friday 3 August 2007 at 5pm. Any responses received will be considered and then a final decision made. BCAP will announce its decision later in the year.

Further information on the consultation can be found here.
http://www.asa.org.uk/asa/focus/back...und+Levels.htm




Why Music Really is Getting Louder
Adam Sherwin

Dad was right all along – rock music really is getting louder and now recording experts have warned that the sound of chart-topping albums is making listeners feel sick.

That distortion effect running through your Oasis album is not entirely the Gallagher brothers’ invention. Record companies are using digital technology to turn the volume on CDs up to “11”.

Artists and record bosses believe that the best album is the loudest one. Sound levels are being artificially enhanced so that the music punches through when it competes against background noise in pubs or cars.

Britain’s leading studio engineers are starting a campaign against a widespread technique that removes the dynamic range of a recording, making everything sound “loud”.

“Peak limiting” squeezes the sound range to one level, removing the peaks and troughs that would normally separate a quieter verse from a pumping chorus.

The process takes place at mastering, the final stage before a track is prepared for release. In the days of vinyl, the needle would jump out of the groove if a track was too loud.
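
As a rough sketch of what that mastering move does (the numbers are invented, and real limiters are far more subtle): clamp everything above a ceiling, then apply make-up gain, and the quiet verse and the loud chorus come out at nearly the same level:

# Rough sketch of peak limiting with make-up gain: clamp samples above a
# ceiling, then scale the whole track back up to full scale. A quiet
# verse and a loud chorus end up at nearly one level. Values invented.
def limit_and_boost(samples, ceiling=0.4):
    limited = [max(-ceiling, min(ceiling, s)) for s in samples]
    gain = 1.0 / ceiling               # make-up gain back to full scale
    return [round(s * gain, 3) for s in limited]

verse = [0.1, -0.15, 0.2, -0.1]        # quiet passage
chorus = [0.8, -0.9, 1.0, -0.7]        # pumping chorus
print(limit_and_boost(verse))          # [0.25, -0.375, 0.5, -0.25]: louder
print(limit_and_boost(chorus))         # [1.0, -1.0, 1.0, -1.0]: pinned flat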

But today musical details, including vocals and snare drums, are lost in the blare and many CD players respond to the frequency challenge by adding a buzzing, distorted sound to tracks.

Oasis started the loudness war, and recent albums by Arctic Monkeys and Lily Allen have pushed the loudness needle further into the red.

The Red Hot Chili Peppers’ Californication, branded “unlistenable” by studio experts, is the subject of an online petition calling for it to be “remastered” without its harsh, compressed sound.

Peter Mew, senior mastering engineer at Abbey Road studios, said: “Record companies are competing in an arms race to make their album sound the ‘loudest’. The quieter parts are becoming louder and the loudest parts are just becoming a buzz.”

Mr Mew, who joined Abbey Road in 1965 and mastered David Bowie’s classic 1970s albums, warned that modern albums now induced nausea.

He said: “The brain is not geared to accept buzzing. The CDs induce a sense of fatigue in the listeners. It becomes psychologically tiring and almost impossible to listen to. This could be the reason why CD sales are in a slump.”

Geoff Emerick, engineer on the Beatles’ Sgt. Pepper album, said: “A lot of what is released today is basically a scrunched-up mess. Whole layers of sound are missing. It is because record companies don’t trust the listener to decide themselves if they want to turn the volume up.”

Downloading has exacerbated the effect. Songs are compressed once again into digital files before being sold on iTunes and similar sites. The reduction in quality is so marked that EMI has introduced higher-quality digital tracks, albeit at a premium price, in response to consumer demand.

Domino, Arctic Monkeys’ record company, defended its band’s use of compression on their chart-topping albums as a way of making their music sound “impactful”.

Angelo Montrone, an executive at One Haven, a Sony Music company, said the technique was “causing our listeners fatigue and even pain while trying to enjoy their favourite music”.

In an open letter to the music industry, he asked: “Have you ever heard one of those test tones on TV when the station is off the air? Notice how it becomes painfully annoying in a very short time? That’s essentially what you do to a song when you super-compress it. You eliminate all dynamics.”

Mr Montrone released a compression-free album by Texan roots rock group Los Lonely Boys which sold 2.5 million copies.

Val Weedon, of the UK Noise Association, called for a ceasefire in the “loudness war”. She said: “Bass-heavy music is already one of the biggest concerns for suffering neighbours. It is one thing for music to be loud but to make it deliberately noisy seems pointless.”

Mr Emerick, who has rerecorded Sgt. Pepper on the original studio equipment with contemporary artists, admitted that bands have always had to fight to get their artistic vision across.

He said: “The Beatles didn’t want any nuance altered on Sgt. Pepper. I had a stand-up row with the mastering engineer because I insisted on sitting in on the final transfer.”

The Beatles lobbied Parlophone, their record company, to get their records pressed on thicker vinyl so they could achieve a bigger bass sound.

Bob Dylan has joined the campaign for a return to musical dynamics. He told Rolling Stone magazine: “You listen to these modern records, they’re atrocious, they have sound all over them. There’s no definition of nothing, no vocal, no nothing, just like – static.”

Studio sound

— The human ear responds to the average sound across a piece of music rather than peaks and crescendos. Quiet and loud sounds are squashed together, decreasing the dynamic range, raising the average loudness

— The saturation level for a sound signal is digital full scale, or 0dB. In the 1980s, the average sound level of a track was -18dB. The arrival of digital technology allowed engineers to push finished tracks closer to the loudest possible, 0dB

— The curves of a sound wave, which represent a wide dynamic range, become clipped and flattened to create “square waves,” which generate a buzzing effect and digital distortion on CD players (a toy demonstration follows below)
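
A toy demonstration of that last point, with invented gain values: drive a sine wave past digital full scale and hard-clip it, and the waveform flattens toward a square wave while its average level climbs toward 0dB:

import math

# Toy demonstration: a sine wave driven past digital full scale (1.0)
# and hard-clipped flattens toward a square wave, and its average level
# climbs from about -3dB toward 0dB. The gain figures are invented.
def clipped_sine(gain, n=1000):
    return [max(-1.0, min(1.0, gain * math.sin(2 * math.pi * t / n)))
            for t in range(n)]

def average_level_db(samples):
    # Average (mean-square) level relative to digital full scale.
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

print(average_level_db(clipped_sine(1.0)))  # clean sine: about -3dB
print(average_level_db(clipped_sine(8.0)))  # clipped hard: close to 0dB
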
http://entertainment.timesonline.co....cle1878724.ece





Converters Signal a New Era for TVs
Jacques Steinberg

At midnight on Feb. 17, 2009, the rabbit ears and the rooftop antennas that still guide television signals into nearly 1 of every 5 American homes will be rendered useless — unless they are tethered to a new device, two versions of which were unveiled yesterday, that the government will spend as much as $80 a household to help families buy.

The V-shaped rabbit ears, which have stood sentry in some living rooms and dens since the early 1950s, risk going the way of the eight-track tape player or Betamax in 20 months because that is when local television stations will cease sending their signals over the analog airwaves, and instead begin transmitting their programming exclusively over the more modern digital spectrum.

The change, which was set in motion by Congress and the Federal Communications Commission in the mid-1990s, is being made at least partly to give viewers a better picture and to make it easier for stations to broadcast their signals in high definition.

“The moment coming is the end of something that has been around for 60 years — conventional television — and it has been a wonderful era,” said Richard E. Wiley, a former chairman of the F.C.C. who led a government advisory panel on what was then known as “advanced television” from 1987 to 1995.

“With that ending will come this new digital world, this much greater world,” Mr. Wiley said, “but many people aren’t yet ready or haven’t gotten the word.”

Those families still using antennas on their roofs or atop their sets to watch David Letterman or “Desperate Housewives” — nearly 20 million homes, according to government figures — will eventually be unable to see their favorite programs, at least not without a digital-ready television or a converter that will serve to translate the new signals for old TVs and their antennas. (Those viewers who already get their television from satellite or cable providers are not expected to have much disruption.)

That is where the government vouchers come in. Yesterday, the National Association of Broadcasters, the powerful trade lobby representing the nation’s television networks and stations, lifted the curtain on two prototypes for those basic digital converters — one made by LG, the other by Thomson, which is distributed under the RCA brand — that will start appearing in electronic and department stores in January, at an expected cost of about $50 to $70.

To ensure that uninterrupted access to free, over-the-air television does not pose a financial hardship for viewers, a government agency with a name that sounds as if it was borrowed from the old Soviet Union — the National Telecommunications and Information Administration — will issue $40 gift cards to consumers who want to buy the converters so they are not left behind when television as we have always known it goes dark in early 2009.

Beginning in January, consumers may apply for up to two coupons each, for a total of $80. (More information on the program is available at an F.C.C. Web site, www.dtv.gov, or the broadcasters’ site at www.dtvanswers.com.)

All told, the government has set aside $1.5 billion to help viewers pay for the converters, although it expects to recoup that cost — and more — by later auctioning off the portion of the broadcast spectrum being vacated by the TV stations.

While some of the unused spectrum will be given to public safety agencies like police and fire departments — because those frequencies are useful at passing through buildings and walls — much of it will be bought by cellular and other wireless companies seeking to expand their services.

The legislation establishing the $40 coupons was passed by Congress in late 2005, with the support of telecommunications and software companies. At least some of those companies expected either to manufacture the digital converters or to bid for the older frequencies being returned by the stations.

Consumer groups, however, have expressed concern that some families will have neither the means to buy the converters nor the awareness to successfully obtain the vouchers.

The broadcasters’ association said yesterday that it was embarking on a public service campaign intended to ensure that viewers know they have to update their equipment or risk losing the ability to watch TV.

When the value of the advertising time being donated by the stations is taken into account, the broadcasters estimate the value of their awareness campaign at $100 million.

“Our No. 1 goal,” said Shermaze Ingram, a spokeswoman for the broadcasters’ association, “is that no one loses TV reception because of a lack of information.”

Among the advantages of digital television being promoted by the broadcasters is that the signal required is so compressed, or efficient, that a station may be able to send out four streams of programming where once it had only one. WETA, a public television station in Washington, has already begun transmitting a supplemental digital channel aimed solely at families, in addition to its regular programming.

That the digital signal provides a far superior on-screen image to its analog forebear was made evident yesterday morning, when representatives of the broadcasters’ association set up a makeshift experiment in a conference room at the Hearst building in Manhattan.

On one side was a television receiving a traditional over-the-air signal from a local Fox affiliate, its picture grainy and chattering with static. Next to it was a television tuned to the same station, this one with its antennas connected to the converter prototype made by LG, which was not much bigger than a cigar box.

The image on its screen was as clear as if it were coming from a DVD player.
http://www.nytimes.com/2007/06/07/te...07digital.html





Away From the Set but Not Away From HDTV
Stephen C. Miller

It’s summertime, and catching your favorite TV shows is not so easy when you are on vacation or a weekend outing. A cheap, portable TV is one option, but what if you’ve developed a taste for high definition? Plextor has come out with its Mini Digital HDTV Receiver, which connects to a laptop’s U.S.B. port. The receiver, which sells for $99 and is available at www.plextor.com, has an external antenna and software that scans for over-the-air HD signals.

Plextor’s receiver works only with computers running Windows XP or Vista. The more computing horsepower you have — both memory and graphics — the better your viewing experience. Underpowered systems can produce a picture that occasionally stutters.

The software, called ArtecMedia from Ultima, allows you to switch channels and record and play back shows. You can also set a schedule of shows to record. Of course, the PC and the software must be running to time-shift shows this way. The receiver draws its power from the laptop, so you may want to plug in before you veg out.
http://www.nytimes.com/2007/06/07/te...gy/07hdtv.html





Everything You Ever Wanted to Know About Video Codecs
David Field

Back in the Paleolithic age – just after man had discovered stone tools and how to load CD-ROMs into caddies – stunning pixelated video played back from our mighty 486s. In short, we have video codecs to thank for that and everything that evolved from it.

In the many years since that early time, video quality has increased thanks to ever-expanding storage capacities and processing power – the two building blocks of high quality digital video. At the same time we also developed a taste for better quality from physical media, and for video with a bitrate small enough to be streamed over our measly bandwidth limits.

All codecs (a contraction of coder/decoder), whether they be for video, audio or some other type of data, exist to change a long bit series into a shorter bit series and back again, or at least into something that closely resembles the original. This magic process is called ‘compression’ (and the reverse, ‘decompression’).

It’s taken its sweet time, but high definition video is finally here in the form of HD-DVD and Blu-ray. Although they’re both a huge jump up from DVDs, there’s very little difference between the two formats: They provide 1920 x 1080 (or 1080p) resolution, have considerably more space and higher transfer speeds than DVDs and both use the same codecs to decrease the size of the original video so it can be shoehorned onto the disc.

If you are wondering why optical discs still need to use compression to hold a movie when they can now store anywhere from 15 to 50GB and transfer data at a minimum of 36Mb/s, ask this question instead: How much space do you need for true high definition video?

Movies in space
Here’s a fun and jaw-dropping fact about digital video: At a post-production house, an uncompressed two-hour film in digital cinema resolution and quality will clock in at about 12 terabytes, not counting the 9 to 18 gigabytes for the accompanying 16 channels of 48 or 96kHz audio.

Some of this can be explained away when you consider digital cinema’s 4096 x 2160 (or 4K) resolution, but the data rate is still monstrous – far too high for commercial cinemas to read and project, let alone store. This is why digital films are perfectly – or ‘losslessly’ – compressed to no more than 500GB, resulting in visually identical footage that requires a bit of decoding processor muscle.
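If you want to sanity-check those numbers, the back-of-the-envelope arithmetic is easy enough to run yourself. Here’s a minimal Python sketch, assuming a three-channel master at 16 bits per channel and 24fps (the bit depth is our assumption, not a figure from the article):

# Rough size of an uncompressed two-hour film at digital cinema resolution.
# Assumed (not from the article): 3 colour channels at 16 bits each, 24fps.
width, height = 4096, 2160            # '4K' digital cinema resolution
bytes_per_pixel = 3 * 2               # 3 channels x 16 bits
fps, seconds = 24, 2 * 60 * 60        # two hours of footage

frame_bytes = width * height * bytes_per_pixel
total_bytes = frame_bytes * fps * seconds
print(f"one frame : {frame_bytes / 2**20:.1f} MiB")
print(f"data rate : {frame_bytes * fps / 2**20:.0f} MiB/s")
print(f"two hours : {total_bytes / 10**12:.1f} TB")

That lands around 9TB – the same ballpark as the figure above, with headroom, metadata and deeper masters accounting for the rest.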

Even after you account for the drop in resolution from 4K to 1080p, it’s still clear that no consumer format has enough space to deliver this kind of perfectly reproduced image quality. And that’s just the film – we haven’t even thought about the space needed for the extra features we’ve come to expect from our discs yet. This is where ‘lossy’ codecs come into play. They’re much more complex than lossless codecs, and we’ll examine them after we’ve looked at the basics of compression.

Slice and dice
Compression in general exploits patterns that exist in data sequences. If lengthy patterns can be replaced with a more concise placeholder, the sequence will become smaller without any information being lost.

If you’re thinking of ZIP and RAR files right now, you’re on the right track. Conceptually, lossless video codecs resemble a RAR archiver designed for video. There’s a hard limit on how much data you can remove, which is governed by the laws of information entropy. To sidestep these hard mathematical limits, we have to be prepared to sacrifice some data first.
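To see the placeholder idea at its most naked, here’s a toy run-length encoder in Python. It bears no resemblance to the modelling a real archiver or lossless codec does – treat it as our illustration of the principle only:

# Toy lossless compression: replace runs of repeated values with
# (value, count) placeholders, and expand them back on decode.
def rle_encode(data):
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))      # one placeholder for the whole run
        i = j
    return out

def rle_decode(pairs):
    return [value for value, count in pairs for _ in range(count)]

row = [255] * 12 + [0] * 3 + [255] * 9    # one row of a mostly-white frame
packed = rle_encode(row)                  # [(255, 12), (0, 3), (255, 9)]
assert rle_decode(packed) == row          # lossless: the original comes back exactly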

Lossy codecs not only use straight compression, they exploit the way we perceive video and the way it is constructed. They predict what will happen from one frame to the next, deliberately leave data out, and later take a mathematical guess at what to fill the blank spaces with. It sounds scary, but in practice all they do is throw away information from the picture that is hard for your eyes to perceive and easy for your brain to miss.

What’s in a frame?
At the most basic level, a video is just a series of images (in this case, frames) displayed one after the other at a constant rate. The rate determines the look and smoothness of the video: 24fps for a movie, 25fps for PAL and 29.97fps for NTSC.

Like a bitmap, every frame of video is made up of a grid of pixels arranged along the X and Y axes. Think of a video as thousands of frames positioned sequentially down the Z axis that are displayed one at a time, every 1/24, 1/25 or 1/30 of a second. Video sequences differ slightly from computer images as the pixels aren’t made up of RGB values – they are stored in a YCrCb colour space. This meshes a full resolution greyscale (luma, or Y) layer with two layers of colour (chroma, or the red component Cr and the blue component Cb).

Generally, video contains a full resolution luma image and half resolution chroma images, which are scaled up and layered over the full resolution luma image. This effectively colours in four pixels of a detailed greyscale image with one pixel of colour. The technique is called chroma subsampling, and it works because our eyes are much more sensitive to the brightness of a signal than its colour. This little biological quirk not only came in handy when colour was added to the first black and white TV signals, but it’s also used to reduce the bandwidth of a video stream.
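Here’s roughly what 4:2:0-style subsampling looks like in code – a numpy sketch of ours, not a routine lifted from any codec:

import numpy as np

# 4:2:0-style chroma subsampling: luma stays at full resolution, while each
# chroma plane is averaged down to half resolution in both axes, so four
# luma pixels end up sharing one colour sample.
def subsample_420(chroma):                # chroma: (H, W) plane, H and W even
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(small):                      # scale back up for display
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

cb = np.random.randint(0, 256, (1080, 1920)).astype(float)
cb_small = subsample_420(cb)              # 540 x 960: a quarter of the samples
print(cb_small.size / cb.size)            # 0.25 -- so Y + Cb/4 + Cr/4 is half the data of 4:4:4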

All MPEG formats use 4:2:0 chroma subsampling to halve the video bandwidth of a high quality master source. Not only is it very hard to tell the difference between a half bandwidth 4:2:0 signal and a full bandwidth 4:4:4 signal, but material encoded with any MPEG codec (which encompasses most of what we watch) is subsampled at 4:2:0 anyway. You literally won’t know what you’re missing unless you see a film shot and projected digitally with fearsome gear like the Arriflex D20 and Sony’s CineAlta projector.

And yes, we’re purposely sweeping the whole film thing under the rug.

The Moving Picture Experts Group, or MPEG, is easily the biggest player in the lossy video codec game. It’s been around for the last 18 years, has a lot of imitators and a legacy of codecs used in all facets of the industry. MPEG-1 drove VCDs. MPEG-2 drove DVDs. If it hasn’t already, MP3 should win the award for the most publicly recognisable IT acronym in the world.

The MPEG-4 specification is more of a container standard than a codec. It comprises 23 parts, not all of which are dedicated to video. Part 2 is the standard video codec – the same compression scheme that both DivX and XviD are based on. Part 10 is a more advanced video codec – known formally as H.264/MPEG-4 AVC. H.264 is one of the three codecs that drive both HD-DVD and Blu-ray.

Aside from throwing away half the colour information from a high quality master to save space, codecs (MPEG-based and others) use other techniques to approximate data from a video file before compressing it. Here are the most common techniques:

Spatial compression
Spatial compression reduces the size of a single frame using a technique similar to JPEG still image compression. The savings on their own aren’t great, which mostly relegates pure spatial compression to the domain of lossless codecs. It still serves a fairly major purpose in the process of lossy encoding though, as you’re about to find out.

Temporal compression
Apart from some epilepsy-inducing Japanese music clips, all video is continuous. From one frame to the next, most of the detail stays the same: Think of the background, the walls or, if he’s been cast, Keanu Reeves’ expression. Why waste precious storage space to store tens of thousands of images of Keanu Reeves if he looks exactly the same in 90 percent of them?

This is exactly what temporal compression does. A keyframe (known in the MPEG world as an I-frame) that contains all the important data is drawn, and in the following frames whatever doesn’t change is left out and filled in with data from the keyframe. Temporal compression literally means compression in time (as opposed to spatial compression, which is literally compression in space) and it can slice away massive amounts of data.

A keyframe is a reference point in a movie: a spatially compressed image that subsequent frames use to fill in the parts of the picture that stay the same from frame to frame, letting those parts be replicated instead of repeatedly encoded. As such, the best position for a keyframe is where the sequence suddenly changes, such as a cut between a dim indoor game of cards and an outdoor shot of a car about to run into the house where said game is being played.
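In toy form, the whole keyframe-plus-deltas idea fits in a few lines of Python. Our sketch works on individual pixels where a real codec works on blocks, but the shape of it is the same:

import numpy as np

# Toy temporal compression: store the keyframe (I-frame) whole, then for each
# following frame store only the pixels that changed since the previous one.
def encode_gop(frames):
    keyframe, deltas, prev = frames[0].copy(), [], frames[0]
    for frame in frames[1:]:
        changed = np.flatnonzero(frame != prev)          # what moved?
        deltas.append((changed, frame.flat[changed]))    # everything else is left out
        prev = frame
    return keyframe, deltas

def decode_gop(keyframe, deltas):
    frames = [keyframe.copy()]
    for idx, vals in deltas:
        nxt = frames[-1].copy()       # start from the previous frame...
        nxt.flat[idx] = vals          # ...and patch in only what changed
        frames.append(nxt)
    return frames

clip = [np.zeros((4, 4), int) for _ in range(3)]
clip[1][0, 0] = 9                     # one pixel changes in the second frame
key, deltas = encode_gop(clip)
assert all((a == b).all() for a, b in zip(decode_gop(key, deltas), clip))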

Motion compensation
Temporal compression has some limits, in that it doesn’t deal with camera pans very well. When an object moves from one location in a frame to another, the scene has to be updated even if the element itself doesn’t change. Motion compensation – part of MPEG from the start, and considerably refined in MPEG-4 with global and quarter-pixel variants – uses vectors to push otherwise unchanged elements around a frame in time instead of redrawing them.

Motion compensation and temporal compression are the keys to reducing data, and the reason why when you jump to a point in a video on an old machine it takes a second to play. The codec has to render the nearest keyframe first, then recreate all the frames between it and the one you requested.
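A bare-bones version of the block matching an encoder performs might look like the following – our sketch, brute-force where real encoders use much smarter search patterns:

import numpy as np

# Exhaustive block matching: slide a block over a small window of the
# previous frame and keep the offset with the lowest sum of absolute
# differences (SAD). The winning vector plus a (hopefully tiny) residual
# gets stored instead of the block itself.
def find_motion_vector(prev, cur, bx, by, bs=8, search=7):
    block = cur[by:by+bs, bx:bx+bs].astype(int)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                continue                  # candidate block falls off the frame
            sad = np.abs(block - prev[y:y+bs, x:x+bs].astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dx, dy)
    dx, dy = best_vec
    residual = block - prev[by+dy:by+dy+bs, bx+dx:bx+dx+bs].astype(int)
    return best_vec, residual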

Discrete Cosine Transformation
Although you’ll need a degree in computer science to fully understand it, the DCT is the heart of a lossy encoder. In short, it converts a block of pixels into a set of frequency coefficients. During an encode, those coefficients are divided by quantisation values and rounded, and the leftover precision is thrown away. When it comes time to decode, they are multiplied back up, resulting in values that closely resemble the original.

The technique works better on bright areas than dark ones, which is why compressed blacks don’t always look as smooth as their non-compressed counterparts, although codecs can control this.
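For the brave, here’s a compact numpy sketch of that transform-and-quantise round trip. The quantisation matrix Q is made up for illustration – it isn’t taken from any standard:

import numpy as np

# 2D DCT on an 8x8 block, quantise, then reverse the process. The rounding
# step in the middle is where information is actually thrown away.
N = 8
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
D = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[0] = np.sqrt(1 / N)                  # orthonormal DCT-II basis matrix

Q = 1 + 4.0 * np.add.outer(np.arange(N), np.arange(N))   # made-up matrix: coarser at high frequencies

block = np.random.randint(0, 256, (N, N)).astype(float)
coeffs = D @ block @ D.T               # encode: pixels -> frequency coefficients
quantised = np.round(coeffs / Q)       # lossy: divide and round
restored = D.T @ (quantised * Q) @ D   # decode: rescale and invert
print(np.abs(block - restored).mean()) # small, but not zero -- 'closely resembles'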

Macroblocks
It’s inefficient, especially when video is stored in a YCrCb colour space, to refer to each pixel individually. Macroblocks are groups of pixels – 16 x 16 in most MPEG codecs, themselves divided into 8 x 8 blocks – that have their values converted to a formula. Because neighbouring pixels normally look very similar, the formula stores the average brightness of the block and how each pixel changes relative to that average.
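The averaging idea in miniature (our illustration, not any codec’s actual formula):

import numpy as np

# Describe a block as its average brightness plus small per-pixel deviations,
# rather than 64 independent values. On smooth image areas the deviations are
# tiny, which makes them very cheap to store.
block = np.linspace(100, 120, 64).reshape(8, 8)   # a smooth patch of picture
mean = block.mean()
residual = block - mean
assert np.allclose(mean + residual, block)        # nothing lost in the re-description
print(round(mean, 1), float(residual.min()), float(residual.max()))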

The quantisation matrix
Here’s another tricky concept deeply rooted in hardcore mathematics. Think of a quantisation matrix as a palette of values that controls how aggressively each frequency coefficient in a block is rounded off when pixels are converted to a formula (the role played by Q in the sketch above). Modifying it changes which picture information is thrown away.

Multipass encoding & variable bitrates
Thanks to the world of audio codecs, you should already be familiar with the concept of variable bitrates. In short, they let you open the bandwidth taps during the more visually intensive scenes and use less bandwidth for the more placid scenes.

Multipass encoding is an extension of this; it lets the codec plot where to position keyframes and how to allocate its bandwidth resources as the footage progresses.
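Conceptually, the second pass is just proportional budgeting based on what the first pass measured. A toy Python sketch with made-up per-scene activity scores:

# Toy two-pass allocation: pass one scores how 'busy' each scene is, pass two
# splits a fixed bit budget in proportion, so intensive scenes get more
# bandwidth and placid ones get less while the average stays on target.
def first_pass_shares(activity):
    total = sum(activity)
    return [a / total for a in activity]

def second_pass_bitrates(shares, avg_kbps):
    return [round(s * avg_kbps * len(shares)) for s in shares]

activity = [2.0, 8.0, 3.0, 11.0]              # hypothetical per-scene motion scores
print(second_pass_bitrates(first_pass_shares(activity), avg_kbps=1500))
# [500, 2000, 750, 2750] -- averages out to the 1500Kb/s target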

New school

x264
Open source implementations of industry standards are a great tradition, and x264 is the result of the video industry’s new wonder codec, H.264, being given the treatment. H.264 is a joint effort between the video arm of the telecommunications standardisation sector (think video-conferencing) and the Moving Picture Experts Group, and it’s encapsulated in the behemoth that is the MPEG-4 standard. It’s very different to standard MPEG-4 Part 2 – much more processor intensive, but far more efficient – which is why it’s used with HD-DVD and Blu-ray, some hard drive-based HD cameras and, perhaps most importantly, broadcasting.

Like H.264, x264’s strength lies in the way it treats macroblocks. It can work with standard 8 x 8 pixel blocks as well as 4 x 4 and rectangular partitions if a frame is detailed enough to warrant it. It also supports cut detection for better I-frame placement and motion compensation. If you’re game, you can even toy with custom quantisation matrices. x264’s roots may be deeply entrenched in H.264’s heart, but its slight differences make it unsuitable for broadcast. We included it in our test because it’s better suited to PC-based archival and we already have a broadcast codec in the roundup.

WMV9
We couldn’t let Windows Media 9 slip under the radar for a few reasons. Its SMPTE-approved identical twin VC-1 is driving the images on HD-DVD and Blu-ray, and it uses the same fundamental coding mathematics (DCT) as MPEG without, er, ‘borrowing’ MPEG theory. One of its own techniques uses what are known as zig-zag tables to reorder the data into one of 13 preset patterns, whichever yields the best compression.

WMV and VC-1 aren’t as demanding as x264 and H.264, but in comparison to the other codecs available, they’re both highly asymmetrical – they take a lot more time to encode something than to decode it. They also need a lot of processing power to draw a series of frames based on the encoded data. This is why, when H.264 found its way onto the iPod in an effort to reduce the video file size but maintain the quality level, a dedicated decoder chip found its way into the circuitry to tackle the decoding.

Old school

DivX
Originally created from the ashes of an early build of a Microsoft MPEG-4 based codec, DivX has evolved into a container format and has developed a reasonably friendly user interface. It’s maintained in-house by DivX Networks, a company that earns a multi-million dollar crust by licensing out its codec for use in DVD players and providing a high quality alternative to YouTube.

XviD
Before DivX went commercial, it launched Project Mayo, an open(ish) source multimedia project. It was eventually scrapped, at which point some of the contributors to Project Mayo took their work and retooled it into XviD. It too is based on MPEG-4.

XviD uses global motion compensation to handle camera tilts and pans as well as quarter-pixel block motion compensation for movement of fine detail. It works best on two-pass encoding and is quite refined thanks to its popularity with users and project contributors.

Really old school

Cinepak
The geriatric of the bunch, Cinepak arrived on the scene in 1992, just after the dawn of QuickTime. Cinepak was the first non-Apple codec ever added to Apple’s multimedia framework, and in 1993 went on to be the first non-Microsoft codec to be added to Video for Windows – which predated DirectX and DirectShow. It was also responsible for FMV in early games, both on PCs and consoles like the Sega Saturn.

Unlike most DCT-based codecs, Cinepak uses the simpler vector quantisation combined with motion keyframes to reproduce blocky images, albeit with processor requirements from the early ‘90s. It works by dividing the image into sections and assigning each its own 256-colour palette. These sections are then subdivided into blocks of 4 x 4 pixels, which instead of being assigned colours are given references to a section’s colour palette and told how to assign colour to the pixels in the block.

Originally, it provided 320 x 240 video at bitrates within the reach of 1x CD-ROM drives, but has been improved since then. Although we’ve included it in our tests for reference, you’re crazy if you use it for anything serious.

It’s been well and truly eclipsed.

To The Labs!
We picked a few choice DVDs from our collection that we felt would provide the codecs with a good spectrum of different material to work with. We cropped out 40 frames, exported them as an uncompressed RGB sequence, then exported this with VirtualDubMod into different codecs with different settings. And we did this over and over and over again. We used the default codec settings, but tweaked any choices between quality and render time to maximum quality, as well as setting all the codecs to two-pass variable bitrate with a specified target.

We wound up with three sequences encoded at 300, 700, 1500 and 3000Kb/s in five different codecs. DivX and XviD were chosen because the similarities in their lineage allow us to see the work of the open source movement pitted against proprietary software. Windows Media 9 (WMV9) is almost exactly the same as VC-1 – Microsoft’s HD-DVD/Blu-ray codec – only with more options. It’s not based on an MPEG standard either, so in it went. x264 (H.264’s open source equivalent) was thrown into the mix too, as it represents the new wave of computer-based open source codecs. Cinepak was added just for laughs and to see what a difference 15 years makes in codec land.

We dropped every encoded sequence back into VirtualDub and picked one frame that we felt demonstrated the encoder’s prowess, checked that none of the sequences used that particular frame as a keyframe and then exported the frame as an image. After all, we wanted to see what the encoders were doing, not what they were relying on.

One small caveat: The WMV encodes weren't done with VirtualDubMod; they were done with Adobe Premiere Pro 2. We also used Premiere to export the WMV frames used in the comparison tables.

Author's note: Good news everyone! I've just invented a lossless entropy encoded container into which I've put all the sample images I used over the course of writing this article. Although in reality I didn't invent it; it's just a *.RAR file. You can use it when you need a closer look at any of the full images behind the swatches that were originally printed in issue 77 of Atomic. Off you go!

Futurama
Most animation is quite simple. To cut down on the time it takes to draw an animation, movement generally changes only every two frames. Animated frames contain large expanses of solid colour and constant backgrounds, but can also contain sections of complex CGI, like the doctor’s head in this example.

XviD handled backgrounds very well, but had a problem with edge definition at lower bitrates. It overcame these problems at 1500Kb/s, but struggled with gradients until 3000Kb/s, where it looked mostly flawless.

DivX was blockier than we were expecting when it was faced with high motion and low bitrates, but settled down quickly when given real world bandwidth to play with.

x264 produced total rubbish at 300Kb/s, but the jump in quality it produced when given 700Kb/s was astounding. Its variable macroblock size handled edges and the crazy outline around Fry better than any of the other codecs. Strangely, at high bandwidth, lines seemed soft and almost blurry – but this was masked purely by the colours and style of the animation.

It may not look like it from the stills, but WMV performed incredibly well at low bitrates, reproducing gradients at the cost of line definition.

The blockiness that we had come to expect at low bitrates wasn’t as pronounced as in the other codecs, and like x264, it too produced a soft encode.

Firefly
Many codecs don’t do dark footage justice because of the way they approximate colour values. You can see this in low bitrate YouTube videos, where large black gradients clump together. We also needed something to test close-ups to see how the codecs dealt with detail in a human face, and a bit of fire (on screen, anyway) never hurt anybody.

XviD crunched a lot of the details in the blacks out, especially at low bitrates. At 700Kb/s, ugly gradient smearing was still visible when all the other codecs had dealt with it. Skin detail was still pixelated at 3000Kb/s, but the rest of the image (especially outlines) looked great.

DivX fared better than XviD overall, showing less blockiness and better gradients, but still lacked the skin detail that we’d come to expect from it. From experience, this is less of a problem in brighter scenes.

x264 didn’t fare too well at low bitrates, but stepping up to 700Kb/s corrected most of its problems except for detailed skin gradients and low lighting, although it did render the torch nicely. At 3000Kb/s, all is forgiven – it looks the closest to the original.

WMV hid its low bitrate compression artefacts under softness. The reflections of the fire from the side of the doctor’s face look too static at low bitrates, but this gets better as we ramp up the bitrate. Not as good as x264, but better than XviD/DivX.

Ultra fine detail, such as the curly hairs on the man’s head (we’ve dubbed him Patoli), shows the fundamental ways in which codecs deal with information. DivX smooths out the grain of the film, then blocks information together. XviD keeps the grain, but the macroblocks become speckled, which lowers the detail. x264 tries to approximate how the data clumps together and does a reasonably good job of injecting information back into these areas. WMV uses gradients to give the illusion of depth. And Cinepak, as expected, remains a speckled mess.

Troy
This scene is a real killer. At low bitrates, the people walking around on the sand turned into sprites drifting across the frame. A lot of the poor results have to do with the source – an MPEG-2 stream from a DVD that does produce fine details, but struggles to do it well.

DivX was awful at low bitrates, only showing fluid movement in the sails of the ship. At 700Kb/s only a bit of finely detailed movement was visible, but it only became remotely watchable at 1500Kb/s. It had its act together at 3000Kb/s but we still saw blocky edges around small details.

Although XviD smeared less than DivX at low bitrates and did contain some fine movement, it still lacked too much detail for our liking.

Only under x264 did the people seem to be moving. For a laugh, we encoded it at 150Kb/s with x264 and still saw movement, but sections of the frame (most of which were in the distance) were completely blurred out. At high bitrates, it won out, but only just.

WMV didn’t provide the kind of detail at low bitrates that x264 did, but at the top end, details in the shore that x264 had blurred out were visible.

It didn’t render movement in the distance as well as x264, but did an admirable job.

Examine the frames closely and you can see that at low bitrates, all the codecs (bar Cinepak) have simply blurred out the ships in the distance.

This is the best visual representation of what codecs do. In the words of Tyler Durden, they have the ability to let that which does not matter truly slide.

Conclusion
We’ve covered a lot here, but video technology is both a science and a black art beyond the scope of this Head to head – and one best approached with a degree in computer science. If you’re simply looking for a definitive answer as to which encoder gives the best quality, you won’t find it here – or anywhere else for that matter. There is a vast range of optimisations you can specify during the encoding process that can vary the results dramatically, which is why we left all the options at their defaults.

Codecs work differently with the mathematics behind a certain visual style, and visual styles are as varied as the rows of tapes in the BBC’s archive are long. On top of this, the best encodes use the vast array of options available in a codec to fine tune its encoding mechanics and put it in harmony with the style of the source material.

So what you’ve read here is a primer, and what you’ve seen a guideline. If you see a characteristic you like from our tests, investigate. Run your own tests with the footage you normally work with. Become one with the options.

That is the way of the codec.
http://www.atomicmpc.com.au/article....=14&CIID=82898





Netflix Offers $1 Million Prize for a More Accurate System to Find What Online Movies Customers Want
Katie Hafner

Sometimes a good idea becomes a great one after it is set loose.

Last October, Netflix, the online movie rental service, announced that it would award $1 million to the first person or team who can devise a system that is 10 percent more accurate than the company's current system for recommending movies that customers would like.

About 18,000 teams from more than 150 countries - using ideas from machine learning, neural networks, collaborative filtering and data mining - have submitted more than 12,000 sets of guesses. And the improvement over Netflix's rating system now stands at 7.42 percent.

The competition is "three-quarters of the way there in three-quarters of a year," said Reed Hastings, the chairman and chief executive of Netflix, based in Los Gatos, California.

The race is very close, with no consistent front-runner. Each contestant is given a set of data from which three million predictions are made about how certain users rated certain movies. Netflix compares that list with the actual ratings and generates a score.
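The contest's published yardstick is root mean squared error, or RMSE - lower is better. A small Python sketch with made-up ratings and a hypothetical baseline (not Netflix's actual figures) shows how a score and an improvement percentage fall out:

import math

# RMSE between predicted and actual 1-to-5 star ratings, plus the percentage
# improvement over a baseline score. All numbers here are invented.
def rmse(predicted, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

predicted = [3.8, 2.1, 4.6, 3.0, 1.9]
actual = [4, 2, 5, 3, 2]
score = rmse(predicted, actual)
baseline = 0.95                        # hypothetical baseline RMSE
print(f"RMSE {score:.4f}, improvement {100 * (baseline - score) / baseline:.1f}%")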

For several months, a team from the University of Toronto was in the lead, but a group from the Budapest University of Technology and Economics, calling themselves Team Gravity, surpassed the Canadians in January and remains in front.

Jim Bennett, vice president of recommendation systems at Netflix, said he knew little about the Hungarians but noted that they have a knack for pulling ahead.

"These guys, especially that team, they're deep into it," Bennett said. "Toronto would get slightly ahead of Gravity, and within hours, Gravity would make the next submission that would be just beyond them."

The entries have sometimes arrived when it was the middle of the night in Hungary, Bennett said.

Domonkos Tikk, a data mining expert who is a senior researcher at the university in Budapest, leads Team Gravity. He said that since October, his team, composed of three Ph.D. candidates and himself, has spent eight hours a day, seven days a week on the problem.

"One of the reasons of our current leading position is that we keep on trying to implement new tricks and ideas, and a reasonable portion of them works," he said.

The contest has also generated several academic papers, Bennett said. It has turned out to be more exciting than officials expected.

"It sits for the longest time and you think, 'Gee, maybe it's over,' " Bennett said. "And then - bam - someone takes it to the next step."

If no one wins within a year, Netflix will award $50,000 to the team that makes the most progress, and it will award the same amount each year until someone reaches the goal.
http://www.iht.com/articles/2007/06/...ss/netflix.php





Sony Cuts Price on New Blu-Ray Player
Peter Svensson

With dominance of the market for high-definition movie discs still up in the air, Sony said Monday it is including a small surprise with the new Blu-ray disc player it is shipping this week: a price tag $100 lower than previously announced.

When Sony announced the BDP-S300 player in February, it put the price at $599, but it has now set a list price at $499.

That means the new player costs half of what the company's first Blu-ray player cost when it launched just six months ago — probably one of the fastest price declines in the consumer electronics industry. The new player has essentially the same capabilities as the older BDP-S1 but is smaller.

The price cut is due to falling production costs and the growing demand for Blu-ray products, according to Chris Fawcett, vice president of Sony Electronics' home products division.

Sony has been undersold in the market for high-definition disc players by Toshiba, which created the rival HD DVD format. Its players are now selling for less than $300, 14 months after Toshiba's first player appeared in U.S. stores.

Neither Blu-ray nor HD DVD players have caught on strongly with consumers, who have been waiting for the market to settle on one of the formats. But dropping prices for players and HDTV sets in more homes mean a big showdown between the discs may be looming this holiday season.

Hollywood studios are split on the issue, but Blu-ray has the strongest support.

Most people buying Blu-ray discs are apparently buying them for their Sony PlayStation 3 game consoles. The cheapest version of the console costs $499, but its game-oriented wireless controller and relatively loud fan make it a less than ideal movie player.
http://www.usatoday.com/tech/product...rice-cut_N.htm





As Music Labels Struggle, Bands Thrive in Games
Lisa Baertlein

It is a dark time for record labels and mainstream radio, but the people who pick music for video games say there has never been a better time to be an aspiring rock star.

"There are more opportunities than ever before. I would much rather be a young band right now than 10 years ago," said Steve Schnur, referring to a time when record companies and radio station owners held the keys to what got heard.

The worldwide executive of music at Electronic Arts Inc., which is the biggest video game publisher, put a once unknown Southern California band called Avenged Sevenfold in multiple games including EA's "Need for Speed: Most Wanted" racing game and its perennially popular "Madden" football game, which is considered prime real estate.

The band, also known as A7X, has since gotten a Warner Bros Records contract and its songs are now familiar to millions of gamers.

Scottish indie rock band Franz Ferdinand crossed the pond, winning U.S. fans after its music was in games like "Madden NFL 2005," soccer game "FIFA 2005" and racing title "Burnout 3: Takedown," said Schnur, who also led the music selection in "NBA Live 2003" -- the only video game soundtrack to go platinum.

"We're a new medium that delivers music in a new and interesting way," said Alex Hackford, artist and repertoire manager for Sony Computer Entertainment America.

Hackford has worked with bands in all stages of development, including Stab the Matador, a young band from upstate New York.

He put them in baseball game "MLB 06: The Show." From there, he said, the band got a booking agent and a national tour.

"You have almost a completely level playing field," said Hackford.

These days, artists market themselves on the Internet from their bedrooms or depend on a Madison Avenue marketing firm to get the word out.

"If you're entrepreneurial, you can do it yourself," Hackford said. "It's easy to give over control of your career to a multinational corporation and blame somebody when it doesn't go right. Really driven people aren't going to cede control," he said.

Music-based games such as Konami's various karaoke titles, Activision Inc.'s "Guitar Hero" and Sony's "SingStar" are also an outlet for established groups from the Rolling Stones and Queen to Lynyrd Skynyrd and Deep Purple.

Crooner Frank Sinatra, country star Johnny Cash and the Doors were headliners in Activision's "Tony Hawk Underground 2."

"It really underscores the fact that there is no longer a magic bullet that sells a record," said Celia Hirschman, founder of Downtown Marketing, a music marketing consulting company in San Francisco.

Video games are a perfect way for consumers to discover new music and for bands, especially those of the post-modern punk, hip hop, funk and heavy metal variety, to reach their typically rabid fan base, she said.

"The music fit with the lifestyle of what they were selling in the game. It was a no-brainer," Hirschman said.

Nick Beard, bassist for Circa Survive, said having the band's music appear in a video game would be like fulfilling a childhood dream.

"I've been playing video games since I was 5. It would just be sweet," Beard said.
http://www.reuters.com/article/techn...29551120070607





Putting the We Back in Wii
Martin Fackler

If there is a secret to the smash success of Nintendo’s Wii video game console, it may be this: even the creative loner can benefit from having friends.

Nintendo is known for turning out hits with memorable characters like Donkey Kong and the Super Mario Bros., but it has had a reputation for cold-shouldering game software developers because it preferred to make both its hardware and software internally.

The company, based in Kyoto, Japan, certainly produced innovative designs like the GameCube or the touch-screen on the portable Nintendo DS, but it was perennially outclassed and outsold by the more powerful Sony game machines. Sony’s PlayStation 2 outsold the GameCube six to one.

Contrast that with the success of the Wii. The Wii and Sony’s technology-packed PlayStation 3 went on sale in the United States in November, a year after Microsoft rolled out its Xbox 360. As of the end of April, Nintendo had sold 2.5 million Wii consoles in the United States, almost double PlayStation 3’s sales of 1.3 million and closing in on the Xbox 360’s total of 5.4 million, according to the NPD Group, a market research firm.

What changed? The secretive company is coming out of its shell. It has made a concerted effort to woo other makers of game software as part of a broader change in strategy to dominate the newest generation of video game consoles.

The new Nintendo surprised employees at the software maker Namco Bandai Games when, during a routine meeting at Namco Bandai’s Tokyo headquarters a year and a half ago, Nintendo’s usually aloof executives made a sudden appeal for their support.

The Nintendo group had come to demonstrate a prototype of the Wii, which had not then been released. They handed Namco Bandai employees the unique wand-like controllers and, as the developers tested a fly fishing game, the Nintendo team urged them to build game software for the console, listing reasons why Wii would be a chance for both companies to make money.

“I had not seen that attitude from them before,” said Namco Bandai’s chief operating officer, Shin Unozawa, who was at the meeting. “Nintendo was suddenly reaching out to independent developers.”

With its new approach, Nintendo hopes to avoid the disappointments of its previous home game console, GameCube, which placed a distant third in the United States against Sony’s PlayStation 2 and the Xbox of Microsoft, say analysts and game developers. It also promises to change the famously secretive corporate culture of Nintendo, though only slightly; Nintendo refused repeated requests for interviews with its executives.

Nintendo’s new strategy is two-pronged. Making the Wii cheaper and easier to play than its rivals attracts a broader range of new customers, including people who never bought a game machine before. With Wii, Nintendo has avoided one mistake it made with GameCube, which was competing with its wealthier rivals on expensive technology-driven performance. While Wii lacks the speed and graphics of PlayStation 3 and Xbox 360, Wii sets itself apart with novel ideas like its wireless motion-sensor controller that gets game players off the couch and jumping around.

The other thrust of Nintendo’s new strategy is to enlist software developers like Namco Bandai to write more games for Wii than they did for previous Nintendo machines. Nintendo’s hope is that this will help erase one of Sony’s biggest past advantages: the far greater number of game titles available for its machines. The more games a machine has, the industry theory holds, the more gamers want to play it.

In March, Nintendo’s star game designer, Shigeru Miyamoto, even goaded software companies to devote their top people to developing games for Wii. That is a big change from Nintendo’s previous strategy, which was to write most of its own software. Game developers say Nintendo has been more forthcoming with providing the permissions and codes needed to write games for its consoles.

“The relationship is warmer and more active than before,” said Jeff Brown, the spokesman for Electronic Arts, the giant game developer based in Redwood City, Calif. The push appears to be bringing results. Analysts say one reason for Wii’s popularity has been its larger number of available game titles. At present, there are 58 games on sale in the United States for Wii, versus 46 for PlayStation 3, according to the Sony and Nintendo Web sites. That is a huge contrast with the previous generation of game consoles: to date, PlayStation 2 has 1,467 titles, overwhelming GameCube’s 271 titles.

Nintendo, which was founded in 1889 as a maker of playing cards and made its first video game in 1975, is also opening up in other ways. In March, Nintendo announced that it had licensed its Super Mario Bros. characters to another software maker for the first time, signing a deal allowing Sega to use them in a sports game to appear ahead of the 2008 Beijing Olympics.

“Nintendo is determined not to repeat past mistakes,” said Masashi Morita, a games analyst at Okasan Securities in Tokyo. “It is taking a whole new approach with Wii.”

In Japan, the home market of Nintendo and Sony, Wii’s success has been even more striking than in the United States. Through the end of May, 2.49 million Wii consoles were sold, 50 percent more than the combined sales of the PS3 and the Xbox, according to Enterbrain, a market research firm in Tokyo. Aided by the Wii’s popularity, Nintendo’s net profit jumped 77 percent in the most recent fiscal year, ended March 31, from the year before to $1.47 billion, on sales of $8.13 billion. Its shares, traded in the United States as an American depository receipt, have doubled in the last year.

Wii’s success stands in marked contrast with Nintendo’s performance in the earlier generation of game consoles, when it shipped just 21.6 million GameCube machines worldwide compared with Sony’s total shipments of 117.9 million PlayStation 2s, according to Sony and Nintendo. Nintendo’s turnaround has been so startling that there is now talk of the end of the era of Sony’s dominance, with the more than $25 billion global game market now increasingly likely to be split more evenly among the three big rivals.

“Wii’s success shows that from now on, we are looking at a divided market,” said Yoichi Wada, chief executive of Square Enix, one of Japan’s biggest game developers. “We can no longer afford to focus our resources on writing games for just one manufacturer.”

While Square Enix made far more games for PlayStation 2 than for GameCube, it has been developing equal numbers for PlayStation 3 and Wii, the company said. It has so far announced plans to release three games for both new consoles, most of them variants of its popular Final Fantasy series.

The Wii’s simplicity is also the selling point for software makers. Mr. Wada said developers had been slower to write games for PlayStation 3 because of the greater complexity of the console’s main processor, the high-speed multi-core Cell Chip. He said PlayStation 3’s production delays had also made Sony slow to provide developers with the basic codes and software needed to write games for the new console.

At Namco Bandai, Mr. Unozawa said PlayStation 3 was so complex, with its faster speeds and more advanced graphics, that it might take 100 programmers a year to create a single game, at a cost of about $10 million. Creating a game for Wii costs only a third as much and requires only a third as many writers, he said.

But Mr. Unozawa also said Nintendo’s promotional visit in late 2005 helped make Namco Bandai more willing to write games for Wii. When he saw the Wii prototype, and then later saw a PlayStation 3 prototype, he and his colleagues decided the Wii might have more potential than the expensive and difficult-to-operate Sony machine.

“The Wii just looked more fun,” Mr. Unozawa said. “It changed our thinking.”

As a result, on the day Wii rolled out in Japan, Namco Bandai had three games ready for it, including a version of its Gundam robot combat game. By contrast, when PlayStation 3 came out, Namco Bandai had two games ready.

Mr. Unozawa said Nintendo’s more open and cooperative attitude also helped make Nintendo appear a little less intimidating. That helped lower what he and other game developers called one of the biggest hurdles in the past to creating software for Nintendo: fear of Nintendo itself. The company was so good at writing games for its consoles that few wanted to compete against it.

Now, game developers and analysts say, Nintendo is showing itself more willing to be a partner and not just a rival.

“Being cool toward other game developers didn’t work,” said Masayuki Otani, an analyst at Maruwa Securities in Tokyo. “Nintendo has learned that it pays to be friendly.”
http://www.nytimes.com/2007/06/08/te...8nintendo.html





He’s 9 Years Old and a Video-Game Circuit Star
Bruce Lambert

Victor M. De Leon III has been playing video games on the professional circuit for five years now, racking up thousands of dollars in prizes and endorsements at tournaments around the country. He has a national corporate sponsor, a publicist and a Web site, with 531 photos chronicling his career. A documentary filmmaker has been following him for months.

Victor weighs 56 pounds and likes to watch SpongeBob SquarePants at his home here on Long Island. He celebrated his 9th birthday last month with a trip to a carnival and a vanilla cake. He gets above-average marks in the third grade, where he recently drew a dragon for art class.

The appropriately named Victor — better known to cyber rivals and fans as Lil’ Poison — is thought to be the world’s youngest professional gamer; Guinness has called about listing him in its book of World Records. Starting on Friday, he is set to be among 2,500 competitors in the three-day Major League Gaming Pro Circuit Event at the Meadowlands in New Jersey, battling for titles in the Xbox game Halo 2 and prizes of up to $20,000.

Asked what he thinks about the fuss over his virtual exploits, Victor shrugged with shy indifference. Pressed, he mumbled: “I don’t know. I didn’t think about it.”

What was it like to be featured by “60 Minutes” as one of “the seven most amazing youngsters”? Victor’s only reaction was that he “looked small” on television because he had grown a bit during the lag between the taping and the broadcast.

Victor’s aptitude for video games surfaced at age 2, as he began mimicking his father’s play. Mr. De Leon, 31, who markets and sells warehouse equipment, was an early adopter himself, having started at 8 with such quaint games as Pac-Man.

But Halo is a violent, shoot-’em-up game, the type that has stirred much debate about effects on youngsters since the 1999 massacre at Columbine High School, where the killers were frequent players of the computer game Doom.

Many researchers caution that excessive gaming displaces exercise, socializing and creative play, and that video games like Halo can promote aggressive feelings and actions. “It’s not enough,” said Joanne Cantor, a professor emerita at the University of Wisconsin at Madison, for “a parent to just tell a child that the video violence is not real.”

Anna Akerman, a developmental psychologist at Adelphi University on Long Island who specializes in media and children, said that it was not that simple to disentangle cause and effect, and that some violent people might be drawn to gory games because they are already predisposed to violence.

To critics who suggest that he is ruining Victor’s childhood, Mr. De Leon shrugs like his son, and notes that when not training for a specific competition, Victor’s Xbox time averages about two hours a day. Away from the screen, he said, Victor is a typical third grader who likes to bike and swim and plays the violin.

“If they don’t live here, they don’t know what we do,” Mr. De Leon said at his home here. “I’m not overdoing it, and he’s not overdoing it.”

Although Mr. De Leon helps manage his son’s career and accompanies him to contests around the nation, he insisted he is not the digital version of the archetypal stage mother. Victor’s mother, Maribel De Leon, runs a day care center and shares custody with his father.

Before Victor enters a competition, his father said he always asks, “Do you want to do it?”

Mr. De Leon said he never pushed his son to play video games in the first place, but welcomed his interest. Mr. De Leon’s brother Gabriel, a Halo aficionado known online as Poison, also served as a mentor.

“He copied me, and he was real good,” the father recalled. “He liked to help me finish games and found glitches, which is pretty hard to do.”

Soon Victor bested his father. “He kind of passed me when he was 4,” Mr. De Leon said. “I just couldn’t keep up with him. I became sort of a coach, but every time I told him something, he’d say, ‘I know, Daddy.’”

That year, Victor joined a team with his father and two uncles at a New York Halo contest, winning fourth place. At age 5, he entered the Major League Games and ranked in its top 64 players internationally. By the time he was 7, Victor competed in Chicago against more than 550 contestants, placing second — behind Uncle Gabriel.

Besides prizes and product endorsements, Victor has a deal worth about $20,000 annually, plus expenses for trips to tournaments, from his sponsor, 1UP Network, a division of Ziff Davis Game Group, owners of gamer magazines and Web sites. Mr. De Leon declined to specify how much his son has accumulated, but said that it was almost enough to cover a private college education.

Matthew S. Bromberg, chief executive officer of Major League Gaming, one of several groups that sponsor competitions, said Victor had been a “phenomenon” for some time. But while Victor earns money for playing, he is not yet a full-fledged pro by the league’s strict definition, since he would have to rank higher — and be at least 15 years old.

Victor plays video games in the corner of the basement of his home. Dwarfed by a 60-inch video monitor, he settled into a big chair on Tuesday evening, barefoot and wearing a black jersey with Lil’ Poison emblazoned across the back. His gaze locked on the screen, his tiny thumbs jabbed away at the controller, causing virtual mayhem of gunfire, explosions, blood splatters and cyber corpses in the outer-space battle.

Mr. De Leon said he took care to use parental controls to block excessive gore and offensive language. “Our family, we’re very old fashioned,” he added, noting that they belong to the Baptist Church, “and there’s no cursing.”

The father said he also counseled Victor about what is real versus pretend. “You can’t jump off a building and come back to life, or reach out and stop a truck,” Mr. De Leon gave as examples.

Victor reaps some extras from his gaming career, like a side trip to Disneyland during a competition in Los Angeles, and a visit to a rodeo while in Texas — his favorite excursion.

Victor said he has no plans about what to do when he grows up. For now, he seems preoccupied with Star Wars toys, fried chicken, jujitsu, guitar music, basketball, his hamster and his dog, Rocky.

“I like to ride my bike every day,” he said. Asked if he ever gets bored with video games, he said, “Sometimes, yeah.”
http://www.nytimes.com/2007/06/07/nyregion/07gamer.html