|Peer to Peer The 3rd millennium technology!|
|25-02-09, 09:09 AM||#1|
Peer-To-Peer News - The Week In Review - February 28th, '09
"Netbooks prove that we finally know what PCs are actually for. Which is to say, not all that much." – Clive Thompson
"I want to highlight the creativity within the brain of a cripple, and while not attempting to hide the crippledom I want instead to filter all sob-storied sentiment from his portrait and dwell upon his life, his laughter, his vision, and his nervous normality. Can we ever see eye-to-eye on that schemed scenario?" – Christopher Nolan
"It’s not about 14-year-olds who want to listen to their favourite star from Idol, but rather those who make the material accessible. Anyone who has read the bill knows that." – Beatrice Ask
"To stop file sharing a police state is required where all internet traffic is under surveillance. Is it worth it?" – Lage Rahm
February 28th, 2009
Web Censorship Plan Heads Towards a Dead End
The Government's plan to introduce mandatory internet censorship has effectively been scuttled, following an independent senator's decision to join the Greens and Opposition in blocking any legislation required to get the scheme started.
The Opposition's communications spokesman Nick Minchin has this week obtained independent legal advice saying that if the Government is to pursue a mandatory filtering regime "legislation of some sort will almost certainly be required".
Senator Nick Xenophon previously indicated he may support a filter that blocks online gambling websites but in a phone interview today he withdrew all support, saying "the more evidence that's come out, the more questions there are on this".
The Communications Minister, Stephen Conroy, has consistently ignored advice from a host of technical experts saying the filters would slow the internet, block legitimate sites, be easily bypassed and fall short of capturing all of the nasty content available online.
Despite this, he is pushing ahead with trials of the scheme using six ISPs - Primus, Tech 2U, Webshield, OMNIconnect, Netforce and Highway 1.
But even the trials have been heavily discredited, with experts saying the lack of involvement from the three largest ISPs, Telstra, Optus and iiNet, means the trials will not provide much useful data on the effects of internet filtering in the real world.
Senator Conroy originally pitched the filters as a way to block child porn but - as ISPs, technical experts and many web users feared - the targets have been broadened significantly since then.
ACMA's secret blacklist, which will form the basis of the mandatory censorship regime, contains 1370 sites, only 674 of which relate to depictions of children under 18. A significant portion - 506 sites - would be classified R18+ and X18+, content that is legal to view but would be blocked for everyone under the proposal.
This week Senator Conroy said there was "a very strong case for blocking" other legal content that has been "refused classification". According to the classification code, this includes sites depicting drug use, crime, sex, cruelty, violence or "revolting and abhorrent phenomena" that "offend against the standards of morality".
And last month, ACMA added an anti-abortion website to its blacklist because it showed photographs of what appear to be aborted foetuses. The Government has said it was considering expanding the blacklist to 10,000 sites and beyond.
Xenophon said instead of implementing a blanket mandatory censorship regime the Government should instead put the money towards educating parents on how to supervise their kids online and tackling "pedophiles through cracking open those peer-to-peer groups".
Technical experts have said the filters proposed by the Government would do nothing to block child porn being transferred on encrypted peer-to-peer networks.
"I'm very skeptical that the Government is going down the best path on this," said Xenophon.
"I commend their intentions but I think the implementation of this could almost be counter-productive and I think the money could be better spent."
The policy has attracted opposition from online consumers, lobby groups, ISPs, network administrators, some children's welfare groups, the Opposition, the Greens, NSW Young Labor and even the conservative Liberal senator Cory Bernardi, who famously tried to censor the chef Gordon Ramsay's swearing on television.
This week, a national telephone poll of 1100 people, conducted by Galaxy and commissioned by online activist group GetUp, found that only 5 per cent of Australians want ISPs to be responsible for protecting children online and only 4 per cent want Government to have this responsibility.
A recent survey by Netspace of 10,000 of the ISP's customers found 61 per cent strongly opposed mandatory internet filtering with only 6.3 per cent strongly agreeing with the policy.
An expert report, handed to the Government last February but kept secret until December after it was uncovered by the Herald, concluded the proposed scheme was fundamentally flawed.
Even Labor has previously opposed ISP-level internet filtering when the Howard Government raised it as a method for protecting kids online.
"Unfortunately, such a short memory regarding the debate in 1999 about internet content has led the coalition to already offer support for greater censorship by actively considering proposals for unworkable, quick fixes that involve filtering the internet at the ISP level," Labor Senator Kate Lundy said in 2003.
Norwegian Minister Wants to Legalize File-Sharing
The trial of The Pirate Bay has not gone unnoticed in Sweden’s neighbor, Norway. The IFPI has ordered the largest ISP in the country to block the site, while on the other hand Norway’s Minister of Education is critical of the music industry and wants to legalize (currently illegal) file-sharing.
Earlier this week the music industry, headed by the IFPI, gave Norway’s largest Internet provider an ultimatum; block access to The Pirate Bay within 14 days or we will take you to court.
ISPs have criticized IFPI’s move, and Pirate Bay’s spokesman Peter Sunde said that “the crazy people behind IFPI should be stopped.” Bård Vegar Solhjell, Minister of Education and Research in Norway, sides with Peter in this assessment, as he advocates the legalization of file-sharing.
In a recent blog post, the minister, who is a member of the Socialist Left Party (SV), said that file-sharing is genius, and a great way to discover and access music. “You and I can get access to all the world’s music when we want. Fantastic!” Solhjell wrote on his weblog.
The music industry should embrace the Internet instead of fighting it, according to the minister. “All previous technology advances have led to fears that the older format would die. But TV did not kill radio, the Web did not kill the book, and the download is not going to kill music,” Solhjell wrote.
The music industry fears new technologies according to Norway’s Minister of Education. He believes that if radio had been invented today the record labels would have tried to shut that down too. “But just as the radio and cassettes haven’t killed music, it is a preposterous claim to say that file-sharing does,” Solhjell told VG nett.
Instead of fighting file-sharing and the Internet, the industry should be looking for a system that works for consumers and artists. Spotify is one example according to the minister, who has put ‘legal file-sharing’ on the agenda of his party.
In their new party program they describe it as follows: “SV will explore the possibility of legalizing non-commercial file-sharing of music performed by private persons, in combination with a licensing solution for payment to the licensees,” and Solhjell believes that in the end, both consumers and artists will benefit from an open market.
Swedish Parliament Passes Copyright Bill
As expected, the Riksdag passed a controversial new measure on Wednesday designed to make it easier to investigate suspected cases of illegal file sharing.
The vote came following a spirited debate between Sweden’s Minister of Justice, Beatrice Ask, and detractors of the file sharing bill, which is based on the European Union's Intellectual Property Rights Enforcement Directive (IPRED).
The government contends that the law is necessary to protect the rights of filmmakers, authors, and artists by allowing them to earn a living from their creations.
But opponents from the Left and Green Parties claim the measure is a threat to democracy and personal integrity because it gives companies and copyright holders too much power to investigate and demand compensation from individuals for alleged copyright infringement.
“To stop file sharing a police state is required where all internet traffic is under surveillance. Is it worth it?” asked the Green Party’s Lage Rahm, according to the Svenska Dagbladet (SvD) newspaper.
“We think copyright is important, but the problem is that it’s not right to criminalize people for what they do for private use.”
Rahm also voiced his concern that the new law could give rise to “blackmail situations” in which individuals feel forced to pay fines despite not breaking the law just to avoid having to be taken to court.
Ask, however, dismissed the Green Party critics, accusing them of rhetorical posturing.
“It’s easy to say that you are in favour of copyright protection, but then not follow up with any proposals whatsoever,” she said, adding that the law addresses both file sharing and piracy.
“Those who create films and music within the law will have a better chance to take action against copyright violations,” she said.
The Social Democrats supported the bill, but voiced a number of concerns about potential privacy violations.
The party suggested adding a provision for a privacy ombudsman to protect people who feel they’ve been unduly investigated, but the amendment was rejected by the government.
The Social Democrats also fear that the fast pace of development of file sharing technology will render the law obsolete by the time it comes into force on April 1st.
As an alternative, the party would like to see the launching of a copyright commission charged with conducting a wider examination of the file sharing phenomenon.
“The upcoming generation has another view of copyright protection. The legislation must be viewed as legitimate,” said Social Democratic Riksdag member Eva-Lena Jansson.
While the Left Party is opposed to the measure, the party’s Kent Persson acknowledged the government had sufficient support in the Riksdag to pass the measure.
Assuming the bill would pass, therefore, the Left Party, which believes file sharing for private use should remain legal, reluctantly supported the Social Democrats’ suggestion for a copyright commission.
In her defence of the bill, the justice minister stressed that the file sharing measure upheld the rule of law by requiring that a court decide on which information could be released for file sharing investigations.
In addition, the measure requires evidence that a particular IP-address was being used for file sharing before an investigation takes place.
The court would also make a determination regarding the relative severity of the offence.
“That means that information will primarily be released when it’s a matter of making material accessible on the internet, not when someone downloads an individual work, that is to say, copying,” said Ask.
“It’s not about 14-year-olds who want to listen to their favourite star from Idol, but rather those who make the material accessible. Anyone who has read the bill knows that.”
Pirate Bay Prosecution Hires Hypocrite Pirate Author for PR
In a desperate move to amp up her case against The Pirate Bay, prosecuting lawyer Monique Wadsted has asked authors for quotes and support in preparation for her closing arguments next Monday. Unfortunately for her, the friendly request backfired when one of the befriended authors turned out to be a fanatical Pirate Bay supporter.
Movie industry lawyer Monique Wadsted thought she’d learned from the Pirate Bay’s support gathering mechanism via social networking sites, and decided she had what it takes to pull off a similar stunt. She asked her friend and novelist Carina Rydberg for help, who then posted a call-to-arms on a Facebook group for Swedish authors.
“My friend Monique Wadsted, who represents the movie and gaming industry in the trial against The Pirate Bay, needs comments from creators and authors on these issues. She is currently preparing her closing arguments and would like to end it with a message from Swedish authors. It can’t be long - only 30 seconds - so we’re talking one-liners here.”
“Since I know that we the authors are affected by file-sharing, I think this is an excellent chance to take a stand. […] I’ll try to write something and would like to encourage members to do the same. […] Furthermore, Monique would love to see us coming to the court in person. As things look now, the whole situation is dominated by the pirates,” Carina added.
Now, perhaps this is nothing unusual. As the digital society has progressed, not all authors have recognized the marketing opportunities of file-sharing. What is interesting, however, is that Carina Rydberg’s real stance on file-sharing differs dramatically from her Facebook post. Swedish blogger projO published postings from Carina Rydberg from earlier discussions in the same Facebook group, where she confessed that she was a registered user at The Pirate Bay. So why is she a member there?
“Because I want to watch movies that can neither be rented anymore nor bought on the Internet. I want to read books that are out of print and will cost you 750 British pounds on eBay. For that reason, I want The Pirate Bay to stay. At the moment, I’m trying to download John Schlesinger’s ‘The Day of the Locust’; it takes time and it’s not even certain I’ll get a copy that is watchable - but at the same time I have no idea how to get the damn flick any other way…”
Further on in the same discussion thread, she doesn’t spare her praise:
“The Pirate Bay is an invaluable source for content that publishers, record labels and movie studios for some reason can’t or won’t offer. If someone on The Pirate Bay chose to download the book I wrote in 1989 I would have no objection to that. That novel is practically impossible to get hold of and as an author I want to be read.”
As panic over her hypocrisy increased, Carina Rydberg quickly edited the posts on Facebook to cover her tracks. However, back in November she had posted a comment on a Pirate Bay torrent repeating a similar statement - that she encouraged the making available of her out-of-print novels.
In recent posts to the authors’ Facebook group, several enraged members have demanded that the person who leaked this information from the group be expelled. However, the founder of the group stated that there are no rules about the contents having to be kept private, and that leaks like this are something you must take into account when posting to a Facebook group.
He added: “I think Carina’s post was somewhat offensive since it presupposes that all authors agree on what is obviously a subjective opinion.”
Meanwhile, Carina Rydberg has come out all guns blazing in running errands for her friend Monique. Despite her earlier support for The Pirate Bay she has forwarded the request to The Swedish Writer’s Union. “They absolutely don’t want to support the pirates,” she wrote in another Facebook post.
At the time of writing, it is unclear whether the authors were to be paid for their work, or if Monique Wadsted expected to get user generated content for free. One thing is sure, though: an anti-Pirate Bay quote from Carina and friends won’t be worth much in court now.
Layoffs Confirmed at RIAA, Estimates Approach 30...
The RIAA has now reduced a substantial amount of staff, according to information confirmed by the organization to Digital Music News. Earlier this week, sources pointed to "significant" layoffs in the range of 30, and chops across various regional, anti-piracy units.
The Washington, DC-based RIAA did not initially respond to inquiries, though representative Cara Duckworth eventually confirmed details by email early Thursday morning. "Can't confirm number but I can confirm there were layoffs," Duckworth relayed. "As you can imagine, the music community is not immune from the impact of these tough economic times." Some staffers may have also been released from headquarters.
RIAA Staff Cuts May Be Far Deeper Than Reported
There is no doubt that major staff reductions and changes are underway at the RIAA. But one seemingly knowledgeable but unconfirmed source tells Hypebot that the cuts run much deeper than previously reported.
"It is about 90-100+ people across the US and global offices - anti-piracy, coordinated IFPI/BPI etc - trust me it's a bloodbath...
(Major label heads) Hands, Morris are squeezing the ____ out of these guys after the ISP failure and a major budget cut. (The) RIAA as you know it is probably history by Tuesday of next week, a formal announcement is being drafted for drop next week.
The new group is an aggregate of IFPI + remaining pieces of BPI + RIAA - (a) new leaner, coordinated group...DC offices are getting closed except for one part of one floor on Conn. Ave., just for the address."
Hypebot has asked the RIAA to comment.
Music-Swapping Sites to be Blocked by Internet Providers
Irish internet users are to be blocked from accessing music swapping websites, as internet service providers bow to pressure from the music industry. Eircom, the country’s biggest internet provider, is to start blocking its internet customers from accessing music swapping websites.
The country’s other internet providers have been told by the Irish Recorded Music Association (Irma) to follow suit or face legal action. If the music industry is successful, Ireland will become the first European country to completely block access to hundreds of file-sharing websites.
Irma, which represents major music groups EMI, Sony-BMG, Warner and Universal, is to begin compiling lists of websites that it claims are damaging its business. It will then apply for a court order, requiring Eircom and other internet providers to block access to these sites.
Under the terms of an agreement between Eircom and Irma, Eircom will not oppose any court application, meaning that the orders will be automatically granted. A spokesman for Eircom confirmed that Eircom ‘‘will not oppose any application [Irma] may make seeking the blocking of access from their network’’ to blacklisted websites.
The rest of the country’s internet providers, which include BT, UPC (which owns NTL and Chorus) and mobile operators, have yet to respond formally. The move was disclosed in a letter sent to internet providers last week, threatening legal action if they did not comply with Irma’s demands.
Irma has identified Pirate Bay, the world’s biggest file swapping website, as the first site that it will seek to have blocked. It will then move on to ‘‘similar websites’’.
Eircom: No Pirate Bay Blockade Until We Get a Court Order
Ireland's largest ISP, Eircom, stood up for truth, justice, and the Irish way today when it said it would not in fact block The Pirate Bay unless the labels could convince a judge it was necessary. While judicial oversight is certainly important, there's less here than meets the eye, since Eircom has already agreed not to oppose such requests.
It's been a rough year for Eircom, Ireland's main telco and largest ISP. In late January, a massive storm swept Ireland and caused extensive infrastructure damage and more than 22,000 telephone faults. It took a week to recover. The next week, Eircom announced that it was settling a lawsuit brought by the music labels and would voluntarily take part in a graduated response scheme meant to disconnect repeat copyright infringers from the 'Net. Then Eircom's CEO announced his resignation, saying that he planned to return to his native Australia.
This month, news broke that Eircom would block access to The Pirate Bay from all Internet subscribers, raising a howl of protest from users who didn't want their ISP choosing what to censor, absent some legal compulsion. And the company was just picketed by protesters from the Irish Anti-War Movement over a contract it signed with an Israeli firm.
But today brings news that Eircom won't be blocking The Pirate Bay after all—well, not immediately.
IDG News Service spoke with a company rep who confirmed that Eircom would not, in fact, block The Pirate Bay absent a court order. This might sound like a contradiction of previous reports that the blocking would start soon, and without a court order, but it's a bit less than it seems.
Eircom is now saying that it won't block access to websites without a court order, but such a court order might not be especially difficult to get. European courts in countries like Denmark have already granted such requests, and as part of its settlement agreement with the music labels, Eircom has agreed not to oppose any such requests made in court. The graduated response deal is still firmly in place.
According to a report in the Sunday Business Post this weekend, "Under the terms of an agreement between Eircom and Irma [the Irish major label association], Eircom will not oppose any court application [regarding sites to block], meaning that the orders will be automatically granted."
Because all of these agreements are so far voluntary, they apply only to Eircom and put the company at a competitive disadvantage to other Irish ISPs. But the Irish major label trade group doesn't intend to let the situation remain this way; it has already sent tough letters to other Irish ISPs, asking them to follow Eircom's example or risk a court case. IRMA issued a terse statement back in January when the Eircom deal was done, laying out the basic agreement, and sure enough, extending the measure to other ISPs was on the list.
"The record companies have agreed that they will take all necessary steps to put similar agreements in place with all other IPSs [sic] in Ireland," said the group.
Eircom's agreement with the labels, then, appears designed to give the company a bit of moral high ground—"we don't censor websites, and the labels will have to haul us to court to make that change!"—even as it ensures that getting such a court order is simple.
NZ Blogs in Copyright Law Blackout Demo
An internet blackout went ahead on Monday morning as the New Zealand online community protested against a new copyright law due to come into effect on Saturday.
Protesters say Section 92A of the Copyright Amendment Act forces the removal of material from websites following any accusation of breach of copyright, even if it was not proven.
Creative Freedom Foundation spokeswoman Bronwyn Holloway-Smith told NZPA thousands of sites, from political blogs to news sites such as Scoop and even a Shortland Street discussion board, had joined the blackout.
"There's quite a list of people taking part and they're all cutting off the usual access to their sites."
She said some sites would stay blacked out until the afternoon, others all day, "and some have been doing it since last week".
On Thursday, about 200 people protested at parliament and presented United Future leader Peter Dunne with an online petition signed by 10,000 people.
The new law was passed by the then Labour government last year. The clause which has sparked protests was removed by a select committee but then restored by the minister responsible for the bill, Judith Tizard, when it returned to the House.
Those promoting the clause say it will effectively police widespread copyright abuse on the internet.
In response to criticism, Commerce Minister Simon Power last week said a code of practice being developed by the internet community would help implement the law.
Ms Holloway-Smith said that was a draft policy which, in its current state, would not change the fact that internet service providers (ISPs) would be asked to act in place of the courts.
"We would like to see the government either deferring it so they can look at amendments which would actually address the issues, or repealing it."
She said a response from parliament was expected following a cabinet meeting on Monday.
John Key Delays Copyright Law
The government may suspend S92A if no agreement is reached
In a surprise announcement this afternoon, prime minister John Key says the government will delay the implementation of the controversial Section 92a of the amended copyright law.
Computerworld spoke to technologist Nat Torkington who attended Key's press conference this afternoon at 4pm.
Torkington says the government may suspend the controversial S92A until 27 March if no agreement is reached between the parties on how to implement it.
Currently, New Zealand and representatives of overseas rights holders are negotiating with the Telecommunications Carriers Forum (TCF) on how to draft a code of practice for terminating internet access for users accused of infringing copyright.
Even if there is an agreement, Torkington says the government will monitor the first six months of the new regime and review the progress then.
InternetNZ welcomes the government's decision to defer the commencement of Section 92A and to suspend it if no agreement can be reached, says InternetNZ executive director Keith Davidson in a press release.
"New Zealanders can breathe a sigh of relief that their internet access is no longer under threat due to unproven allegations of copyright infringement," says Davidson. "Section 92A still needs to be fully repealed. It is disproportionate and unfit for purpose. But this deferral is a good start," he says.
Quebecor Opens Door to Canadian Three Strikes Policy
The CRTC's net neutrality hearing submissions have generated several comments that link net neutrality with copyright. As noted yesterday, CIRPA believes that content blocking of P2P sites should be considered. Quebecor, which owns Videotron, a leading Quebec ISP, goes even further. While ISPs in countries such as New Zealand are pushing back against "graduated response" policies that would create a three strikes and you're out policy terminating subscribers based on unproven allegations of copyright infringement, Quebecor argues that CRTC network management policies should account for the possibility of a Canadian three strikes model.
The Quebecor submission includes the following:
"Certain participants in this proceeding have already pointed to situations where controlling content can be beneficial not only for internet users but for society in general. One can think of the control of spam and viruses, or of child pornography. To this list could eventually be added copyright protection measures, possibly inspired by the graduated response models already adopted in other Western countries."
In short, Quebecor argues in favour of certain instances of ISPs controlling content, including anti-spam or child pornography blocking. Moreover, it suggests that copyright policies that build upon the graduated response policies in other countries should be added to the list of content controls that benefit society.
The Quebecor submission achieves a remarkable combination - arguing against net neutrality and for a three strikes approach that would terminate its own subscribers. That any ISP could demonstrate such hostility toward its own customers provides a clear indicator of the utter lack of broadband competition in Canada and serves as a warning that the New Zealand fight could eventually make its way here.
What Countries Made Nintendo's Rampant Piracy List This Year?
Every year, Nintendo documents the worst countries in the world in terms of rampant Nintendo game piracy, issuing a report to the U.S. Trade Representative requesting help. What countries made the list this year?
Nintendo issues the annual report to the Office of the U.S. Trade Representative as part of the Special 301 process, which asks for input from the public to underscore areas of concern. So where is piracy rampant this year? For the most part, the list contains the usual suspects. Brazil, China, Korea, Mexico, and Paraguay all return to the list this year, perhaps indicating that the government didn't do enough in those areas last year, instead focusing on less important things, like electing a new president, fighting an ongoing war, and dealing with the failing economy. Priorities, people!
So what has changed? Hong Kong, present on the list last year, has been removed completely, so apparently everything is okay there now. Good job! In its place? Spain. I freaking knew it. They've been way too quiet in Spain lately.
Check out Nintendo's country-by-country report below.
PEOPLE'S REPUBLIC OF CHINA: China continues to be the hub of production for counterfeit Nintendo video game products. The number of online shopping sites in China selling infringing Nintendo products is increasing, and help is needed by the government to curtail the growth of these illegal marketplaces. These products are sold both inside China and to the world, including our key market in the United States. Chinese customs officials must stop shipments of game copiers and other infringing products out of China, and China should work in the coming year to eliminate barriers to its enforcement laws.
REPUBLIC OF KOREA: Internet piracy in Korea continues to increase, as does the availability of devices that get around product security and allow for the play of illegal Nintendo software. A massive customs raid of 10 premises that resulted in the seizure of more than 75,000 game copiers at the beginning of 2009 is a positive sign the government is serious about enforcement. Nintendo is pleased with Korea's consistent customs seizures, and courts are now starting to hold distributors of circumvention devices, such as game copiers, accountable. The Korea-U.S. free trade agreement is important to all intellectual property rights holders.
BRAZIL: Federal anti-piracy actions are not reducing piracy in Brazil, and local enforcement efforts are weak. Efforts to prosecute for piracy are virtually nonexistent. Customs and border control agents failed to seize a single shipment of Nintendo video game products in Brazil in 2008. Internet piracy is increasing with no legal infrastructure in place to respond to the threat it poses to rights holders. High tariffs and taxes also constitute market barriers for legitimate video game products.
MEXICO: Anti-piracy actions by the Mexican government in 2008 were wholly inadequate. The Mexican government must recognize the seriousness of the piracy problem and start using existing enforcement tools. Mexico's participation in negotiating the Anti-Counterfeiting Trade Agreement is encouraging, but enforcement efforts need to move forward now. The willingness of Mexican customs and Mexican postal service workers to be trained by trademark owners was a positive sign in 2008.
SPAIN: The availability of game-copying devices in Spain is alarming. Internet sites offering game-copying devices and illegal Nintendo software are widespread and must be addressed. Nintendo asks that the Spanish government implement laws protecting the creative copyright industry and enact laws against Internet piracy. Nintendo considers education a priority in its fight against piracy in the European Union. Customs authorities play an important role in enforcing intellectual property rights, and Nintendo is seeing positive signs in this area. Nintendo is pleased about recent steps taken by the Spanish National Police against distributors of game copiers.
PARAGUAY: Corruption continues to hamper anti-piracy efforts. Nintendo's anti-piracy actions in Paraguay show that illegal goods are imported and also locally produced. Border controls are key to decreasing piracy, and the revised criminal code will increase penalties against those distributing circumvention devices in Paraguay.
U.S. Film Industry, Mexico Fight Movie Piracy
Cheap DVDs sold on streets cost Hollywood millions every year
Street vendor Amado Lopez was Hollywood's best cheerleader last week as he talked up the nominees for the Academy Awards, especially "Slumdog Millionaire," translated here as "I Want to Be a Millionaire."
"A great drama," Lopez said. "Realistic, exciting."
The movie has gotten such buzz, in fact, that all the DVD copies sold out this week at Lopez's stall in an open-air street market on the south side of Mexico City.
And therein lies the problem for the U.S. film industry. The movie had not even reached Mexican theaters yet, much less the DVD aisle, except for the pirated copies that Lopez sells for about $1.50 apiece.
Far from the glamor of Oscar night, Hollywood is waging a ground war against movie piracy such as this, even sending undercover operatives to spot camcorders in Mexican movie theaters.
Movie piracy in Mexico cost the U.S. film industry $483 million in 2005, more than any other foreign country, according to the most recent data from the Motion Picture Association of America. The industry says it identified 32 movie releases that were illegally recorded in Mexican theaters in 2008, up from 12 in 2007.
But there is a disconnect between Hollywood, which sees movie piracy as a menace tied to organized crime, and Lopez's customers, who wonder why a poor Mexican family should care about putting another dollar in Clint Eastwood's pocket.
Ricardo Vargas, a soft-spoken retiree, doesn't look like a criminal as he and his wife pick up their weekly DVD haul from Lopez's stall and he waxes poetic about "the great old movies of quality, the German cinema, Charlie Chaplin, Gina Lollobrigida."
"We couldn't see all these movies in the theater," he said. "The sodas, the parking, the candy, the popcorn. How much would it all cost?"
Mexican authorities have started to denounce piracy of all forms as an illicit activity that bankrolls more sinister aspects of organized crime. The federal attorney general's office, which goes after drug traffickers and kidnappers, also reports raids of pirated movies, perfumes and medicine nearly every week.
Federal authorities report seizing more than 35 million DVDs in the first two years of President Felipe Calderon's term, about 70 percent more than was seized in the six years of the previous term.
John Malcolm, worldwide anti-piracy director for the Motion Picture Association of America, said the Mexican government has "tried to make a serious dent" in movie piracy.
Malcolm's association is lobbying for tougher Mexican laws, including a bill in Congress that would make recording a movie in a theater a punishable crime. Currently, prosecutors must show intent to distribute the film illicitly, Malcolm said.
Also, Mexican prosecutors need to receive a piracy complaint from the rights-holder before prosecuting, a burdensome step in the process, said Malcolm, a former federal prosecutor. Malcolm also wants Mexico to give customs officials greater discretion to seize products suspected of being fakes.
The Motion Picture Association of America has even dispatched dogs trained to sniff out large caches of DVDs at airports. Lucky and Flo already have been deployed in Malaysia and Britain, while Latin America might be a future destination.
The Mexican Film and Music Protection Association, funded by the U.S. film and music industries, sends undercover operatives into Mexican movie multiplexes to find which ones attract the most illicit camcorders. They then offer tips to authorities.
Investigators can tell which DVDs were illegally recorded in Mexican theaters because each distributed film has an embedded watermark that tells in which country it is being distributed.
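The article does not describe the actual watermarking scheme, which in practice uses robust, imperceptible embedding designed to survive camcording and re-encoding. Purely as an illustrative sketch of the idea, here is a toy example in which each distributed print carries a small payload (a territory code) that can be read back from a copy; all names and values below are hypothetical:

```python
# Toy sketch of a forensic watermark: each print hides a small payload
# (here, an 8-bit territory code) that can later be read back from a
# pirated copy. Real systems use far more robust embedding; this only
# illustrates the embed/extract round trip.

TERRITORY_CODES = {  # hypothetical payload values
    0x01: "United States",
    0x02: "Mexico",
    0x03: "Canada",
}

def embed_watermark(frame_values: list[int], territory: int) -> list[int]:
    """Hide the 8-bit territory code in the least significant bits of
    the first 8 sample values (a stand-in for real embedding)."""
    marked = list(frame_values)
    for i in range(8):
        bit = (territory >> i) & 1
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(frame_values: list[int]) -> int:
    """Recover the 8-bit payload from the same positions."""
    code = 0
    for i in range(8):
        code |= (frame_values[i] & 1) << i
    return code

frame = [200, 201, 202, 203, 204, 205, 206, 207, 208, 209]
marked = embed_watermark(frame, 0x02)
print(TERRITORY_CODES[extract_watermark(marked)])  # prints "Mexico"
```

The key property investigators rely on is exactly this round trip: the payload embedded at distribution time identifies the territory when recovered from an illicit recording.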
But there would be no industry of pirated DVDs if there were no buyers.
Jaime Campos, director of the Mexican Film and Music Protection Association, said a "culture of illegality" exists in his country. His group estimates that 9 of every 10 movies sold in Mexico are pirated, a trend that discourages legitimate foreign investment.
A survey by the Alliance Against Piracy found that 75 percent of Mexicans have bought pirated DVDs, which are sold on subway cars and even in official neighborhood markets.
Hilda Castro, president of the alliance, made up of industries sick of having their products stolen, said young people must be educated that buying pirated DVDs is wrong.
Castro said her association recently helped get anti-piracy language into the civics curriculum for 5th graders in the next school year. A popular ad in movie theaters shows a youngster mocking a classmate whose father buys her pirated DVDs and hinting that he doesn't love her enough to buy the real thing.
Castro said the association is looking at future ads that might make a more direct link between DVD piracy and organized crime, which has terrorized Mexico and caused more than 6,000 deaths last year.
"The same person who would never imagine going to a mall and walking out with a DVD, not only are they not ashamed to buy a pirated product, they are proud they got a bargain," she said. "It's one of the few crimes that is viewed as an acceptable crime."
Montreal Man Convicted Under Canada's Anti-Camcording Law
MONTREAL, QUEBEC--(Marketwire - Feb. 20, 2009) - Louis Rene Hache was convicted on charges under the Criminal Code for the illegal reproduction of the film "Dan in Real Life" at a Montreal movie theatre.
Hache was sentenced to 24 months probation and will be required to complete 120 hours of community service. Under terms of the probation Hache is prohibited from entering a movie theatre, associating with anyone involved in movie piracy or owning any recording device. He is also required to forfeit the equipment used in the commission of the offence.
The judgment was issued today in Provincial Court in Montreal by Justice Lacerte Lamontagne. In handing down the sentence, Madame Justice Lacerte Lamontagne reinforced that this was not a victimless crime and that Mr. Hache's actions had caused extensive losses to the movie industry.
"We applaud the judge's ruling and we hope this sentence will send a strong message to others that camcording in theatres is a criminal activity that will not be tolerated," said Steve Covey, Deputy Director, North American Anti-Piracy Operations of the Canadian Motion Picture Distributors Association ("CMPDA"). "Before the law was enacted, law enforcement would not respond even when individuals were caught repeatedly camcording in theatres. With the new law in place, local police can now help prevent films from being stolen right off the screen."
Camcording in Canada
Camcord piracy represents the most significant threat facing film industries worldwide. A single camcord can lead to the production and distribution of millions of illegal copies and downloads around the world. Movie camcorders are often directly associated with so-called "release groups" who distribute illegal copies of movies, computer games and software over the Internet. A camcorded copy of a film can be used to produce unlimited numbers of DVDs, shipped around the world for distribution, and loaded onto the Internet triggering an avalanche of illegal downloads. Replication and distribution of illegal DVDs is highly lucrative and in many cases criminal networks use pirated DVD sales to support other kinds of criminal activity.
About The Canadian Motion Picture Distributors Association
The Canadian Motion Picture Distributors Association (CMPDA) serves as the voice and advocate of the major international distributors of movies, home entertainment and television programming in Canada. The CMPDA carries out investigations, provides support during criminal and civil litigation, and helps educate film lovers on the negative effects of piracy. The motion picture studios served by the CMPDA are: Walt Disney Studios Motion Pictures; Paramount Pictures Corporation; Sony Pictures Entertainment Inc.; Twentieth Century Fox Film Corporation; Universal City Studios LLLP; and Warner Bros. Entertainment Inc. On behalf of these studios, the CMPDA supports initiatives which further the health of the film and television industry and foster an environment of respect for creativity in Canada. A strong and vibrant production industry in Canada serves as an economic engine, with benefits to national and local economies across the country.
About Movie Piracy
Movie piracy and the trade of other counterfeit goods harms Canada's local economies, kills jobs and impacts everyone who is involved in the production and distribution of these goods, affecting a wide variety of artists, manufacturers, distributors, producers, retailers, exhibitors, employees, consumers and governments. In 2005 it was estimated that the annual Canadian consumer spending loss due to film piracy was $225 million (US) and that film piracy cost the Canadian government over $34 million (US) in lost tax revenue. As piracy increasingly becomes the province of sophisticated multi-national organizations with operations placed around the globe, it is essential that countries throughout the world understand and enforce intellectual property rights, both domestically and in co-operation with their international partners.
Oscars U.S. TV Audience Up Six Percent on 2008
The U.S. television audience for Sunday's song-and-dance Oscars rose by about six percent, lifting the glitzy film awards show from record low ratings in 2008, according to early ratings figures on Monday.
The three-and-a-half-hour broadcast on ABC saw "Slumdog Millionaire" triumph with eight Oscars in a show given a new twist by Australian host Hugh Jackman. It posted an average household rating of 23.3 in the largest U.S. TV markets, according to audience tracker Nielsen Media Research.
When national figures are released later on Monday, that is likely to translate into a healthy increase over the record-low audience of 32 million U.S. viewers who watched in 2008.
The show also attracts millions of viewers around the world.
Jackman, who plays Wolverine in the X-Men movies, starred in the 2008 film "Australia" and is an award-winning musical theater performer. He was brought in to give the 81st Oscar show a new look after years of declining TV audiences.
The traditional, joke-filled opening monologue was cut, and Jackman performed two song-and-dance routines -- one with actress Anne Hathaway and a second alongside singer Beyonce and popular young stars Zac Efron and Vanessa Hudgens.
TV critics, however, were lukewarm about the Oscar show. Tom Shales in The Washington Post said Jackman was a "versatile and energetic talent" but called his opening medley of songs on the best picture nominees "pointless and flat."
Alessandra Stanley in The New York Times said Jackman was a "shrewd, even thrifty choice for a recession-era Oscar night -- the hosting equivalent of a value meal."
Vibrant Bollywood-style performances of Oscar-nominated songs from "Slumdog Millionaire" put a lively spin on the evening, which saw the main acting awards go to British actress Kate Winslet for "The Reader" and American Sean Penn for "Milk."
Heath Ledger, a star of 2008's No. 1 box office hit "The Dark Knight," was awarded a rare posthumous Oscar as best supporting actor, and his family from Australia gave emotional acceptance speeches from the Kodak Theater podium.
Spain's Penelope Cruz took the best supporting actress award for her part in "Vicky Cristina Barcelona."
Los Angeles Times columnist Patrick Goldstein savaged the telecast. "I'm beginning to believe that saving the Oscars is a job for Iron Man or Hancock, a kick-ass superhero with the kind of unassailable powers that would allow them to radically overhaul what has become the year's stodgiest awards fest."
The Academy Awards show still ranks as the year's highest-rated entertainment spectacle on TV, and a cash cow for Walt Disney Co.'s ABC.
(Editing by Patricia Zengerle)
Broadcast TV Faces Struggle to Stay Viable
CBS, home to “60 Minutes,” the “CSI” franchise, “Two and a Half Men” and the new hit crime drama “The Mentalist,” is having a better year in prime time than any other network.
And yet, as at the other networks, profits have declined sharply at CBS.
For decades, the big three, now big four, networks all had the same game plan: spend many millions to develop and produce scripted shows aimed at a mass audience and national advertisers, with a shelf life of years or decades as reruns in syndication.
But that model, based on attracting enough ad dollars to cover the costs of shows like “Lost” and “ER,” no longer appears viable. Network dramas now cost about $3 million an hour.
The future for the networks, it seems, is more low-cost reality shows, more news and talk, and a greater effort to find new revenue streams, whether from receiving subscriber fees as cable channels do or from becoming cable networks themselves, an idea that has gained currency.
The last bastion of the big network audience is the Super Bowl and other live events like the Grammy Awards and the Academy Awards. The rub is that those have traditionally been viewed as promotional outlets for a network’s other shows, and rarely make money themselves.
Ratings over all for broadcast networks continue to decline, making it harder for them to justify their high prices for advertising. Cable channels are spending more on original shows, which bring in new viewers and dampen their appetites for buying repeats of broadcast shows.
For the networks, the crisis is twofold: cultural and financial. For viewers, the result is more low-cost reality shows, prime-time talk and news programs and sports from the institutions that once made “Hill Street Blues,” “All in the Family” and “Cheers.”
NBC’s decision to move Jay Leno to a Monday-through-Friday slot at 10 p.m. eliminates the chance of the network developing another “ER” for that hour, but it will save the network tens of millions of dollars.
The network television landscape is scattered with other examples that speak to a broken business model.
The CW, a lower-profile network owned by CBS and Time Warner, contracted out part of its prime-time schedule to an outside supplier, but shut down the deal after just three months because of low ratings and production problems. MyNetworkTV, a unit of the News Corporation, said it would essentially stop being a broadcast network and instead be a “program service,” supplying shows, some of them reruns of series like “Law and Order: Criminal Intent,” to affiliates.

The networks have already lost much of their cultural cachet to cable, which is spending more to develop original programs. For the first time, the winning drama at last year’s Emmy Awards was on basic cable: “Mad Men,” which is on AMC. (“The Sopranos” was the first cable show to receive an Emmy for best drama series, but it was shown on HBO, a premium cable channel.)
Financially, the networks are on shaky ground, partly because they rely almost solely on advertising. CBS reported that for the fourth quarter of last year, as the recession deepened, operating income in its television segment declined 40 percent, even though it was by far the most-watched network. In the second week of February, CBS had 12 of the top 20 shows, according to Nielsen Media Research.
News Corporation, which owns Fox, reported operating income of $18 million in broadcast television, compared with $245 million a year ago. And Disney’s broadcasting business had a 60 percent drop in operating income.
For years the major networks raised their ad rates, despite the shrinking audience, because they still offered advertisers a larger audience than anyone else.
“More dollars are chasing fewer eyeballs,” said Gary Carr, director of broadcast services at TargetCast tcm, a media and marketing company.
Lately, the recession has forced down the cost of prime-time commercials on network television, TargetCast said. In the fourth quarter, the average cost for a 30-second prime-time spot declined 15 percent, to about $122,000, the company said.
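As a back-of-the-envelope check of TargetCast's figures (assuming the $122,000 is the post-decline average and the drop is exactly 15 percent), the implied pre-decline average comes to roughly $143,500 per 30-second spot:

```python
# Implied pre-decline spot price, assuming $122,000 is the average
# *after* a 15% drop (figures as quoted by TargetCast).
post_decline = 122_000
decline = 0.15
implied_prior = post_decline / (1 - decline)
print(round(implied_prior))  # prints 143529
```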
But advertisers will still pay large premiums for a big audience, particularly for live events like Fox’s “American Idol,” which can command $700,000 for a 30-second spot, according to Adweek. A top network hit of several years ago, NBC’s “Friends,” brought in an estimated $450,000 per 30-second spot.
These circumstances upend the traditional business model of developing comedies and dramas that can live, quite lucratively, for years in syndication.
“Prime-time television has been so expensive,” said Tim Spengler, president of Initiative U.S.A., an agency that is part of Interpublic. “The price premium is getting out of whack, and I think you’ll see some pullback.”
Within the industry, the identity crisis is evident in the debate about the future of the business among network executives.
Jeff Zucker, the chief of NBC Universal, has been more pessimistic, saying, “broadcast television is in a time of tremendous transition, and if we don’t attempt to change the model now, we could be in danger of becoming the automobile industry or the newspaper industry.”
Recently, Robert A. Iger, the chief of the Walt Disney Company, surprised Wall Street when he acknowledged in a conference call with analysts and reporters that some of the company’s businesses were experiencing profound change as competition for people’s time increased and consumers were confronted with an abundance of choices. “This clearly has had an impact on broadcast television,” he said.
One dissenter is Leslie Moonves, the chief of CBS, who defended network television at a media conference in December, saying, “I’m here to tell you — the model ain’t broken.”
ABC had a bit of a resurgence with “Desperate Housewives” and “Lost,” but their ratings are not what they once were. And in the case of “Lost,” it is doubtful that the show would be scheduled today, because of its high cost — its two-episode pilot was said to cost more than $10 million — and its format as a serial, which does not do as well in reruns as self-contained shows.
Broadcast television, for decades an oligarchy of three networks, was once the locus for most of the nation’s shared cultural moments — almost 83 percent of households in the United States watched Elvis Presley’s appearance on “The Ed Sullivan Show” in September 1956, which is said to be the largest audience when measured by that metric. In terms of number of viewers, the final episode of “M*A*S*H,” in 1983, set the record with about 106 million viewers.
The networks have also had deep ties to local communities through affiliate and owned-and-operated stations. Along the way, they minted money.
“It was a license to steal,” said Fred Silverman, the former president and chief executive of NBC who as a programmer was behind the hit shows “The Waltons” and “All in the Family.”
In the last three months of 2008, broadcast networks lost nearly three million viewers, or about 7 percent of their total audience. Overall television viewing is up, however, and some big cable networks, like USA and TNT, are attracting new viewers.
Broadcast networks still bring in the largest audiences, but now they are facing a deep advertising recession that is hitting both the networks and their local stations. Cable networks have also been affected by the ad slump, but those businesses are propped up by subscriber fees.
“That’s why the architecture needs to change,” said Michael Nathanson, an analyst at Sanford C. Bernstein & Company.
Is Hulu Changing Its Distribution Strategy?
Earlier this week, Hulu announced it was pulling its content from TV.com and Boxee. According to sources close to the company, Hulu’s managers were motivated by very different reasons in each case. Regardless of the reasons, these announcements deliver a blow to online video. Another potential blow came yesterday when we heard how cable companies are fighting to limit online content.
Hulu has more than 100 content providers, but some don’t want their material distributed beyond Hulu’s own site. In the case of Boxee, Hulu executives asked Boxee to remove content because some content partners (possibly NBC Universal and News Corp.) didn’t want their material to appear on the service. If the film studios and TV networks also pull out, it will make it more difficult for those of us who love online content to acquire shows and films online.
It doesn’t appear that the battle is solely against online video, however. What’s interesting about this is that News Corp. and NBC Universal handed Netflix an advantage over Hulu in the streaming video market: Just as Netflix is branching out to Xbox, the Roku Player, and LG televisions, Hulu is shrinking its distribution.
Could it be that the big cable companies are pressuring TV networks and film studios to scale back the content they provide Web services? Peter Kafka from All Things Digital certainly thinks so. Although it’s purely speculation, it’s believable, especially given the financial incentive cable networks and operators have to preserve the current cable TV business model. In fact, Glenn Britt, CEO of Time Warner Cable, blamed part of the company’s $8.16 billion loss in the fourth quarter on Web video services, which have been luring customers away from cable companies.
Some people in the tech sector are speculating that Hulu might be scaling back the number of distribution sites in order to pursue a walled-garden approach. At this point, it’s hard to say, but it’s worth noting that many other outlets, including Yahoo! and MSN, continue to offer Hulu’s movies and shows. So where will Hulu go from here? And what will happen to the future of online video? These are just some of the questions that we’re left to ponder.
Vudu Offering HD Films for Download to Own
Vudu has become the first on-demand service to offer high-definition movies for download to own rather than just rent.
Under agreements with such independent film companies as Magnolia Pictures, FirstLook Studios and Kino, Vudu will begin by offering 50 HD titles, including best documentary Oscar winner "Man on Wire," for purchase.
Vudu already offered a library of more than 1,400 HD movies for rental. Vudu's new collection, being unveiled Tuesday, will be available for rental as well as purchase in both instant HD and Vudu's HDX format.
In addition to Magnolia's "Man on Wire," other titles include FirstLook's "Transsiberian" and "War, Inc."
High-definition titles purchased from Vudu can be stored on the consumer's Vudu box or in the Vudu Vault, a free online storage option for movies and TV shows that enables consumers to free up disk space while retaining access to all their purchased titles. Movies are priced from $13.99 to $23.99.
(Editing by Sheri Linden at Reuters)
The Future of Netflix is All About Streaming
Daniel A. Begun
A great debate is raging over what the future of movie distribution will look like. On one side of the debate are those who claim that physical discs like DVDs, Blu-ray, and whatever format eventually supplants Blu-ray will always deliver a viewing experience superior to anything available via streaming or on-demand content. Pundits on the other side of the debate--and this is the side that appears to be gaining the most momentum--say that as broadband's footprint continues to expand, throughput speeds continue to increase, compression technologies continue to improve, and more consumer electronics devices gain access to streaming content, the future of video distribution will increasingly depend on online access.
In an interview with Bloomberg.com, Netflix CEO Reed Hastings sided firmly with the latter camp, and it would even appear that Netflix is gearing up to move all of its eggs from the mail-distribution basket to the online streaming basket. Hastings indicated that perhaps as soon as later this year or sometime in 2010, Netflix might start offering online-streaming-only subscription plans (beyond just its current Starz plan--see below). The Bloomberg report states:
"The company's success hinges on its ability to transition to online video from DVDs, Hastings said yesterday in an interview in San Francisco. Netflix faces a challenge similar to the one AOL had as it lost subscribers who shifted from Internet service via a telephone connection to high-speed access, he said."
It is this "generational evolution" that Hastings warns can make investors wary. And in a sort of chicken-and-egg scenario, Hastings claims that in order for Netflix to be able to add more available online streaming content, Netflix needs to pay for that additional content with revenue generated from new customers. Many of these new customers will be ones who sign up in order to receive their videos via Netflix's traditional DVD-by-mail distribution. But some of these new customers will also be attracted to Netflix's offerings as the company continues to improve its "online-streaming technology for computers and [gets] its software embedded in consumer devices."
Netflix has over 10 million paying subscribers, with 718,000 of them signing up in just the fourth quarter of 2008. Netflix also expects that it "will add a record number of subscribers this quarter after doing so last period"--already having "added more than 600,000 subscribers since the beginning of the year."
As to Netflix's current library of 100,000 DVD titles available for rental-by-mail, only about 12,000 of those titles are available via online streaming--and most of the available streaming titles, unfortunately, are not recent or A-list films, despite content from CBS Television Network, Disney-ABC Television Group, and Starz Entertainment. Customers who subscribe to any of Netflix's "Unlimited" disc rental plans (which start at $8.99 per month for one disc out at a time) also have unlimited access to Netflix's Watch Instantly streaming feature. For those who are interested exclusively in Netflix's streaming options, Netflix also currently offers a "Starz Play Only" plan for $7.99 per month, which offers unlimited access to "Starz Play and live Starz Play TV channel" content, but not the CBS, Disney-ABC, or other streaming content.
Hastings didn't comment on how many customers have opted for this streaming-only option, but this plan represents the beginnings of where Netflix is headed. In fact, the Bloomberg story ends with this quote from Hastings: "We've got one singular objective, which is 'Be successful in streaming'... If we do that, that's a homerun." In order to succeed, however, Netflix will need to expand its available streaming offerings. As such, Bloomberg reports that "Netflix is seeking to make licensing deals with channels like Time Warner Inc.'s HBO and CBS Corp.'s Showtime."
The other linchpin to this success will be the availability of Netflix streaming on devices other than just computers. Netflix streaming is currently available on the Xbox 360, TiVo HD DVR, Roku Digital Video Player, and on LG and Samsung Blu-ray players; and while Netflix doesn't currently support streaming on the Sony PS3, a number of third-party solutions make it possible. Xbox 360 access has been especially successful, as Netflix reported earlier this month: "1 million Xbox LIVE Gold members have downloaded and activated the groundbreaking Xbox LIVE application from Netflix since the alliance launched last November."
While watching streaming video from Netflix is not available (yet) on any mobile devices (other than laptops), the Mobile Manager for Netflix Windows Mobile application can play Netflix video previews on Windows Mobile devices. This is pure speculation, but perhaps the ability to stream entire movies might not be far behind. And while this might be pie-in-the-sky, wishful thinking, the ability to watch Netflix streaming movies on an iPhone might just be the killer app.
Britons Prefer Amazon for Digital Media
Amazon.com Inc has established a clear lead in user preference for digital media download sites in Britain, despite a relatively late entry into the market, a survey published by Strategy Analytics said.
Amazon was the most preferred brand for music, video and games downloads, ahead of Apple's iTunes, Microsoft's MSN and eBay.
Martin Olausson, head of Digital Media Research at the research firm, said Amazon was still some distance behind Apple's iTunes in actual market share, but it was in a good position to gain share.
"Our survey results suggest that Amazon's dominance and brand strength in traditional online retailing put it in a strong position to lead the UK's fast-growing premium digital media sector in the years ahead," he said.
The survey polled 515 broadband users.
(Reporting by Tarmo Virki; Editing by Erica Billingham)
EMI, Apple Unveil iTunes Pass
EMI unveiled a new feature on iTunes Tuesday called iTunes Pass, which allows Depeche Mode fans access to the band's upcoming album and other selected goodies.
Apple chose to let EMI make the announcement for iTunes Pass, a new service that, for $18.99, will gradually release tracks from the album Sounds of the Universe, along with exclusive remixes and videos, until the middle of June. This is a separate offering from the album itself, which is scheduled to be released on April 21 and can be preordered for $9.99.
At the moment, it appears EMI and Depeche Mode are the only ones trying out iTunes Pass. This appears to be an outgrowth of Apple's decision to allow variable pricing in the iTunes Store for the first time, allowing record companies and bands to offer the digital version of a special-edition CD with more videos and songs than the regular CD for a premium price.
The Depeche Mode tracks will be DRM-free--Apple's other major change to the iTunes Store this year--and the $18.99 pass won't cost any more than buying all the contents of the pass separately would, Apple said, although it seems there will be tracks or videos available to pass holders that won't be offered to the general public.
CBS Beams ‘Star Trek’ Episodes to iPhones
CBS is taking the iPhone where no iPhone has gone before.
Today, it released an iPhone application for its TV.com site that can play full episodes of TV series, ranging from “C.S.I.” to the original “Star Trek.” While Hulu, a rival video site owned by NBC and Fox, has kept its content only for computers connected to the Web, CBS has been far more open with its own content.
All sorts of video have been on the iPhone from the beginning by way of its YouTube application and its ability to play podcasts (which now can be downloaded over a wireless connection). The application for the Joost service can play some episodes from CBS and other producers’ series. NBC tried a special Web site set up to stream episodes of a few of its series for the iPhone.
But the TV.com application appears to be the first with a lot of mainstream network content that can gain access to full episodes over both the cellular network and Wi-Fi. My test of watching Spock confront a strange cube-like object in space was perfectly acceptable in quality while I was riding on the bus in New Jersey.
As a commuter, I’ve seen a huge increase in the number of people watching video on various types of portable devices over the last year or two, particularly on their way home from work. So I wouldn’t be surprised if the TV.com service finds an audience, particularly if CBS can promote it widely.
I think this is an important and provocative development. Companies like Qualcomm have been developing special services for watching video on cellphones, like MediaFlo, using dedicated cellular bandwidth. If people can watch what they want using regular data services, it will make those extra fee services a much harder sell than they already are.
Of course, if streaming video becomes common, it may put pressure on the capacity of the wireless networks, forcing the carriers to consider how they price data service and whether they continue to offer unlimited bandwidth. But once people start using a service, it will be hard for carriers to cut it off.
Pressure will also be applied to NBC, Fox and ABC if the CBS programming becomes popular on the iPhone. As much as the networks are afraid of losing control of their distribution and cutting out their affiliate stations, they are equally afraid of falling behind rivals and losing potential audience.
Why the Japanese Hate the iPhone
Brian X. Chen
Apple's iPhone has wowed most of the globe — but not Japan, where the handset is selling so poorly it's being offered for free.
What's wrong with the iPhone, from a Japanese perspective? Almost everything: the high cost of the monthly data plans that go with it, its paucity of features, the low-quality camera, the unfashionable design and the fact that it's not Japanese.
In an effort to boost business, Japanese carrier SoftBank this week launched the "iPhone for Everybody" campaign, which gives away the 8-GB model of the iPhone 3G if customers agree to a two-year contract.
"The pricing has been completely out of whack with market reality," said Global Crown Research analyst Tero Kuittinen in regard to Apple's iPhone prices internationally. "I think they [Apple and its partners overseas] are in the process of adjusting to local conditions."
Apple's iPhone is inarguably popular elsewhere: CEO Steve Jobs announced in October that the handset drove Apple to become the third-largest mobile supplier in the world, after selling 10 million units in 2008. However, even before the iPhone 3G's July launch in Japan, analysts were predicting the handset would fail to crack the Japanese market. Japan has historically been hostile toward Western brands — including Nokia and Motorola, whose attempts to win over Japanese customers were futile.
Besides cultural opposition, Japanese citizens possess high, complex standards when it comes to cellphones. The country is famous for being ahead of its time when it comes to technology, and the iPhone just doesn't cut it. For example, Japanese handset users are extremely into video and photos — and the iPhone has neither a video camera nor multimedia text messaging. And a highlight feature many in Japan enjoy on their handset is a TV tuner, according to Kuittinen.
What else bugs the Japanese about the iPhone? The pricing plans, Kuittinen said. Japan's carrier environment is very competitive, which equates to relatively low monthly rates for handsets. The iPhone's monthly plan starts at about $60, which is too high compared to competitors, Kuittinen added.
Cellphones are also more of a fashion accessory in Japan than in the United States, according to Nobi Hayashi, a journalist and author of Steve Jobs: The Greatest Creative Director. And carrying around an iPhone in Japan would make you look pretty lame, he said.
Hayashi's cellular weapon of choice? A Panasonic P905i, a fancy cellphone that doubles as a 3-inch TV. It also features 3G, GPS, a 5.1-megapixel camera and motion sensors for Wii-style games.
"When I show this to visitors from the U.S., they're amazed," Hayashi told Wired.com. "They think there's no way anybody would want an iPhone in Japan. But that's only because I'm setting it up for them so that they can see the cool features."
Kuittinen said he's predicting Apple's next iPhone will have better photo capabilities, which could increase its odds of success in Japan. However, he said the monthly rates must be lowered as well.
Otherwise, Apple might as well say sayonara to Japan.
State to Start Charging Sales Tax on Online Digital Purchases Oct. 1
Wisconsin will collect sales taxes on Internet downloads of music, games, books, ring tones and other video entertainment - a decision that angers some who will find the 5% tax added to their credit-card bills after Oct. 1.
On Thursday, Gov. Jim Doyle signed into law a package of tax-law changes that included extending the sales tax to so-called digital downloads.
The District of Columbia and 15 states have similar laws, although none of those states borders Wisconsin.
The change will require vendors to add the tax when the product is sold and remit it to the state treasury. One of the most popular sellers of songs, CDs and other digital products, iTunes, already collects sales taxes on those sales for states that charge it, said Susan Lundgren, a spokeswoman for Apple.
It is expected to cost Wisconsin consumers about $6.7 million a year - a number that suggests it's a $134 million annual industry here. Also, national experts estimate that downloads are growing by as much as 20% a year, which means the amount of sales tax in that area will grow substantially.
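Those figures are internally consistent; here is a minimal sketch of the arithmetic, taking the article's 5% rate, $6.7 million projection and 20% growth estimate as given (treating the growth rate as flat is our simplifying assumption):

```python
# Back out the implied market size from the projected tax take,
# then project collections forward at the reported growth rate.

TAX_RATE = 0.05      # Wisconsin sales tax
ANNUAL_TAX = 6.7e6   # projected yearly collections, in dollars
GROWTH = 0.20        # estimated annual growth of downloads

market_size = ANNUAL_TAX / TAX_RATE
print(f"Implied download market: ${market_size / 1e6:.0f} million")  # $134 million

# Naive projection of tax revenue if 20% growth holds:
revenue = ANNUAL_TAX
for year in range(1, 4):
    revenue *= 1 + GROWTH
    print(f"Year {year}: ${revenue / 1e6:.2f} million in tax")
```

Three years of compounding at that rate would already push collections past $11 million a year, which is why the state expects the take to "grow substantially."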
Doyle has been fighting for the change for years. He and other state officials say it is a matter of fairness: Internet vendors shouldn't have a tax-exempt advantage over Wisconsin's brick-and-mortar retail stores.
"This is applying the sales tax in the same way to the same products," said Doyle spokesman Lee Sensenbrenner. "This change protects Main Street businesses."
Some digital downloaders don't see it that way, however.
"I don't feel very good about it," said Cyntha Hammerel, 40, of South Milwaukee, who downloads songs and CDs several times a week. "I'm not buying the product in Wisconsin."
"I am being taxed to death," said David Vogt, 46, of Milwaukee. "Where in the world does it stop?"
Once a week, Vogt downloads a movie from a large chain, which gives him the right to watch it on his computer or his big-screen TV. Each movie download costs him about $4.
A smoker, Vogt is also upset about Doyle's proposal to raise the state tax on a pack of cigarettes from $1.77 to $2.52. "Now, I'm thinking about quitting," Vogt said.
'Complex, but necessary'
Milwaukee lawyer Andrew Franklin, who studies Internet taxation issues, backed Doyle.
Charging the sales tax on downloads "is tricky and complex, but necessary," Franklin said.
"The statistics are staggering. Billions of dollars in revenue are being lost, a number that is growing exponentially every year," he said. "It's a revenue gap that will certainly grow in such harsh economic times where only the best retailers with the lowest overhead will survive, and the rest will be left struggling.
"This is not a shift in taxation. It does not rob consumers of a benefit otherwise available to them. Rather, it merely collects a tax that in every way mirrors the tax that would be collected if one were to leave a house, go to a store and make a purchase in person."
More than entertainment
But Republican Rep. Jeff Fitzgerald of Horicon said the tax on digital products will be charged on many non-entertainment products.
"This new tax affects far more than kids downloading songs; it raises costs on all types of digital media from photos to clip art to computer games," said Fitzgerald, who voted against the tax-increase package. "Graphic artists, photographers, printers and Web designers just saw their costs go up for producing content."
Jamie Armata, 32, of Wauwatosa, who downloads music about twice a month, wondered how he'll end up paying the sales tax on his downloads. If the company selling the product adds the sales tax every time he makes a purchase, Armata said, "I have absolutely no problem with that."
But Armata and others said they don't want to self-report what they owe in sales taxes on downloads or get a notice from state tax collectors saying they owe a specific amount in unpaid sales taxes.
State Department of Revenue officials say they can't talk about specific tactics they might use to collect the sales tax on digital downloads.
But they will launch an extensive educational campaign before Oct. 1 to make sure vendors are aware of the change and their requirement to add sales taxes when a digital product is downloaded.
"As with any change to the sales tax law, the Department of Revenue will work with sellers to ensure they are correctly collecting and remitting sales tax on digital goods," said department spokeswoman Jessica Iverson.
Iverson said the sales tax compliance rate is high, and state officials expect Wisconsin residents to pay it on digital products.
"Effective October 1, consumers will pay sales tax on digital goods as part of the total purchase price, just as they would with any taxable goods and services," she said.
'Skin Taxes' Tempt, But Face 1st Amendment Issues
It's enough to make you blush: Some politicians want a bigger taste of the economy's naughty side, pushing for special taxes on dirty magazines, racy movies, sex toys and strip clubs.
In Washington state, a half-dozen cash-strapped legislators recently endorsed a huge sales tax increase on explicit movies, magazines and other sex-themed products.
New York officials recently acknowledged that Gov. David Paterson's proposed "iPod tax" on Internet downloads also would apply to online porn purchases, along with tamer diversions such as pop music and computer software.
And in Texas, state lawyers are fighting to preserve the "pole tax," a $5 cover charge on strip clubs that's being challenged by business owners.
In the past five years, lawmakers from Tennessee to Kansas to California have pitched special taxes on porn, escort services, exotic dance clubs and other adult businesses. A U.S. senator even toyed with the idea of an Internet porn tax on the federal level.
Most of the proposals—call them skin taxes—have stalled, often because of conflicts with First Amendment protections of free expression. Washington's proposed porn tax earned little support, despite the need to close an $8 billion budget deficit.
But even with serious constitutional problems, lawmakers haven't stopped trying to capitalize on the fact that sex sells, especially when facing big budget shortfalls and weary voters who aren't likely to stomach an across-the-board tax hike.
"Why do they do it? Because they can," said Phyllis Heppenstall of Peekay Inc., which operates adult stores in Washington and California. "It makes them look good to their constituents. Or at least they think it does."
It's easy to see why targeting sex businesses seems like a political slam-dunk. Singling out taboo behavior for extra taxation is part of the political drive that has led to "sin" taxes on tobacco and alcohol.
And in the U.S., where public attitudes toward sex are more buttoned-up than in Europe and elsewhere, few are likely to stand up and defend porn or nude dancing against additional taxes.
On purely economic grounds, a pornography tax is a decent idea because consumer demand would probably remain strong, University of Texas economist Daniel Hamermesh said. Some believe adult entertainment to be a multibillion-dollar industry, although its size is difficult to gauge.
"In that sense, it's not a bad way to raise money," Hamermesh said. "You're not going to discourage people. But if you want to raise money, why not?"
But sex-themed taxes have still attracted opposition. Even some social conservatives have resisted, arguing that the government legitimizes naughty behavior by profiting from it.
Adult businesses, of course, also have pushed back. Their trump card has been the First Amendment, which protects entertainment products from taxes based solely on their content.
Some restrictions on adult entertainment are generally allowed, such as zoning laws that regulate where a strip club can operate. That's because such laws are aimed at secondary factors, such as a business' effect on surrounding property values, UCLA law professor Eugene Volokh said.
But tying a tax strictly to a product's content is different, Volokh said—you can't tax Playboy, for instance, unless you also hit Newsweek and National Geographic.
Officials in Texas have run into that problem with the state's "pole tax," the special entry fee for strip clubs that serve alcohol.
The Texas Legislature approved the fee in 2007, hoping to spend the money on sexual assault and health insurance programs, but a state judge tossed out the fee as an unconstitutional infringement on free speech. State lawyers have appealed the decision.
New York state could avoid free speech problems with Paterson's proposed tax on Internet downloads because it would treat all entertainment products the same, regardless of content.
Washington state's would-be porn tax sought an additional 18.5 percent sales tax on a wide range of "adult entertainment materials and services," including "paraphernalia."
The tax could have added about $20 to the $109 sale price of a top-selling "Gigi" vibrator at Babeland, an adult store in Seattle.
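The quoted figure checks out; a one-line sanity check, using the price and proposed surtax rate reported in the article:

```python
# Check the article's example: an 18.5% surtax on a $109 item.
PROPOSED_SURTAX = 0.185
price = 109.00

surtax = price * PROPOSED_SURTAX
print(f"Added tax: ${surtax:.2f}")  # about $20 on top of the sticker price
```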
Analysts said the tax could have netted the state about $17.8 million for the upcoming two-year budget.
But after heavy criticism from editorialists and sex-shop customers alike, Democratic Rep. Mark Miloscia, who sponsored the bill, now acknowledges it will fail this year.
That's a relief for Babeland, which sells sex toys, DVDs, magazines and other erotica at stores in Seattle and New York City. A porn tax might have caused the company to reconsider doing business in the state, Chief Operating Officer Rebecca Denk said.
"The adult industry is this big mystery," Denk said. "They think it's the Larry Flynts of the world and very deep pockets and a multibillion-dollar industry, when in fact it's small businesses."
Associated Press writers Valerie Bauman in Albany, N.Y., and Kelley Shannon in Austin, Texas, contributed to this report.
App Store Grows, But Apps are Seldom Used
iPhone users have short attention spans.
At least that's the conclusion from data collected by Pinch Media, a company that helps developers track the use of their iPhone applications.
Pinch found that of the users who download free applications from the App Store, only 20 percent use the app the next day, and far fewer do as the days pass. For paid applications, the return rate is only slightly better: 30 percent of people use the application the day after they buy it. The drop-off rate for paid applications is about as steep as for free applications after the first day.
Generally, 1 percent of users who download an application turn into long-term users of it, Pinch found. Pinch has noticed some differences based on the kind of application. For example, sports applications get more use than others in the short term, while entertainment applications tend to keep users for longer than others.
Pinch has discovered, or at least confirmed, some other interesting usage trends as well. Developers have a far greater success rate once they rise to the top of the store, which Apple ranks based on popularity. Once applications hit the top 100, the number of daily new users increases by 2.3 times, Pinch said.
Also, free applications tend to get more use than paid ones. Users run free applications, on average, 6.6 times as often as paid applications, Pinch said.
The findings might surprise and disappoint developers, many of whom regard the iPhone's application ecosystem as the first real opportunity to build a business around wireless applications. Prior to the launch of the easy-to-use App Store, few phone users ever downloaded new applications to their phones. That meant that the best way for developers to offer their applications was to convince operators to preload the applications on phones -- an expensive, time-consuming and challenging proposition.
Pinch Media collected data from "a few hundred" applications in the App Store that use its hosted analytics product. Applications that use the analytics offering include those that have been the number-one paid and free applications available in the store, Pinch said. The store currently has more than 15,000 applications, and users have downloaded applications more than 500 million times.
The data from Pinch might be valuable for developers who are also considering building applications for other stores that have been planned following the success of the App Store. Stores for Android, Windows Mobile, BlackBerry and Palm Pre applications have either been announced or are already open.
Face Recognition: Clever or Just Plain Creepy?
New photo programs from Apple and Google include revolutionary face-spotting technology.
Simson Garfinkel and Beth Rosenberg
We have more than 25,000 digital photographs stored on our computer hard drives--most of them of people. Until now, our sole means of tracking down a familiar face was to search manually: by date, EXIF data, "tags," or the brute force of our own memory. Now computers can do the searching, thanks to the nifty face-recognition feature that Apple and Google have put into the latest versions of their photo-management systems.
Face recognition was one of those brilliant but technically iffy and ethically tricky counterterrorism technologies deployed as a result of the September 11 attacks. The idea was to automatically screen out terrorists as they walked through security checkpoints--only it didn't work out that way: at a test in Tampa, for example, airport employees were correctly identified just 53 percent of the time. Civil-liberties groups also raised concerns about false positives--people being mistakenly identified as terrorists, and possibly arrested, just because of their looks. And so, without a demonstrable benefit, face recognition largely dropped off the public's radar.
That's the public's radar, mind you. Many countries, including the United States, quietly revised their requirements for passport photos to make them friendlier to face-recognition software. The National Institute of Standards and Technology, which had been testing the technology since 1994, conducted large-scale face-recognition tests in 2002 and 2006. Oregon and some other states began using face recognition to detect when one person tries to obtain a license under different names. And all the time, the technology kept getting better. Much better.
In order to have a functioning face-recognition system, a computer must first be able to detect the face--that is, given a photograph, it must be able to find the faces in it. Technically, this is easier and more reliable than identifying a particular person. This technology was pretty much perfected just after September 11, 2001. The result: face-detection systems began appearing in digital cameras and camcorders a few years ago. These algorithms generally work by searching for objects that look like eyes, a nose, and perhaps something that's kind of round. They identify boxes where faces are likely to be, and then tell the autofocus system what part of the photo needs to be in focus. After all, everybody hates it when Grandma's eyes are blurry, right?
So face recognition starts with face detection. The face is then rotated so that the eyes are level and scaled to a uniform size. Next, one of three different technical approaches kicks in. Each of these approaches is, of course, covered by its own set of patents and bundled into various vendor offerings. One approach transforms the face into a mathematical template that can be stored and searched; a second uses the entire face as a template and performs image matching. And a third approach attempts to create a 3-D model based on the face, and then performs some kind of geometric matching. Based on our experience with the software, we believe that Apple's system is using a landmarks approach, while the Google system is doing some kind of image matching. But we could be wrong. Neither company has publicized which algorithms it is using.
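None of these vendors publishes its algorithms, but the image-matching approach the authors suspect Google is using can be illustrated with a toy sketch: normalize two aligned face crops so brightness and contrast drop out, then score their correlation. This is an illustration of the general technique, not Apple's or Google's actual code; the array sizes and data are invented for the example.

```python
import numpy as np

def match_score(face_a: np.ndarray, face_b: np.ndarray) -> float:
    """Normalized cross-correlation between two aligned, equal-size
    grayscale face crops; 1.0 means a perfect match."""
    a = (face_a - face_a.mean()) / face_a.std()
    b = (face_b - face_b.mean()) / face_b.std()
    return float((a * b).mean())

# Toy data: a "face" and a brightness/contrast-shifted copy of it.
rng = np.random.default_rng(0)
face = rng.random((32, 32))
same_person = face * 1.4 + 0.2   # same pattern under different lighting
stranger = rng.random((32, 32))  # an unrelated pattern

print(match_score(face, same_person))  # ~1.0: the lighting shift normalizes away
print(match_score(face, stranger))     # near 0: uncorrelated
```

The normalization step is why, as the authors note later, cranking up contrast helps: the matcher cares about the pattern of features, not absolute brightness.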
We tested the face recognition in Apple's iPhoto '09 by applying it to two different databases of 17,000 and 10,000 photos, stored on our own hard drives. Google's Picasa only works with previously uploaded Web albums; we tested it on roughly 500 photos there. The verdict: both of these systems mostly work, are extremely cool, and are also kind of creepy.
iPhoto '09 is certainly the friendlier of the two. The first time you run iPhoto, it searches out all the faces in your photo library; this took about four hours on a dual-core iMac. Next, you click on a photo of someone you know; click "Name" and fill in the text box beneath your subject's face. iPhoto will run through your photo library looking for other photos of the same person. (The recognition seems to be based on features inside a recognition box that is bounded by the left and right temples, eyebrows, and chin.)
Overall, iPhoto does a surprisingly good job finding a bunch of photos of the person you've selected and "named." But in the process, it finds photos of other people as well. So your next task is to tell iPhoto which photos it got right and which are wrong. iPhoto uses this information to update its mathematical models. It then looks back through your photo library for other photos of the same person. If it can't find any, you can manually point one out to give iPhoto another starting point; it will then seek out more. You can also click on a photo and ask iPhoto to try to figure out who is in the picture; if you confirm iPhoto's guess, the model gets better still.
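That confirm-and-refine loop can be sketched schematically, assuming each detected face has already been reduced to a feature vector; the vector size, similarity threshold, and `find_matches` helper are all invented for illustration and are not iPhoto's actual mechanism:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def find_matches(library, confirmed, threshold=0.9):
    """Indices of unlabeled faces whose similarity to any confirmed
    example of this person exceeds the threshold."""
    hits = []
    for i, vec in enumerate(library):
        if i not in confirmed:
            best = max(cosine(vec, library[j]) for j in confirmed)
            if best > threshold:
                hits.append(i)
    return hits

# Toy library: faces 0-2 are noisy views of one person, 3-4 are strangers.
rng = np.random.default_rng(1)
base = rng.normal(size=64)
library = [base + rng.normal(0, 0.05, 64) for _ in range(3)]
library += [rng.normal(size=64) for _ in range(2)]

confirmed = {0}                                     # the user names one photo
confirmed |= set(find_matches(library, confirmed))  # the model finds the rest
print(sorted(confirmed))                            # the three look-alike views
```

Every confirmation grows the set of known examples, so later passes have more reference points to match against, which is why the suggestions improve as you keep answering.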
We were astonished at how good iPhoto was at finding photos of our kids. Amazingly, iPhoto could even distinguish between our identical twins. (The trick is that one of them has a face that's a bit thinner and taller than the other's.) We were disappointed, however, that it found many more photos of one twin than the other, although we photograph both in equal numbers--and often in the same shot. A study of their photographs revealed something that we hadn't noticed, but iPhoto had: one twin always looks directly at the camera, but the other tends to tilt his head away, and iPhoto's face recognition doesn't work if the program only sees one eye. We also have lots of photos of kids in face paint. iPhoto found practically none of those, except for when the paint was confined to the middle of the child's forehead--which is outside its recognition box.
It's tempting to read a lot into iPhoto's recognition system. Searching for photos of Beth produced lots of photos of Simson's ex-girlfriends. It's tempting to say that iPhoto knows what Simson likes, but this could also be a bias in our test corpus: pick random photos out of Simson's library, and you're sure to find a bunch of his ex-girlfriends.
iPhoto was also surprisingly good at finding photos of our cats, especially the ones with white or orange fur. Unfortunately, it failed to find the tabbies--presumably, facial features are harder to distinguish when the eyes are the same color as the cheeks. And iPhoto does a startling job at finding and recognizing faces in shadows and other low-contrast situations. That's because iPhoto cranks up the contrast between face and background, presumably to make it easier to pick out the features.
Since we installed iPhoto '09, our family has spent hours sitting around the computer, searching for photos of the kids, and teaching the computer what each of us looks like. We found a lot of old photos we had forgotten. We laugh at the mismatches. We try to understand the algorithms. This is one of the most entertaining programs that Apple has ever created.
Google's Picasa technology is far creepier. Instead of starting with a photo of someone you know and searching for all the similar matches, Google takes every photo that you've uploaded to Picasa, searches them all for faces, then "clusters" these faces into groups of, supposedly, the same people. You then go through each group and tell Google who a person is--including his or her full name, nickname, and e-mail address.
In fact, Google's clustering isn't all that great. It frequently puts different people in the same cluster, and it will make lots of different clusters for the same person. And unlike iPhoto, which could easily match photos of our 12-year-old daughter with her photos as a toddler, Google thought that the children were different people. But Google's user interface is pretty easy to employ, the matching task is strangely compelling, and before you know it, you'll have every one of your photos tagged with all the real names and e-mail addresses of each person that the photo features.
But what's really unsettling about Google's service is that it doesn't just stop at your friends. Before you know it, Google is asking you to identify all those other faces in your photographs--the people standing in the background, the faces in the crowds, even the faces on posters. This is certainly in keeping with Google's corporate mission "to organize the world's information and make it universally accessible and useful." But is that what we really want from a photo-sharing website?
Our iPhoto experiences have been a delight: we've been excited and pleased to find so many pictures of our kids, family, and friends--and even ourselves. On the other hand, when we used the advanced tagging feature in Google's Picasa, we felt as though we were intelligence analysts working in the windowless lab of some totalitarian government.
We believe that consumer-driven face-recognition technology will fundamentally change public-policy debates about biometrics and mass video surveillance. After September 11, nobody really understood how this technology worked, what it got right, and what it got wrong. But before the end of this year, millions of Americans will have first-hand experience with some of the very best face-recognition systems ever deployed. Once the family-photo novelty wears off, we'll be watching to see if iPhoto and Picasa users ask their government to regulate this technology--or accelerate its deployment.
Info Chief Slaps Met on CCTV in Pubs
Coppers try to hardwire surveillance in Islington
The Met Police got a short sharp rap over the knuckles yesterday, as the Office of the Information Commissioner questioned what looks very much like a blanket policy to force CCTV onto public houses in certain parts of London.
The story begins with a letter to the Guardian last week, from Nick Gibson. He is currently renovating Islington pub The Drapers Arms, after its previous owners allowed it to go insolvent and then disappeared.
In his letter, he argues that if he had merely taken over an existing licence, the police could not have imposed any additional conditions. However, because this was now a new licence, the police were able to make specific requests, including one particular request in respect of installing CCTV.
Mr Gibson wrote: "I was stunned to find the police were prepared to approve, ie not fight, our licence on condition that we installed CCTV capturing the head and shoulders of everyone coming into the pub, to be made available to them upon request. There was no way that they could have imposed this on the previous licence holder."
We spoke to the Police and to Islington Council. The Council were clear that this was not their policy: they would look at individual licence applications in the light of representations made to the Licensing Committee and decide on a case by case basis.
It was left to the Met to confirm the existence of a blanket policy for some parts of London. A spokeswoman for the Met said: "The MPS overall does not have a policy of insisting CCTV is installed within licensed premises before supporting licence applications.
"However, individual boroughs may impose blanket rules in support of their objectives to prevent crime and disorder and to assist the investigation of offences when they do occur.
"Islington is one of the most densely populated districts for licensed premises in London and the borough's licensing authority is committed to providing a safe environment in which to socialise.
"To this end, Islington police recommend all premises are required to install CCTV and make those images available to police upon request before a licence is granted."
This is in stark contrast to existing guidelines put together by the Office of the Information Commissioner, which requires any body seeking to install CCTV to do so on a case by case basis and only after carrying out a full impact assessment. Clearly, a blanket policy covering a whole borough would fail to meet these guidelines.
When we put this to the Met, they clarified further, explaining that they did not "impose" CCTV, but merely put it forward as a "recommendation" to the relevant Licensing body. We also asked why they had mentioned a requirement for all licence holders to make images available "on request" – which would be a serious extension of police powers. The Met responded that there was no intention to trawl footage for purposes of crime prevention – and this was merely a re-statement of existing law.
However, a spokeswoman for the Office of the Information Commissioner said: "Hardwiring surveillance into the UK’s pubs raises serious privacy concerns. We recognise that CCTV plays an important role in the prevention and detection of crime, and can help to reduce crime in areas of high population density, such as city boroughs.
"However, we are concerned at the prospect of landlords being forced into installing CCTV in pubs as a matter of routine in order to meet the terms of a licence. The use of CCTV must be reasonable and proportionate if we are to maintain public trust and confidence in its deployment.
"Installing surveillance in pubs to combat specific problems of rowdiness and bad behaviour may be lawful, but hardwiring in blanket measures where there is no history of criminal activity is likely to breach data protection requirements. We will be contacting the police and others involved to establish the facts and discuss the situation in Islington.”
This sentiment was echoed by Chris Huhne, the Lib Dem Home Affairs spokesman, who added: "The impression is that CCTV is a panacea for preventing crime but the evidence for this is far from conclusive.
"We are already the most watched society in the world, yet more and more CCTVs are being installed every day. What we really need is proper regulation and an end to the over-reliance on this over-used and intrusive technology."
Privacy in the Age of Persistence
Note: This isn't the first time I have written about this topic, and it surely won't be the last. I think I did a particularly good job summarizing the issues this time, which is why I am reprinting it.
Welcome to the future, where everything about you is saved. A future where your actions are recorded, your movements are tracked, and your conversations are no longer ephemeral. A future brought to you not by some 1984-like dystopia, but by the natural tendencies of computers to produce data.
Data is the pollution of the information age. It's a natural byproduct of every computer-mediated interaction. It stays around forever, unless it's disposed of. It is valuable when reused, but it must be handled carefully. Otherwise, its aftereffects are toxic.
And just as 100 years ago people ignored pollution in our rush to build the Industrial Age, today we're ignoring data in our rush to build the Information Age.
Increasingly, you leave a trail of digital footprints throughout your day. Once you walked into a bookstore and bought a book with cash. Now you visit Amazon, and all of your browsing and purchases are recorded. You used to buy a train ticket with coins; now your electronic fare card is tied to your bank account. Your store affinity cards give you discounts; merchants use the data on them to reveal detailed purchasing patterns.
Data about you is collected when you make a phone call, send an e-mail message, use a credit card, or visit a website. A national ID card will only exacerbate this.
More computerized systems are watching you. Cameras are ubiquitous in some cities, and eventually face recognition technology will be able to identify individuals. Automatic license plate scanners track vehicles in parking lots and cities. Color printers, digital cameras, and some photocopy machines have embedded identification codes. Aerial surveillance is used by cities to find building permit violators and by marketers to learn about home and garden size.
As RFID chips become more common, they'll be tracked, too. Already you can be followed by your cell phone, even if you never make a call. This is wholesale surveillance; not "follow that car," but "follow every car."
Computers are mediating conversation as well. Face-to-face conversations are ephemeral. Years ago, telephone companies might have known who you called and how long you talked, but not what you said. Today you chat in e-mail, by text message, and on social networking sites. You blog and you Twitter. These conversations – with family, friends, and colleagues – can be recorded and stored.
It used to be too expensive to save this data, but computer memory is now cheaper. Computer processing power is cheaper, too; more data is cross-indexed and correlated, and then used for secondary purposes. What was once ephemeral is now permanent.
Who collects and uses this data depends on local laws. In the US, corporations collect, then buy and sell, much of this information for marketing purposes. In Europe, governments collect more of it than corporations. On both continents, law enforcement wants access to as much of it as possible for both investigation and data mining.
Regardless of country, more organizations are collecting, storing, and sharing more of it.
More is coming. Keyboard logging programs and devices can already record everything you type; recording everything you say on your cell phone is only a few years away.
A "life recorder" you can clip to your lapel that'll record everything you see and hear isn't far behind. It'll be sold as a security device, so that no one can attack you without being recorded. When that happens, will not wearing a life recorder be used as evidence that someone is up to no good, just as prosecutors today use the fact that someone left his cell phone at home as evidence that he didn't want to be tracked?
You're living in a unique time in history: the technology is here, but it's not yet seamless. Identification checks are common, but you still have to show your ID. Soon it'll happen automatically, either by remotely querying a chip in your wallet or by recognizing your face on camera.
And all those cameras, now visible, will shrink to the point where you won't even see them. Ephemeral conversation will all but disappear, and you'll think it normal. Already your children live much more of their lives in public than you do. Your future has no privacy, not because of some police-state governmental tendencies or corporate malfeasance, but because computers naturally produce data.
Cardinal Richelieu famously said: "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged." When all your words and actions can be saved for later examination, different rules have to apply.
Society works precisely because conversation is ephemeral; because people forget, and because people don't have to justify every word they utter.
Conversation is not the same thing as correspondence. Words uttered in haste over morning coffee, whether spoken in a coffee shop or thumbed on a BlackBerry, are not official correspondence. A data pattern indicating "terrorist tendencies" is no substitute for a real investigation. Being constantly scrutinized undermines our social norms; furthermore, it's creepy. Privacy isn't just about having something to hide; it's a basic right that has enormous value to democracy, liberty, and our humanity.
We're not going to stop the march of technology, just as we cannot un-invent the automobile or the coal furnace. We spent the industrial age relying on fossil fuels that polluted our air and transformed our climate. Now we are working to address the consequences. (While still using said fossil fuels, of course.) This time around, maybe we can be a little more proactive.
Just as we look back at the beginning of the previous century and shake our heads at how people could ignore the pollution they caused, future generations will look back at us – living in the early decades of the information age – and judge our solutions to the proliferation of data.
We must, all of us together, start discussing this major societal change and what it means. And we must work out a way to create a future that our grandchildren will be proud of.
Longer Is Not Always Better
This post is a follow-up to our blog last week about a small Czech provider briefly causing global Internet mayhem via a single errant routing announcement. In this incident, SuproNet (AS 47868) announced its one prefix, 18.104.22.168/21, to its backup provider, Sloane Park Property Trust (AS 29113), with an extremely long AS path. We've gotten more feedback about this entry than any other in recent memory, so we thought we'd try to answer some of the questions that were posed both here and elsewhere, as well as provide some clarification about exactly what went on. The questions we try to address include:
• How could anyone be this dumb?
• Why did this cascade throughout the planet?
• Can you provide more details about the impact and its spread?
• How do we prevent this from happening again?
How could anyone be this dumb?
I'll admit that this was my first thought. And since this incident interrupted my lunch, I was only too happy to join the mob. In hindsight, my reaction was due to the fact that my router experience is largely limited to Cisco gear and their router software, known as IOS. For example, suppose SuproNet was using a Cisco and wanted to prepend their ASN (47868) an additional four times to announcements to a particular provider. They could use something like the following, where the string of x's refers to the IP address of the provider's router. Notice that I had to explicitly list 47868 four times.
neighbor xx.xx.xx.xx route-map longerisbetter out
route-map longerisbetter permit 10
set as-path prepend 47868 47868 47868 47868
This is a common way of prepending in Cisco IOS, so I naturally thought, who would be so dumb as to type (or cut-and-paste or whatever) their own ASN hundreds of times into a configuration for a router? Who? The only problem with this line of reasoning is that SuproNet wasn't using a Cisco. They were apparently using a router from MikroTik, a router vendor from Latvia, as first reported in this Czech blog. MikroTik obviously targets the Czech market, since they have a local-language web page and domain.
So how do you prepend on a MikroTik? According to their on-line manual, you set the following variable in an appropriate configuration mode.
bgp-prepend (integer: 0..16) - number which indicates how many times to prepend AS_NAME to AS_PATH
So if SuproNet was thinking "Cisco IOS", they might have typed "bgp-prepend 47868" to prepend 47868 once. However, this would be a mistake, as this router is expecting a count, not an ASN. At this point, it would be reasonable to expect the MikroTik to report something like "value out of range". Let's assume they didn't do any range checking on the input value, and let's assume they devoted one byte (8 bits) to store it. One byte can represent the integers from 0 to 255. So what happens when you try to stuff something larger, like 47868, into one byte? You get 47868 modulo 256 (i.e., the remainder after dividing 47868 by 256), which equals 252. As Mikael Abrahamsson first noticed, this was the exact number of prepends of 47868 he was seeing. So I went back and looked at the copious number of announcements we saw of SuproNet's prefix, and guess what? Every single one had 252 prepends of 47868, leading me to conclude that this was the exact number sent out by SuproNet. Originally I had thought the number of prepends varied based on how these long paths were being truncated, and that this random truncation was causing part of the problem.
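The truncation described above is easy to sketch. A minimal illustration in Python (not router code; the unchecked one-byte field is the same assumption the post makes about the MikroTik software):

```python
def truncate_to_byte(value: int) -> int:
    """Model an unchecked 8-bit config field: only the low byte survives."""
    return value % 256  # equivalently: value & 0xFF

# SuproNet's ASN, mistakenly entered where a prepend *count* was expected
print(truncate_to_byte(47868))  # 252, matching the observed prepend count
```

Any ASN fed into such a field would be silently reduced the same way; 47868 just happened to land on a value close to the fatal 255 boundary.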
And using this clue, Ivan Pepelnjak was able to spell out exactly what happened in his blog. As it turns out, all those routing resets and the general instability were due to a previously unknown Cisco bug involving AS paths close to 255 in length. If you receive one of these long paths, prepend to it, and thereby create a path longer than 255, you are toast. So the maps we gave in our last blog were more an indication of Cisco market share (at least among prependers) than of the prevalence of outdated routers. Kudos to Ivan for figuring this out.
In summary, we have a situation where a single careless operator in the Czech Republic tickled one bug (i.e., lack of bounds checking in the MikroTik router) that in turn tickled another bug (i.e., a problem with long AS paths on a Cisco). And the result was global Internet instability due to prevalence of Cisco gear in the market. But in fairness to MikroTik we note that Mathias Sundman observes that bounds checking does now exist in version 3.20 of their router software.
Why did this cascade throughout the planet?
Short answer: There is a bug in Cisco IOS with regard to long AS paths and lots of folks use Cisco gear. Longer answer: Most ISPs apparently do not filter out announcements with long AS paths. As we noted in our previous blog on this topic, we are all fairly close to one another on the Internet and there is really no reason to be seeing excessively long AS paths. Such paths only indicate a problem or a clueless operator or both, and can be safely discarded. The fact that they were not dropped allowed them to tickle this bug on many Cisco boxes along the way.
Since we are all just a few AS hops away from each other, the problem occurred only because the paths originated by SuproNet were so close to 255 in length. This allowed them to reach the core of the Internet and continue on to other edge networks before exceeding the 255 path-length boundary. It was only when they did so that all hell broke loose, far away from the original source of the problem. As Andree Toonk shows on this page, there are apparently others who have made the same mistake on MikroTik routers. AS 20912, Panservice of Italy, is doing it as of this writing, but 20912 modulo 256 is only 176, and these announcements are apparently not causing a problem.
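The filtering recommended above — dropping announcements with absurdly long AS paths before they can tickle anything downstream — can be sketched in a few lines. This is a hypothetical Python illustration, not real router policy, and the threshold of 50 is an assumed (generous) cutoff, given that real paths rarely exceed about ten ASes:

```python
MAX_REASONABLE_AS_PATH_LEN = 50  # assumed cutoff; real paths rarely exceed ~10 ASes

def accept_announcement(as_path: list) -> bool:
    """Drop BGP announcements whose AS path is implausibly long."""
    return len(as_path) <= MAX_REASONABLE_AS_PATH_LEN

normal_path = [29113, 47868]            # SuproNet via its backup provider
runaway_path = [29113] + [47868] * 253  # with the incident's 252 extra prepends
print(accept_announcement(normal_path))   # True
print(accept_announcement(runaway_path))  # False
```

Had a filter like this been common at the edges, the runaway announcement would have died at the first hop instead of circling the globe.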
Can you provide more details about the impact and its spread?
This was an easy one, as Renesys monitors every prefix (network) seen on the Internet and computes its stability over time. We also geo-locate prefixes as accurately as possible. Thus we can see events like this propagate across the planet, and Google Earth provides an excellent way of performing the visualization. We used it to show every newly unstable prefix during the hour before and the hour of the incident. Here are a few composite images taken from Google Earth of a few regions, with an indication of all the unstable prefixes seen during the 2-hour period. We start with the US, where the impact was the greatest.
Next up is the heart of South America, where Cisco obviously needs to send some sales folks. (Before someone points out the population density of South America relative to the US, we noted in our last blog that South America was the least impacted continent on a percentage basis.)
Finally, we take a look at Europe, where all the trouble started.
How do we prevent this from happening again?
This one is really about assigning blame, and there is plenty to go around. But before we get too caught up in that, keep in mind that this was really the perfect storm. As of today, Renesys has observed 31,188 unique non-private ASNs on the Internet over the last few weeks. If you compute modulo 256 of each of them, you get 731 with values ≥ 250, or 2.3% of the total. There is nothing special about 250; the likelihood of a problem simply decreases as the values get lower, and 250 seems like a reasonable cutoff given typical path lengths on the Internet. Even at a looser cutoff, there are still only 1,919 ASNs whose modulo-256 value is ≥ 240, or 6.2% of the total. Thus, beyond the bugs in the router software of two vendors, this event required one of the few percent of ASes capable of initiating the meltdown to also have a careless operator and an obscure Latvian router running outdated software. How likely was that?
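The arithmetic in the paragraph above can be reproduced in a few lines (Python, taking the Renesys counts at face value):

```python
total_asns = 31188         # unique non-private ASNs observed by Renesys
asns_mod256_ge_250 = 731   # ASNs whose value modulo 256 is >= 250
asns_mod256_ge_240 = 1919  # ASNs whose value modulo 256 is >= 240

pct_250 = round(100 * asns_mod256_ge_250 / total_asns, 1)
pct_240 = round(100 * asns_mod256_ge_240 / total_asns, 1)
print(pct_250, pct_240)  # 2.3 6.2
```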
As for the blame, network operators (SuproNet) should obviously read their router documentation and test any proposed changes in a lab environment to see if they get the results they expect. Router vendors should check bounds on input parameters (MikroTik) and on boundary conditions (Cisco). ISPs should filter out obvious useless garbage, like ridiculously long AS paths and unrouteable (private) IP addresses. They obviously don't, given the scope of the event. And who designed this BGP routing protocol anyway? What were they thinking?
Seriously, the reason for the success of the Internet is that it is not under the control of any one government or company. Because of this, it is both cheap and ubiquitous. But because there is no centralized control or authority, we are largely at the mercy of the weakest link. Sure, there is plenty we can do to prevent things like this from happening again, but there will always be the next perfect storm. Who could have guessed something like this could have happened? You won't be able to guess the next one either. The happy ending to this story is that the community quickly rallied and worked together to both identify and mitigate the problem. No meetings were held, no bailouts were requested, and not a single lawyer was needed to draft an agreement. The Internet was back to normal in short order.
Media-Morphosis: How the Internet Will Devour, Transform, or Destroy Your Favorite Medium
Let me start by saying that I like newspapers. And let me say further that, no matter how much I like them, they just might not have a future.
The Internet chews up media and spits them out again. Sometimes they get more robust. Sometimes they get more profitable. Sometimes they die.
It's a scary thought, especially if you're personally attached to an old medium like movies, books, records, or newspapers.
But just because an industry is socially worthy, it doesn't follow that it is commercially viable. Today, besides newspapers, three other media are thrashing over their futures in a networked world, and as with newspapers, the rhetoric is mostly of the nonproductive "But I like it!" and "It's good for society!" variety, with not enough thought given to whether these media are commercially viable in the Internet age.
In this report, we will take a closer look at the "media-morphosis" taking place across traditional media -- and what that tells us about the future.
The imminent collapse of the American newspaper industry has spawned entire gazetteers' worth of high-minded handwringing about the social value of newspapers and the social harm that their disappearance will unleash. It's probably all true. I love the smudgy old devils, from the headlines to the funny pages.
Newspapers are fundamentally an advertising-supported medium. Advertisers place ads in newspapers because they believe these ads will sell more products for them. The price of an ad is set by four factors:
• How many people will see the ad? The more, the merrier.
• Who is likely to see the ad? Are they the sort of people who are likely to want to buy what the ad is selling? Or is it so cheap to reach people with the ad (via skywriting, say), that it doesn't matter if a lot of uninterested people will see it? (After all, "most" people in a given group might not care for your stuff, but there's always an off-chance that there's one or two customers mixed in with the no-sales.)
• What are the special characteristics of the medium? Can you bind a perfume strip into it? Click on it to go straight to a purchase-page? Turn left at the sign and buy a submarine sandwich? (A lot depends on the beliefs the advertiser has about the factors that contribute to purchase decisions in the medium. If you believe that perfume strips sell the hell out of perfume, you'll buy ads with perfume strips.)
• What is the competition for reaching the same group of people with the same kind of ad? How many other venues afford you, the advertiser, the same opportunity as this one?
What happened to newspapers is easy to understand: There are more and better ways for an advertiser to deliver ads of similar quality to the "spendiest" newspaper readers, most of them on the Internet.
Yes, there are large groups of people who read newspapers but fall below average on Internet use. Seniors, for example -- a group of people who are apt to have already established their brand loyalties, to be focused on low prices, and to have the motivation and discretionary time to thoroughly research their purchases. Thus, many advertisers won't pay as much to reach those readers.
Big-budget movies (BBMs) require a lot of capital and rely on studios controlling the rate and nature of distribution of the finished product. If you're going to recoup your $300 million box-office turd, you need to move a hell of a lot of DVDs, TV licenses, foreign exhibition, Happy Meal toys, and assorted "secondary" revenues.
Let's be realistic here: Nothing anyone does is going to make it harder to get movies when you want them, where you want them, and at whatever price you feel you should pay for them (including free). And the harder you crack down on Internet movie-downloading, the more attractive you make buying pirate DVDs from criminals on the street -- a virtually zero-risk transaction that directly displaces DVD purchases.
What's more, no one has yet successfully crowdsourced a movie that looks and feels like a BBM. There are lots of fabulous 9-minute YouTube Inc. videos, and plenty of lovely and promising machinima flicks, but no one's yet built the kind of purely escapist, high-production-value feature that we flock to the cinema to see every summer.
Now, maybe film studios can do what Magnolia Pictures is doing -- distributing day-and-date releases to satellite, pay-per-view, cinema, DVD, and foreign film outlets -- and recapture a lot of the money that is squirting between the fingers of the tightly clenched release-window fist. But if it's not enough, commercially motivated BBMs might simply die.
Note that movies as a genre won't vanish. There's plenty to love about 9-minute YouTubes and the quirky features that come out of indie production houses. There's never been a time when more moving pictures were being produced and viewed than today. Many of these things are economic propositions, and many are not -- they're a lot more like stage shows than they are like films. They cost less to produce, they reach smaller, more targeted audiences, and they represent an admirable diversity of voice and point of view. But they're not Big, Culturally Relevant Media in the way that a real classic BBM can be.
The specific, rarefied animal that is the gigantic film spectacle demands a technological reality that has ceased to exist -- just enough technology to distribute the films everywhere, but not so much technology that the audience gets to overrule your distribution decisions.
So, we may be at the end of the period in cinematic history where we can convince investors to pony up $300 million to make a sequel to a sequel to a remake of a movie adapted from a 50-year-old comic book. Which isn't to say that no one will make these things henceforth -- give it a decade or two and there may well be rich weirdos who fund these productions the same way there are lovely old codgers who can be coaxed into putting up the dough to mount 15-hour, all-singing, all-dancing Wagner operas. Not a mass medium, nowhere near as culturally relevant as BBMs are today, but still a going concern as a vanity/prestige form.
And the rest of it? The secret is "cheap": making stuff for the Net just doesn't cost as much as the audiovisual material we're used to seeing. It may not be as pretty, but at the rock-bottom prices that some of this stuff gets made for, it's viable to make a slightly crummy-looking YouTube video that's the exact, perfect video for you and 38 other people who are kinked just like you.
Some of this stuff will be sustainable through donations, some through advertising or sponsorship, and some will be conducted on a non-economic basis. If your material is super-targeted to just the right audience, there's probably an advertiser out there looking to reach them with messages that really benefit from audiovisual treatment, who'll pay you a (relative) fortune for the chance to place an ad with you.
Music is the easy one. The problem was that the record industry was built on per-unit income from CDs (and records and tapes and so on). The economics of this stink -- if you believe the record industry, they produce an ungodly number of expensive flops for every success.
Artists have gotten a notoriously raw deal from the record companies -- the average artist with a record deal earns $600 a year or less from it. Artists have "breakage" deducted from their royalty payments -- even payments on sales of digital downloads.
Whatever profitability there is in the system is seriously jeopardized by the music-listening public's ability to get any song they want, at any time, for any price (including free). And, just as with movies, it's never going to get harder to copy music without permission.
Now the good news: The more your music gets copied, the more people there are who will pay to see you perform it live. This may not support a record label with offices on five continents, but it can probably put a comparable (if not larger) amount of money into the pockets of a comparable (if not larger) quantity of artists.
There are artists who can't perform for beans. Those artists' futures are in trouble. Either they're going to have to produce studio music on a non-economic basis (most studio music is produced on this basis, thanks to super-cheap home studios that let anyone and everyone participate) or find sponsors, grants, or advertisers who'll keep them afloat.
But as a category, the future's looking good for recorded music and the musicians who make it.
Books are more of a mixed bag. On the one hand, Internet copying of printed matter is impossible to prevent -- no matter how much energy you attack this problem with, there's no stopping a reader who's willing to retype a book (scanning, of course, is even more efficient, and getting cheaper and easier by the day).
On the other hand, for many kinds of books -- long-form narratives, for instance -- reading off a screen is a poor substitute for a cheap and easy-to-buy codex. Not because screen quality is insufficient (if it were, we wouldn't all spend every hour that God sends sitting in front of our computers), but because computers are damned distracting.
And don't talk to me about ebook readers: Single-purpose devices that cost $400 a pop aren't going to be choice items for people who resent spending money on books. And they're not going to drop to $40 unless they sell in quantity, and that means adding more features to catch a bigger audience -- at which point your ebook reader is as distracting as a PC.
No, the bad news for books is twofold: First, the quantity and variety of titles carried outside of bookstores has radically declined, thanks to the rise of national big-box chain stores, who do all ordering from a centralized database. That means that it's much harder than it's ever been to stumble across a book at the grocery store that turns you into a lifelong reader. There's some damned fine bookstores out there for people who know that they want a book, but it's a lot harder to acquire that knowledge than it has been for a century or so.
The other problem is that we're increasingly conditioned to read short blocks of text -- more text than ever, but in radically different form than you generally find between covers. Combine this with the sheer amount of read-for-pleasure text available at one-click's distance on the Net, and even those of us who worship books find ourselves reading fewer of them.
Now for the good news: It doesn't cost much to write a novel (I should know, I write 'em). And it doesn't cost much to produce one -- getting cheaper every day, thanks to low-cost, computerized setup and printing.
Electronic books are poor substitutes for print books, which makes them great enticements for print books (enjoy the ebook? Buy the book!). And the Net makes it cheaper than ever to get a few novels into the hands of a few people who love them. Being cheap, novels lend themselves to all sorts of Internet-era business models -- advertising, sponsorship, direct sales, and so on.
If big-budget movies might turn into opera, then long-form narrative books might turn into poetry. There's a hell of a lot of published poetry -- more than ever -- mostly consumed by other poets and a small band of extremely dedicated followers of the form. A few poets make a big living at it, a few more make a marginal living at it, but for most poets, income is aspirational, not reality-based (this is pretty close to the situation in short fiction already, and not far off from the world of novel writing in many genres).
But a future in which novels turn into hand-crafted fetish items for a small group of literati is one in which the relevance of the novel dwindles away to a dribbly nothing.
I think that this one is a toss-up: If I wanted to rescue novels as a culturally relevant mainstream industry (and I do), I'd put the majority of my effort into figuring out ways to get a wide variety of books in front of people who don't go to bookstores.
That's my free idea for the month: If you want to save publishing, start a small, hand-crafted "distributor," complete with a sales force that lays down shoe-leather all day long, knocking on doors at non-bookstores, seeing if they'll sell a few titles to be re-stocked frequently.
There's plenty of ways you can imagine the Internet would help here: Hell, you could just feed the books that sold at the local fried chicken outlet on Friday into Amazon's "If you liked this book, you might like that book" engine and stock those titles the next day.
It may not work, but no one ever saved a medium by demanding that it be profitable because it was a social good. Sarkozy can give away free newspaper subs to 18-year-olds until les vaches come home, but it won't change the technological shifts that are bleeding out the old broadsheets.
Murdoch’s Soft Spot for Print Slows News Corp.
Tim Arango and Richard Pérez-Peña
Rupert Murdoch had an office built for him at The Wall Street Journal within days of buying it 14 months ago, and he has made ample use of it — ordering up a wave of changes in the once-staid paper’s content and culture, from the addition of a weekly sports page to general news displacing financial news on the front page to the thinning of its layers of editing.
But Mr. Murdoch, as much old-fashioned press baron as 21st century multimedia mogul, faces a depressing reality: his lifelong fondness for newspapers has become a significant drag on the fortunes of his company, the News Corporation.
The company recently took $8.4 billion in write-downs, including $3 billion on its newspaper unit, which includes The Journal’s publisher, Dow Jones & Company. Meanwhile, the News Corporation’s stock price has fallen by two-thirds in the last year, a sharper decline than at media conglomerate peers like Time Warner and Viacom.
In more vibrant economic times, investors and Wall Street analysts were more willing to look past Mr. Murdoch’s attachment to newspapers — the newspaper segment is now the company’s biggest single source of revenue, about 19 percent in the most recent quarter. But they find that a tougher chore these days, as other media struggle and newspapers suffer through their worst slump since the Depression.
“The thing I hear from investors is that they wish News Corp. was everything but newspapers,” said David C. Joyce, media analyst at Miller Tabak & Company.
“Investors are more forgiving when they are in a better mood,” he said. “The hope for a turnaround in the newspaper business is looking elusive.”
The declining economy and the sinking fortunes of print publications have placed in stark relief Mr. Murdoch’s love of newspapers and his deal to acquire Dow Jones just before the recession set in. Mr. Murdoch, chairman and chief executive of the News Corporation, paid more than $5 billion for an asset that generated about $100 million in operating income last year, a price that now looks like a staggering overpayment. Mr. Murdoch declined to comment for this article.
On the surface, the News Corporation’s Feb. 5 earnings report, for the quarter that ended Dec. 31, appeared to show a nearly $90 million increase in newspaper division revenue from a year earlier. But that was an illusion created by the addition of Dow Jones, which the News Corporation owned for only 18 days of the year-ago period.
The company revealed in a later filing with the Securities and Exchange Commission that Dow Jones had $535 million in revenue in the last quarter, more than one-third of the total for its newspaper segment. Subtracting the effect of Dow Jones, revenue at that segment fell about 25 percent — partly because of weaker currencies in Britain and Australia, where the News Corporation has many papers — compared with an 11 percent drop for the rest of the company.
The company does not disclose details on The Journal’s performance, but executives there say that like the rest of the industry, they have seen a significant decline in advertising revenue.
In another area, however, The Journal has outperformed almost all its competitors by maintaining its paid circulation of more than two million, in print and online, in the most recent reporting periods, while nearly every other major paper showed declines. Some of those subscribers receive only the online version, making The Journal one of the few papers to successfully make its online readers pay for content.
Some of that success, however, could be the result of heavy discounting, a practice that has increased since the News Corporation’s takeover. According to the most recent figures the paper filed with the Audit Bureau of Circulations, for the six months that ended Sept. 30, on an average day The Journal sold 501,000 copies at less than half the basic price, up from 420,000 in the same period in 2007, and 214,000 in 2006.
Mr. Murdoch has a well-earned reputation for making the deals that appeal to him personally, like the Dow Jones purchase, whether or not experts agree. The instinctual, from-the-gut aspect of Mr. Murdoch’s business persona was once appreciated — he was lauded when he swooped in to buy MySpace in 2005 for $580 million, outbidding Viacom — but now seems to be a mark against him in Wall Street’s eyes.
“Emotional biases and attachments play into our strategic decisions in really significant ways,” said Sydney Finkelstein, a professor at the Tuck School of Business at Dartmouth. “And with Rupert Murdoch, there’s a general attachment to the newspaper business because that’s where he got his start, and he really has a feel for it, and also an attachment to the idea of owning The Wall Street Journal.”
At The Journal, the imprint of Mr. Murdoch and Robert Thomson, the managing editor he installed, is visible on the front page, where there are bigger headlines, more political coverage, and fewer of the long, sometimes whimsical yarns that were one of the paper’s signatures. They have also made articles shorter and pushed some business coverage deeper into the paper.
But some journalists also described a certain relief that the new regime has meant an end to the factionalism and politicking of Dow Jones’s last independent years. Reporters and editors also say that Mr. Murdoch and his crew have loosened what was once an exceedingly careful culture, where multiple, lengthy memos were required to begin a reporting project, and an article went through several rounds of editing.
“There’s this kind of attitude that planning is overrated, and memos are for wimps,” said one Journal reporter, who insisted on anonymity for fear of antagonizing his new bosses. “They aren’t as interested in the time-consuming, in-depth projects, either.”
And while the new boss has been a frequent presence at The Journal, he will soon be able to keep an even closer eye on his new prize; the company plans to move The Journal from its financial district offices downtown, to the News Corporation’s Midtown headquarters.
With revenue falling, Mr. Murdoch has not followed through on his early talk of expanding The Journal’s news staff. In fact, it recently laid off some journalists, but the reduction was minimal by recent industry standards.
“I have great faith that if we continue the way we are going, we may even get lucky and not have so much competition at the end of it all,” Mr. Murdoch said in a recent conference call with Wall Street analysts. “We are in good shape on the newspapers.”
While Mr. Murdoch’s personal attention has lately been on The Journal, the financial performance of the News Corporation’s other newspapers is undergoing stricter scrutiny these days.
For years, Mr. Murdoch has stomached tens of millions of dollars in annual losses at The New York Post, in exchange for the power the paper afforded him. But given the economic times and the shift of his attention to The Journal, there is a sense of urgency in the News Corporation executive suite about stemming The Post’s losses.
Executives briefed on the matter, who spoke anonymously to discuss private conversations, said Mr. Murdoch remained committed to the tabloid but is seeking ways to save money by combining back-office operations, purchasing, printing and delivery with The Journal and The Post’s rival, The Daily News. There have also been discussions with Newsday, the Long Island newspaper owned by Cablevision, about sharing certain costs with The Post.
And recently, the company said it would lay off 65 people at its British newspapers, which include The Times of London, The Sunday Times, The Sun and News of the World.
Some analysts and investors have suggested that the News Corporation separate its newspaper businesses from its other entities, like film and satellite television.
In a research report in late January, Rich Greenfield of Pali Research summed up one of the prevailing sentiments on the News Corporation, writing, “Previously we had focused on the fact that News Corporation’s so-called bad assets, including newspaper and TV stations, would become such a small part of the News Corporation story that they would no longer impact growth.”
He continued: “Our fear is that News Corp. is so committed to its existing businesses that it will be willing to sustain businesses that slip into negative profitability for years (similar to its approach to the N.Y. Post).”
The News Corporation still has significant strengths, including the Fox film studio, which appears poised for a better year than it had in 2008; cash reserves of $3.6 billion; and the Fox News Channel, whose revenue and profit are growing. The company’s operating income in the last quarter, $818 million on revenue of $7.9 billion, was down 42 percent from a year earlier, but hardly anemic.
It seems Mr. Murdoch’s greatest hope, when it comes to his newspapers, is to wait out the downturn and anticipate a future with many fewer papers to compete against.
“That’s in his blood,” Mr. Joyce, the Miller Tabak analyst, said of Mr. Murdoch’s devotion to newspapers. “That’s how News Corp. started 50 years ago.”
Murdoch Sorry for NY Post Cartoon Seen as Racist
New York Post Chairman Rupert Murdoch apologized Tuesday for a cartoon that critics said likened a violent chimpanzee shot dead by police to President Barack Obama.
In a statement published in the newspaper, Murdoch said he wanted to "personally apologize to any reader who felt offended, and even insulted." He said the Post will work to be more sensitive.
Murdoch said the cartoon was intended only to "mock a badly written piece of legislation."
The cartoon, which was published Wednesday, depicted the body of the bullet-riddled chimp Travis and two police officers. The caption said: "They'll have to find someone else to write the next stimulus bill."
The chimp was killed in Connecticut last week after mauling a woman.
The Post also apologized Thursday in an online editorial.
The Rev. Al Sharpton is urging the Federal Communications Commission to review policies allowing Post owner News Corp. to control multiple media outlets in the same market.
Associated Press Considers Locking Up Its Online Content
You can almost hear Reuters, CNN, UPI and a few others laugh maniacally over the news that the Associated Press is considering locking up its content. The Associated Press has a pretty long history of not having much of a grasp of how the economics of information works online, and this is the latest evidence. It wouldn't happen for a while -- because many online sites, including Google, have deals with the AP to present its content online for free. But if the AP suddenly forced all partners to lock up their content, a couple of things would happen pretty quickly: other sources, such as those listed above, or new startups, would quickly jump into the void, thrilled that their largest "competitor" dropped out of the space. Yes, the AP is slightly different from those other players, in that it's a collective of newspapers sharing content rather than a truly separate operation -- but functionally, they serve similar purposes. CNN, in particular, has made it clear that it would like to take on the AP with a wire service -- and having the AP lock up its content would make it much easier for them.
Also, I know we point this out every time some clueless news exec claims that users need to pay, but it's worth mentioning again: nowhere do they discuss why people should want to pay. Nowhere do they explain what extra value they're adding that will make people pay. Instead, they think that if they put up a paywall, people will magically pay -- even though the paywall itself is what takes away much of the value by making it harder for people to do what they want with the news: to spread it, to comment on it, to participate in the story. Until newspaper execs figure this out, they're only going to keep making things worse.
Thomson Reuters Profit Beats Forecasts, Sees 2009 Revenue Growth
Thomson Reuters Corp (TRIL.L) (TRI.TO) reported a stronger-than-expected quarterly profit and said it expected revenue to grow in 2009 despite job cuts and decreased spending among financial industry customers, sending its shares up about 6 percent.
The news and financial data publisher, which was formed by Thomson Corp's purchase of Reuters Group Plc last April, also said on Tuesday that it expected its underlying operating margin in 2009 to be comparable to 2008, supported by revenue growth and higher savings from integration.
"I think the good thing is that we're giving outlook at all. I've seen so many companies with supposedly decent visibility into their business this year pull back and say, 'Well it's too hard,'" Chief Executive Tom Glocer said in an interview.
Thomson Reuters reported fourth-quarter net income of $656 million, or 79 cents a share, compared with $432 million, or 67 cents a share, a year earlier.
Profit from ongoing businesses, excluding special items, was 57 cents per share, beating the average analyst forecast of 39 cents, according to Reuters Estimates.
Revenue in the company's closely watched markets division, which serves financial institutions, fell 2 percent to $1.9 billion. Overall revenue was flat at $3.4 billion.
"I think it's going to continue to do better than people expect," Glocer said, referring to the markets division.
"It is hard to see anything else outside the doom and gloom in the two financial and media capitals," he said. "It's going to be a tough year, but when you put it all together, we still think the company will be able to show growth."
The professional division, which sells databases and other deep information reservoirs to lawyers, accountants, scientists and the healthcare industry, reported revenue of $1.5 billion in the fourth quarter, up 3 percent. The rise came in part from online, software and services revenue growth of 10 percent.
The company also said its board had approved an increase in its dividend by 4 cents per share on an annualized basis. The quarterly dividend payable on March 26 is 28 cents per share.
Thomson Reuters also raised its forecast for annualized cost savings from the merger to $1 billion by the end of 2011, up from $750 million projected in May 2008.
The integration plan does not include any new rounds of layoffs, Glocer said.
Shares of the company were up 5.9 percent at 1,400 pence in London trading compared with their previous close of 1,322.
(Reporting by Robert MacMillan in New York and Georgina Prodhan in London, editing by Tiffany Wu and Ted Kerr)
Journal Register Seeks Bankruptcy Protection
Emily Chasan and Robert MacMillan
Journal Register Co sought Chapter 11 bankruptcy protection on Saturday, making it the latest U.S. newspaper company to buckle under deteriorating advertising revenue and debt that it cannot easily repay.
The company publishes 20 daily newspapers, including The New Haven Register and The Trentonian. It joins the ranks of the Minneapolis Star-Tribune, as well as Tribune Co, publisher of the Chicago Tribune and Los Angeles Times, and highlights the challenges U.S. newspapers face as advertisers flee their print editions and more people get their news for free online.
For years, Journal Register has been among the smallest of publicly traded U.S. newspaper publishers. Nevertheless, its filing will increase scrutiny on other U.S. newspaper publishers, including McClatchy Co and Lee Enterprises, which are trying to survive a severe ad downturn without running afoul of their creditors.
Journal Register has already agreed with key creditors on a pre-negotiated reorganization plan, and said it was planning to restructure its operations.
A pre-negotiated bankruptcy allows companies to move more quickly through the court process because they have agreed on major issues with groups of creditors. The company plans to solicit votes quickly from other creditors on its bankruptcy reorganization plan, according to court papers.
The Yardley, Pennsylvania-based company said advertising revenue had been driven lower by the housing downturn, declining automotive sales, the retail sector slowdown, a slow labor market that has hurt employment classifieds and a shift to online media, according to court papers filed in the U.S. Bankruptcy Court in Manhattan.
Journal Register's revenue declined by 2 percent in 2006, 8.5 percent in 2007 and was down 10 percent in the period from January 2008 through November 2008.
The company has about $692 million of debt, it said in court papers. It listed assets of about $596 million and liabilities of about $737 million in court papers.
The company has about 3,465 employees and also owns 159 non-daily newspapers, it said in court papers. Under its proposed reorganization plan, the company said it planned to cancel its common stock.
Journal Register is the leftover company from the former newspaper publishing empire owned by Ralph Ingersoll II in the 1980s. The company nearly went bankrupt late in that decade after bingeing on junk bonds to finance acquisitions and then being unable to pay its debt.
In the late 1980s, it embarked on an ill-fated effort to start a rival daily paper to the St. Louis Post-Dispatch, which was then owned by Pulitzer Inc and now is part of Lee. The paper folded in less than a year.
The company had begun expanding earlier in the decade under late Chief Executive Robert Jelenic, who kept a close watch over expenses. According to some media reports, Jelenic personally took mileage readings on company cars to make sure his employees were not claiming too much money in expenses.
In 2004, the company spent $415 million to buy several papers in Michigan, only to see unemployment rise and the local economy fall apart. Many analysts in recent years said that would prove to be a huge mistake.
Soon afterward, newspaper advertising revenue took a nosedive, leaving more publishers dangerously close to not being able to honor their debt terms. Journal Register eventually was delisted and began restructuring proceedings, including laying off employees and selling off papers.
The company has retained law firm Willkie, Farr & Gallagher LLP as legal counsel, Lazard Freres & Co as investment banker and Conway, Del Genio, Gries & Co as restructuring adviser, it said in court papers.
Lazard also is advising Tribune Co, which filed for bankruptcy after being unable to meet the terms of some $13 billion in debt it took on in a buyout led by real estate magnate Sam Zell.
The case is In re: Journal Register Co., U.S. Bankruptcy Court, Southern District of New York, No. 09-10769.
(Reporting by Emily Chasan and Robert MacMillan; Editing by Peter Cooney)
Hearst Seeks Changes at Chronicle
The Hearst Corp. today announced an effort to reverse the deepening operating losses of its San Francisco Chronicle by seeking near-term cost savings that would include "significant" cuts to both union and non-union staff.
In a posted statement, Hearst said if the savings cannot be accomplished "quickly" the company will seek a buyer, and if none comes forward, it will close the Chronicle. The Chronicle lost more than $50 million in 2008 and is on a pace to lose more than that this year, Hearst said.
Frank J. Vega, chairman and publisher of the Chronicle, said, "It's just a fact of life that we need to live within our means as a newspaper - and we have not for years."
Vega said plans remain on track for the June 29 transition to new presses owned and operated by Canadian-based Transcontinental Inc., which will give the Chronicle industry-leading color reproduction.
If the reductions can be accomplished, Vega said, "We are optimistic that we can emerge from this tough cycle with a healthy and vibrant Chronicle."
The company did not specify the size of the staff reductions or the nature of the other cost-savings measures it has in mind. The company said it will immediately seek discussions with the Northern California Media Workers Guild, Local 39521, and the International Brotherhood of Teamsters, Local 853, which represent the majority of workers at the Chronicle.
"Because of the sea change newspapers everywhere are undergoing and these dire economic times, it is essential that our management and the local union leadership work together to implement the changes necessary to bring the cost of producing the Chronicle into line with available revenue," Frank A. Bennack, Jr., Hearst vice chairman and chief executive, and Steven R. Swartz, president of Hearst Newspapers, said in a joint statement.
Hearst purchased the Chronicle in 2000, but soon afterward felt the impact of an economic downturn in the dot-com sector as well as the loss of classified advertising to Craigslist and other online sites. The problems have been exacerbated by the current recession.
In the news release, the privately-held, New York-based company said that the Chronicle has had "major losses" since 2001.
"Given the losses the Chronicle continues to sustain, the time to implement these changes cannot be long. These changes are designed to give the Chronicle the best possible chance to survive this economic downturn and continue to serve the people of the Bay Area with distinction, as it has since 1865," Bennack and Swartz said in their statement.
"Survival is the outcome we all want to achieve," they added. "But without specific changes we are seeking across the entire Chronicle organization, we will have no choice but to quickly seek a buyer for the Chronicle, and, should a buyer not be found, to shut down the newspaper."
The Hearst statement further said that cost reductions are part of a broader effort to restore the Chronicle to financial health. At the beginning of the year, the Chronicle raised its prices for home delivery and single-copy purchases.
Hearst owns 15 other newspapers, including the Houston Chronicle, the San Antonio Express-News and the Times Union in Albany, New York. Hearst announced Jan. 9 that it will close the Seattle Post-Intelligencer in March if a buyer is not found; that paper has lost money since 2000.
Vega said readers and advertisers will see no difference in the Chronicle during the discussions with the unions.
"Even with the reduction in workforce, our goal will be to retain our essential and well-read content," Vega said. "We will continue to produce the very best newspaper for our readers and preserve one of San Francisco's oldest and most important institutions."
The Chronicle, the Bay Area's largest and oldest newspaper, is read by more than 1.6 million people weekly. It also operates SFGate, among the nation's 10 largest news Web sites. SFGate depends on the Chronicle's print news staff for much of its content.
The San Francisco Bay Area is home to 21 daily newspapers covering an 11-county area.
The Chronicle's news staff of about 275, even after a series of reductions in recent years, is the largest of any newspaper in the Bay Area.
"While the reductions are an unfortunate sign of the times, the news staff has always been resilient in San Francisco,'' said Ward Bushee, editor and executive vice president. "We remain fully dedicated toward serving our readers with an outstanding newspaper. We are playing to win."
The area's other leading newspapers - the Bay Area News Group papers that include the San Jose Mercury News, Contra Costa Times and Oakland Tribune - also have seen revenues decline sharply and cut staff.
These problems are a reflection of those faced by newspapers across America as they experience fundamental changes in their business model brought on by rapid growth in readership on free internet sites, a decline in paid circulation, the erosion of advertising and rising costs.
Advertising traditionally has offset the cost of producing and delivering a newspaper, which allowed publishers to charge readers substantially less than the actual cost of doing business. The loss of advertising has undermined that pricing model.
In the case of the Chronicle, Vega said the expense of producing and delivering the newspaper to a seven-day subscriber is more than double the $7.75 weekly cost to subscribe.
At the beginning of the year, in an effort to evolve its business model and offset its substantial losses, the Chronicle raised its subscription and newsstand prices, taking a cue from European papers that charge far more than their American counterparts.
"We know that people in this community care deeply about the Chronicle," Vega said. "In today's world, the Chronicle is still very inexpensive. This is a critical time and we deeply hope our readers will stick with us."
The challenge the Chronicle faces, Vega said, is to bring its revenues from advertising and circulation into balance with its expenses so that the newspaper can at least break even financially.
"We are asking our unions to work with us as partners in making these difficult cost-cutting decisions and reduction in force to ensure the newspaper survives," Vega said.
Part of Denver’s Past, The Rocky Says Goodbye
This was a wild city once, a frontier of the Western imagination full of brawling, dueling, nakedly self-interested fortune-seekers and empire-builders — and The Rocky Mountain News carried their torch.
The rise of Colorado’s capital city and the rise of The Rocky, as it affectionately or scornfully became known, were intertwined from the beginning. The city was founded in late 1858, The Rocky the following April, as gold strikes were making the place a destination.
“Without The Rocky, Denver would not be the city it is today,” said Tom Noel, a professor of history at the University of Colorado in Denver.
On Friday, ashes were mostly all that was left of that legacy as copies of the paper’s final issue, published Friday morning, lay strewn about its newsroom and upon the consciousness of the city. The Rocky’s owner, the E. W. Scripps Company, announced Thursday that the paper, which had been up for sale, had attracted no credible buyers and that its losses — $16 million in 2008 — could not be sustained.
“The shock has passed, the anger is below the surface, and there’s more gallows humor today,” said David Milstead, a reporter in the business section who was packing his desk and responding to e-mail condolences. He said he had hoped the paper would make it to Saturday, his 37th birthday, but it was not to be.
In many ways, Rocky stories and Denver stories are synonymous, partly because the paper’s unabashed mission, especially in its early days, was to help Denver grow and prosper, sometimes even at the expense of the facts.
The first owner and publisher, William Byers, who founded the paper on the second floor of a saloon, decided early on, for example, that Eastern moneyed investors would want Denver to have good steamboat access — a profoundly unrealistic prospect here on the High Plains. So he simply invented it. Shipping news, complete with the made-up names of arriving and departing vessels, heading out on the South Platte River, bound east with made-up loads of freight, became a fictional staple.
When the Union Pacific Railroad bypassed Denver in the 1860s, taking the coast-to-coast route through Cheyenne, Wyo., 100 miles north, The Rocky led the drive to build a rail spur line connecting Denver to the transcontinental system.
The raw shout of The Rocky also meant never avoiding a scoop, even sometimes at the expense of the paper’s dignity and reputation. In April 1876, for example, a woman named Hattie Sancomb, a longtime mistress of the paper’s editor, angrily faced down her lover in the street with a pearl-handled revolver, aiming for revenge.
She missed her shot, but The Rocky got the story. In the next edition, The Rocky published Ms. Sancomb’s poison-pen love letters, under the headline, “Sample Sentences From Spicy Correspondence.” (The front page that day also carried a headline to make a tabloid of any era proud: “A woman swallows a snake!”)
That spirit of enterprise, sometimes in good taste, other times not, continued unabated, right down to last fall, when the paper outraged many people in Denver by covering the funeral of a 3-year-old boy with blog posts from the graveside.
“Coffin lowered into ground,” read one dispatch.
Some readers said Friday that they thought the paper had lost its way as it faced the desperate struggle, common to newspapers everywhere these days, of keeping readers and advertisers.
“A lot of people are very upset, but I saw this coming,” said Larry Britton, a 61-year-old electrician who grew up reading The Rocky but found it less relevant and distinctive in recent years. “You could swap writers around and not see the difference,” Mr. Britton said.
The Rocky, which went from a full-size broadsheet to tabloid size in the 1940s, was never a tabloid in the mold of, say, papers in New York or Chicago, where fierce competition for readers of the penny press drove a frenzy of outlandish stunts and chicanery.
Until 1892 and the founding of The Denver Post, The Rocky enjoyed more than three decades of dominance in the Denver market. But the competition between the papers immediately became fierce, sometimes to the point of physical blows. The Post’s owner, in the midst of a heated newspaper war in 1907 — when The Rocky called The Post’s editor a blackmailer — assaulted The Rocky’s owner in the street.
The Post, which had shared business costs with The Rocky in recent years under a joint operating agreement, survives The Rocky’s passing.
But The Post’s publisher and chairman, William Dean Singleton, said Thursday at a news conference that it was not a monopoly he was looking forward to.
“The first day I wake up not reading The Rocky will be a sad day for me,” he said.
Dan Frosch contributed reporting.
The Kindle: Good Before, Better Now
In the high-tech industry, you live for the day when your product name becomes a verb. “I Googled him.” “She’s been Photoshopped.”
Amazon, however, is hoping that its product name, a verb, becomes a noun. “Have you bought the new Kindle?”
The Kindle is the most successful electronic book-reading tablet so far, but that’s not saying much; Silicon Valley is littered with the corpses of e-book reader projects.
A couple of factors made the Kindle a modest hit when it made its debut in November 2007. First, it incorporated a screen made by E Ink that looks amazingly close to ink on paper.
Unlike a laptop or an iPhone, the screen is not illuminated, so there’s no glare, no eyestrain — and no battery consumption. You use power only when you actually turn the page, causing millions of black particles to realign. The rest of the time, the ink pattern remains on the screen without power. You can set it on your bedside table without worrying about turning it off.
The big Kindle breakthrough was its wireless connection. Thanks to Sprint’s cellular Internet service, the Kindle is always online: indoors, outdoors, miles from the nearest Wi-Fi hot spot.
This sort of service costs $60 a month for laptops, but Amazon pays the Kindle’s wireless bill, in hopes that you’ll buy e-books spontaneously. “Have you read ‘The Audacity of Hope’?” someone might ask you. “Why, no, but I’ll download it now!” And 45 seconds later, you’ve got the whole book.
It’s all a thousand times more convenient and more exciting than loading books from a PC with a cable, as you must with Sony’s Reader, the Kindle’s archrival. As a bonus, the Kindle includes a simple Web browser, great for quick wireless Wikipedia checks and blog reading.
Starting today, there’s a new Kindle. Amazon calls it the Kindle 2, but Kindle 1.1 would be more like it; the changes are fairly minor. Fortunately, they’re exactly what was needed to turn a very good reader into an even better one.
The page-turn buttons are now much smaller — and the clicky part is on the inward edge of each button — so you no longer set off page turns just by picking the thing up.
The new, square plastic joystick is homely and stiff, but it gets the job done. The back is now brushed aluminum. Turning pages on the Kindle is a tad faster now. The screen shows 16 shades of gray now, not four, so photos look sharper; you can also zoom in and rotate them.
Taking a page from Apple’s playbook, the new Kindle is a sleeker, more sealed-in sort of machine. You can no longer expand storage with a memory card; then again, the built-in memory holds seven times as much — 1,500 books. Think that’ll tide you through the weekend?
The battery is also sealed inside, à la iPhone. Amazon says, however, that it lasts 25 percent longer per charge (four days of reading with wireless turned on, or two weeks if it’s off). If that battery ever needs replacing, Amazon has to do it ($60).
The Kindle will also read aloud to you through its tiny stereo speakers or headphone jack, and even turn the pages as it goes.
But if you have visions dancing in your head of turning every book into an audiobook, forget it. The Kindle’s male and female voices are very good, but nobody will mistake them for the voices of humans, let alone the professionals who record audiobooks. Kindle voices have some peculiar inflections and pronunciations — they sound oddly Norwegian, sometimes — and, of course, they’re incapable of expressing emotion. They read Hemingway the same way they read Stephen Colbert.
As before, your books, annotations and clippings are auto-backed up on Amazon.com. But now, if you buy multiple Kindles (dream on, Amazon), all of them remember where you stopped reading in each book. (This feature will be more useful if, as Amazon has hinted, you’ll soon be able to read your e-books on other machines, like your laptop or iPhone. And why not? The Kindle is just the razor. The books are the blades — ka-ching!)
The Kindle catalog is bigger, too; now 240,000 books are available. New York Times bestsellers are $10 each, which is less than the hardcover editions. Older books run $3 to $6.
That said, Amazon is still a long way from its “any book, any time” goal. You don’t have to look far to find important titles still among the missing; they include all Harry Potter books; “An Inconvenient Truth”; “The English Patient”; and “The Associate” (the No. 1 fiction best seller) or anything else by John Grisham.
You can have any of 30 newspapers, including this one, wirelessly beamed to your Kindle each morning ($10 to $14 a month) — minus ads, comics and crosswords. Magazines (22 so far, $1.50 to $3 monthly) and blogs ($2 a month) can arrive automatically, too.
Finally, you can send Word, text, PDF and JPEG documents to the Kindle using its private e-mail address — a huge blessing to publishers, lawyers, academics, script readers and so on — for 10 cents each. Or transfer them over a USB cable for nothing.
So, for the thousandth time: is this the end of the printed book?
Don’t be silly.
The Kindle has the usual list of e-book perks: dictionary, text search, bookmarks, clippings, MP3 music playback and six type sizes (baby boomers, arise). No trees die to furnish paper for Kindle books, either.
But as traditionalists always point out, an e-book reader is a delicate piece of electronics. It can be lost, dropped or fried in the tub. You’d have to buy an awful lot of $10 best sellers to recoup the purchase price. If Amazon goes under or abandons the Kindle, you lose your entire library. And you can’t pass on or sell an e-book after you’ve read it.
Another group of naysayers claims that the Kindle has missed its window. E-book programs are thriving on the far more portable (and far more popular) iPhones and iPod Touches. Surely smartphones, which already serve as cameras, calculators and Web browsers, will become the dominant e-book readers as well.
The point everyone is missing is that in Technoland, nothing ever replaces anything. E-book readers won’t replace books. The iPhone won’t replace e-book readers. Everything just splinters. They will all thrive, serving their respective audiences.
With those caveats, the new Kindle edges even closer to the ideal of an e-book reader. The reading experience is immersive, natural and pleasant; the book catalog, while not yet complete, is growing and delivered instantaneously; and apart from the clicky keyboard (an unnecessary appendage 99.9 percent of the time), the design feels right.
If the Kindle’s popularity keeps growing, then it may be remembered as the spark that finally ignites mainstream e-books. Someday, other gadgets may even be described as “Kindleizing” their fields. In that case, “Kindle” will be the first product name that ever went from verb to noun — and back to verb again.
MORE READERS A roundup of devices and software for reading books electronically will appear in the Circuits section on Thursday.
Kindling a Revolution: E Ink’s Russ Wilcox on E-Paper, Amazon, and the Future of Publishing
Almost as soon as Amazon released the Kindle e-book reader in November 2007, I settled in to wait for the Kindle 2. Like many other observers, I thought Amazon had made a good first stab at building a usable e-book device, but that it needed a sleeker profile, better ergonomics, new features such as text-to-speech capability, and a lower price point. Well, 15 months later, Amazon has thoughtfully delivered on most of my requests. From all accounts, the Kindle 2, which was unveiled on February 9 and began arriving on customers’ doorsteps this week, is such a giant improvement that it makes the first Kindle look like a clunky lab prototype. (Now if they’d only consider lowering the $359 price tag.)
But there’s someone who has been waiting a lot longer than I have for the Kindle 2, and for the huge buzz it’s creating around e-reading—about 11 years longer, in fact. It’s Russ Wilcox, co-founder and CEO of E Ink, the Cambridge, MA company behind the low-power, high-contrast “electronic paper” screen that is the Kindle’s main selling point. I had a chance to meet with Wilcox on Tuesday—and to play briefly with a Kindle 2, which had just arrived that morning. My first question was about whether any of E Ink’s founders thought it would take so much time, and so much money, to bring e-paper to the mass market.
After all, E Ink was launched in 1997, and has had to raise more than $150 million—mostly from big industry players like Intel, Motorola, Philips, Hearst Interactive Media, and Japan’s TOPPAN Printing—to transform e-paper from a drawing-board concept into a manufacturable product. Conceived at the MIT Media Lab, E Ink’s material consists of a layer of tiny fluid-filled microcapsules that contain positively charged white particles and negatively charged black particles. Applying a voltage across the microcapsules pushes the white particles to the top and pulls the black particles to the bottom, forming white pixels that are clearly visible without the backlighting needed in traditional liquid-crystal displays. Applying the opposite voltage across the microcapsules creates black pixels. The material is “bistable,” meaning the particles stay in place after a voltage is applied—which is why the batteries in the Kindle, the Sony PRS-700, and other devices with E Ink screens last so long.
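The particle behavior described above can be sketched as a toy model. This is purely illustrative and not E Ink's actual drive scheme: a pixel whose visible color follows the polarity of an applied voltage, and which holds its state when the voltage is removed (the "bistable" property that makes the batteries last so long).

```python
# Toy model of a bistable E Ink pixel: positively charged white
# particles and negatively charged black particles move under an
# applied field, and the state persists at zero voltage.

class EInkPixel:
    def __init__(self):
        self.state = "black"  # which particles currently face the viewer

    def apply_voltage(self, polarity):
        """polarity +1 drives white particles to the top (white pixel),
        polarity -1 drives black particles to the top (black pixel),
        polarity 0 leaves the pixel unchanged (bistability)."""
        if polarity > 0:
            self.state = "white"
        elif polarity < 0:
            self.state = "black"
        # polarity == 0: no power needed to hold the image

pixel = EInkPixel()
pixel.apply_voltage(+1)
assert pixel.state == "white"
pixel.apply_voltage(0)   # power off: the image persists
assert pixel.state == "white"
```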
It sounds simple enough, but Wilcox says the company spent six years getting the technology to the point where Sony could use it in the world’s first e-paper-based e-book reader, the Librié, introduced in 2004. And it’s taken another five years for Sony, Amazon, and their competitors to create e-publishing ecosystems that consumers are interested in inhabiting (meaning not just the devices, but the content available for them and the mechanisms for purchasing, storing, searching, and annotating that content).
So while E Ink has been happy to leave the media spotlight to Amazon this month, the Kindle 2 and the near-iPhone-scale excitement that has greeted it represent an important coming-of-age for the 100-employee company. It’s perhaps the first moment when the founders’ vision for a world of publishing sans paper has seemed feasible. E Ink continues to explore applications for its e-paper displays outside the realm of publishing—Wilcox and his team showed me examples like a remote key fob for high-end automobiles, a credit-card-sized one-time password device for logging into a secure computer network, and a decorative cell phone cover—but the company’s core mission, Wilcox told me, is to “provide the world’s best digital reading experience.” That means creating better displays for handheld e-book devices, but it also means designing larger screens—and eventually, color versions—that would be better for magazine-style or newspaper-style content.
There’s still a lot of uncertainty over the prospects for such technologies. Many potential Kindle buyers (myself included) are balking at the device’s steep price tag, and if Amazon comes out with a rumored tablet-sized version aimed at the college textbook market, it’s sure to be even more expensive. (When I asked marketing vice president Sriram Peruvemba whether E Ink is working with Amazon on such a product, his answer was “No comment.”)
But over the long term, Wilcox expects that simple economics will drive more and more print-media companies toward electronic platforms, and that E Ink will be there to scoop up their business. When Silicon Alley Insider calculated recently that the New York Times could save more than $300 million every year if it stopped printing and delivering its newspaper and simply gave every subscriber a free Kindle, it was with tongue firmly in cheek. But for Wilcox, such suggestions are deadly serious. “What we’ve got here is a technology that could be saving the [global print media] $80 billion a year,” he insists.
Below are some of the other interesting outtakes from my conversation with Wilcox.
On the early days of E Ink, and the importance of being naive:
I co-founded E Ink with three fellows out of MIT and with Jerry Rubin, the founder of Lexis-Nexis. I wrote the business plan in my study, and got copies bound at Staples, and mailed it out through Kinko’s, and all that. I did all the things you should apocryphally do when you’re an entrepreneur. At the time, we had no idea it was going to take so long. It may be that naivete is your friend when you’re starting out in such a daunting venture. We understood that it was probably going to take two years to make something that people wanted to buy. And in terms of making something that looked good, we did that. But what we didn’t see in the beginning, and learned over time, was that it would take another two years to go from something that looked good to something that would look good for many years under all operating conditions—in other words, to achieve stability and robustness. And then it would take another two years to get something that you could reproducibly manufacture, at an affordable cost point.
On finding a sustainable business model:
We went through the bubble bursting like everyone else. We had several different applications on the table. And we had to figure out how we were going to have a big impact on the world with a very small amount of cash. We came up with a grand vision of doing “radio paper”—a complete device and a service. [Essentially, the Kindle, but about eight years before it was feasible---Eds.] But it became clear that, even after spending $100 million, we still had work to do just on the display technology. We were diluting our efforts too much. So we turned to a business model where we would make the ink—a film of microparticles that would be the front part of a display—and we would sell that to the world’s display companies, who could drop it in on top of the same backplanes they use for LCD screens, turning their LCDs into e-paper. And more or less, that vision has held. We sell a component that allows LCD companies to become e-paper companies.
On the first Japanese e-book using E Ink e-paper, and the birth of the Kindle:
In 2004 Sony launched in Japan with the Librié. And it didn’t really work very well in Japan. Critics loved the hardware, but there were only 1,000 books available, and that does not make a successful publishing market. And it turns out that e-books are a tough sell in Japan because there is a thriving used bookstore market. People don’t have bookshelf space in their homes to store a lifetime of books, so they have this well-developed practice of returning books to used bookstores, so you can get any used book you want for a dollar. At the same time, people were getting used to standing on trains and reading on their little cell-phone displays. So between those two things, it was very hard to launch the Librié.
But Sony had the vision that if they added a bunch more content and brought it out in the U.S., they would have a product. And at the same time Amazon took note, and said, ‘Aha, the time might finally be right for e-books, if we were to tackle this as a service and sell the content.’ So the Sony PRS-500 launched in 2006 and Amazon came out with the wireless Kindle in 2007, and those guys have each progressively improved their products. From a business point of view, there were some tough times along the way. But since 2004, when we first saw the Librié come out in Japan, our revenues have doubled every year, because we have just been getting more and more devices out there.
On the latest technological improvements in the E Ink system:
For the upgrade from the Sony Librié to the PRS-500 we upgraded the ink. And for the PRS-700 and the Kindle 2 we have upgraded the electronics.
Making the ink better is all about the quality of the ink coating—the whiteness of the white and the darkness of the black in terms of pigments. It’s also about selecting the right ingredients so the ink can move quickly and hold its image accurately. You also want it to work well in the cold and withstand a certain amount of pressure, and you want to be able to manufacture it at a reasonable cost, reproducibly, with upstream suppliers who are reliable. It’s a very complex system design that combines chemistry, material science, electronics, optics, and mechanical engineering. It’s not trivial to put together, which is why it’s taken 12 years and $150 million.
On how the “Broadsheet” chip has improved the interfaces of the latest Sony and Kindle e-book devices:
One of the big advances for this generation is that we’ve developed a new method for driving the electronics—essentially, a new graphics card, the Broadsheet. The Sony PRS-700 and the Kindle 2 are the only two products that have it. It gives you the ability to do stuff like scrolling around more smoothly, and writing with a pen, and typing up to 200 words a minute, and showing enhanced grayscale images.
Broadsheet has the ability to, in parallel, update 16 different regions of the screen. Before, to get a dark area of the display to turn white, you would turn on your voltage for a period of time, maybe half a second. (If you want a shade of gray, you just turn it on for a shorter period.) During that time, the display would scan rows and columns of pixels serially, from top to bottom. [Before Broadsheet, in other words, changing any portion of the picture---scrolling through highlighted items on a pop-up menu, for example---required redrawing the entire screen.---Eds.] But the Broadsheet allows you to have 16 regions that you can define on the fly, and start the switching at different times. So instead of waiting for one scan to finish, you can start the next one, and stagger them, which gives you the impression of movement. The bottom line is that when you do this, plus some clever programming, you end up with animation and a much more interactive system, even though the native ability of the ink has not changed.
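The staggered-region idea Wilcox describes can be sketched in a few lines. The region names, the scheduler structure, and the API below are invented for illustration; only the half-second full switch time and the 16-region limit come from the interview.

```python
# Illustrative sketch of staggered region updates: instead of one
# serial top-to-bottom scan per change, a controller (like the
# Broadsheet chip) tracks up to 16 regions that can begin switching
# at different times, overlapping their scans.

SCAN_TIME = 0.5   # seconds for a full black-to-white switch (from the text)
MAX_REGIONS = 16

class RegionScheduler:
    def __init__(self):
        self.active = {}  # region id -> time its update started

    def start_update(self, region_id, now):
        if len(self.active) >= MAX_REGIONS:
            raise RuntimeError("all 16 region slots busy")
        self.active[region_id] = now   # no need to wait for other regions

    def finished(self, now):
        """Return regions whose scans have completed by time `now`."""
        done = [r for r, t0 in self.active.items() if now - t0 >= SCAN_TIME]
        for r in done:
            del self.active[r]
        return done

sched = RegionScheduler()
sched.start_update("menu_item_1", now=0.0)
sched.start_update("menu_item_2", now=0.1)   # staggered start
assert sched.finished(now=0.5) == ["menu_item_1"]
assert sched.finished(now=0.6) == ["menu_item_2"]
```

Overlapping the scans this way is what gives the impression of movement, even though each region still takes the same half second to switch.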
The result is that for both the PRS-700 and the Kindle 2 we have improved the user experience with faster navigation. You can more easily switch between titles. You can more easily navigate within a book to find the place you’re reading. You can more easily type the names of books you want, and make notes to yourself. Before, it was painfully slow to add annotations; now you can annotate as quickly as you can type. The other side of it is that you can now use the screen as an input device, for typing, touching, or pen writing. So you will start to see products on the market that let you write on the e-paper, just like a diary.
On the cost of the Kindle’s 6-inch e-paper screen:
We don’t disclose that. Our customers wouldn’t be happy with us. But if you want to buy a development kit and design your own device, it’s $3,000. With the development kit, you get everything that’s inside an e-book, including a little chip that runs Linux and a bunch of open source drivers, a touch screen with pen input, and the Broadsheet chip. People are doing all sorts of fancy stuff with that. There’s a fellow in Malaysia who has ported the Linux X Windows system to the device, and there are some folks on the West Coast who have ported Android to it. So part of what we’re doing is just making this open and trying to let lots of people find interesting uses for it.
On the fact that electronic paper is still far more expensive to manufacture than LCDs:
The beauty of E Ink technology is that by and large, what we’re doing fits in with the existing LCD industry. You do have the cost hurdle that you need to worry about, because E Ink has smaller production runs, against the billions of LCDs shipped every year. But we use a lot of the same components as LCDs, so as their cost comes down, we come down too. In the long term, you will see E Ink being similar to LCDs in price. And in the very long term, it should be cheaper than LCDs, because what you can do with E Ink that you can’t do with LCDs is manufacture them using a roll-to-roll process. If you ever want to make a billion of anything cheaply, you print it. We announced last year that we are working with Hewlett-Packard. They have set up a process to develop roll-to-roll, imprint lithography printing of active-matrix panels. So they can print a backplane, and our stuff comes in on top. That’s five, seven, maybe 10 years away. So it’s clearly not tomorrow. But in the long term, E Ink should be very competitive on price.
On what’s coming from E Ink in the short term:
What you’ll see next is a great range of screen sizes. So far the industry has been using the 6-inch size, which has helped to drive down the cost for everybody, by consolidating on one manufacturing process. But we are starting to introduce displays that are in many different sizes. And you will see flexible displays going to market, at small volumes this year, but 2010 will be a big year for flexible displays. And then at the end of 2010, you will start to see improvements in the ink. We will have a whiter white and a blacker black, and we will start to experiment with color. You will probably see 2011 be the year of color.
All of those things will progressively broaden and deepen the applications. As you have flexible displays, you can do big displays and something that is much more like a newspaper experience, or in color so that it’s much more like a magazine. So we’ve taken on books, and we will extend to other types of formats over a relatively short period of time. There are a lot of mobile devices that could use a low-power, thin, plastic display, so you will see us in other types of devices as well. But our key focus and mission is to provide the world’s best digital reading experience.
On whether a consumer electronics company should be happy if its device works so well that it becomes “invisible” in the hands of the user [as Amazon executive Ian Freed told CNET in a recent interview]:
I think it’s a good thing. That’s what you want out of a book—you want to be projected into the author’s mind. That’s all about providing a great reading experience. So we take it as a compliment when you lose yourself in a book. Another kind of goodness is that the display shouldn’t break, and that it should be flexible, and that you should be able to read for a long time in your alternate reality without having to recharge. In that sense, our product is very visible, and we’re lucky our display is the face of the Sony and Amazon products.
On selling the same screen technology in Sony’s devices to Amazon, and then to other e-book makers:
We’re in a situation analogous to NutraSweet enabling the diet cola industry. How do you sell NutraSweet to Coke when you already have Pepsi as a customer? The answer is, “Very carefully.” We keep as neutral as possible. Our goal is to offer a platform that everybody can innovate on. And by and large, people are making very different product decisions and exploring the boundaries of what’s possible. No two companies have made the same device, they each have their pros and cons, and are good for some people and not for others.
On how e-paper can save the book industry:
Worldwide, the book industry is an $80 billion industry. If, by distributing electronically, they could save 30 percent on their costs, that would add $25 billion a year to their profitability. The newspaper industry is twice as large, and could probably save 50 percent. What we’ve got here is a technology that could be saving the world $80 billion a year. So we take the long view. This is a business problem that you could drive a truck through. So what we need to do is simply be a good supplier, provide a platform upon which others can participate, and provide an ecosystem where lots of companies want to gather.
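A quick back-of-the-envelope check of the figures quoted above. The 30 percent and 50 percent savings rates are Wilcox's own estimates, and rounding accounts for the small gap between the computed book figure and the quoted $25 billion:

```python
# Wilcox's savings arithmetic, taken directly from the quoted figures.
books = 80e9                  # worldwide book industry, $/year
newspapers = 2 * books        # "twice as large"

book_savings = books * 0.30         # 80B * 30% = $24B, quoted as ~$25B
news_savings = newspapers * 0.50    # 160B * 50% = $80B

print(f"books: ${book_savings/1e9:.0f}B, newspapers: ${news_savings/1e9:.0f}B")
```

The newspaper figure alone matches the "$80 billion a year" Wilcox cites for print media as a whole.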
On the future of newspapers:
The next big wave after e-books will be e-newspapers, enabled by the flexible screens in larger sizes. Then there will be a second wave of e-newspapers enabled by color. The benefit of that is that color enables advertising. The majority of print media is heavily subsidized by advertising, including almost all magazines and newspapers, so e-paper can’t really get to where it’s going until it supports advertising. Once that happens, you’ll see whole new business models emerging. We are still in the second inning of the ball game.
Why Are Book Publishers Making The Same Mistake The Record Labels Made With Apple?
Back in 2005, we noted that Apple's dominance over the online music space, which upset the record labels tremendously, was actually the record labels' own fault for demanding DRM. That single demand created massive lock-in and network effects that allowed Apple to completely dominate the market. If the record labels had, instead, pushed for an open solution, then anyone else could have built stores/players to work as well, and it could have minimized Apple's ability to control the market. Yes, everyone is now opening up (including Apple), but it took a long time, and Apple had already established its dominant position.
So why are book publishers doing the same thing?
Farhad Manjoo has an interesting article in Slate where he notes that publishers are worried about Amazon turning into "the Apple of the book industry," yet at the same time, they're the ones who are pushing for DRM and limitations that will effectively lock users in to Amazon's ebook platform for a long time. If the publishers had insisted on more open solutions, then a real competitive market could develop. That would be better for everyone. As great as the new Kindle is, it's still rather expensive. Allowing others to enter the market would lead to greater innovation -- making it easier for more people to get into the ebook reader market, and opening up plenty of new opportunities for publishers. But the dumb and pointless infatuation with "DRM" and "protecting" works will basically hand the market over to Amazon for many years, and get many folks locked in to Amazon's Kindle platform, even when more open solutions finally start to become popular.
Amazon Backs off Text-to-Speech Feature in Kindle
Amazon announced today it will let publishers decide whether they want the new Kindle e-book device to read their books aloud.
The text-to-speech feature allows Kindle owners to have books read to them in a male or female computerized voice. The president of the Authors Guild, Roy Blount Jr., recently contributed an essay to the editorial page of The New York Times laying out the guild’s objections to the feature, which he said undermined the market for the professional audio books that are sold separately.
Amazon maintains that the feature is legal, and that it would in fact increase the market for audio books.
But it said “we strongly believe many rights holders will be more comfortable with the text-to-speech feature if they are in the driver’s seat.”
Here is the full text of Amazon’s statement:
[quote]Kindle 2’s experimental text-to-speech feature is legal: no copy is made, no derivative work is created, and no performance is being given. Furthermore, we ourselves are a major participant in the professionally narrated audiobooks business through our subsidiaries Audible and Brilliance. We believe text-to-speech will introduce new customers to the convenience of listening to books and thereby grow the professionally narrated audiobooks business.
Nevertheless, we strongly believe many rights-holders will be more comfortable with the text-to-speech feature if they are in the driver’s seat.
Therefore, we are modifying our systems so that rightsholders can decide on a title by title basis whether they want text-to-speech enabled or disabled for any particular title. We have already begun to work on the technical changes required to give authors and publishers that choice. With this new level of control, publishers and authors will be able to decide for themselves whether it is in their commercial interests to leave text-to-speech enabled. We believe many will decide that it is.
Customers tell us that with Kindle, they read more, and buy more books. We are passionate about bringing the benefits of modern technology to long-form reading.[/quote]
Google Puts Small Ads on Pages of News Site
Miguel Helft and Brian Stelter
Google began running small text ads on the pages of its Google News service this week, reviving a debate between the company and some struggling newspaper publishers, who have seen their businesses devastated by the shift of advertising to the Internet.
For more than six years, Google refrained from placing ads on Google News, in part to blunt criticism from newspaper publishers who were already unnerved that Google was building a competing news site using headlines and snippets of newspaper articles.
The company’s chief executive, Eric E. Schmidt, has sought to assure newspaper publishers that Google was a friend, not a foe, whose own business depended on a thriving marketplace of newspapers and other content creators. Brian Tierney, chief executive of Philadelphia Media Holdings, which owns The Philadelphia Inquirer and The Philadelphia Daily News, said the new ads contributed to his skepticism about Google’s intentions.
“When Eric Schmidt says he worries about the newspaper industry, it’s crocodile tears,” Mr. Tierney said.
Google, whose own growth has slowed sharply in recent months, said its approach had not changed. “Eric has said many times that we strongly support journalism,” said Gabriel Stricker, a Google spokesman. “We’ve got teams of people who are working with hundreds of publishers to find new and creative ways to help them make money from compelling online content.”
Mr. Stricker said Google had decided to place ads on Google News because it had devised an approach that could deliver ads that were contextually relevant.
Google News automatically collects headlines and article snippets from more than 4,500 news sites and sends users who click on those excerpts to the sites where the articles originally appeared. The service drives significant traffic to many sites. As a result, many publishers have come to accept Google News.
“The Internet world is a very competitive world,” said William Dean Singleton, the chief executive of MediaNews Group, which owns 54 daily newspapers including The San Jose Mercury News and The Denver Post. “We don’t have to let them take our content. We let them do so because it drives traffic.”
Representatives of The Tribune Company; A. H. Belo Company, publisher of The Dallas Morning News; and The New York Times Company declined to comment.
Under pressure, and sometimes in response to lawsuits, Google has agreed to license content for its service from some news outlets, including The Associated Press and Agence France-Presse. Those agreements generally allow Google to use the content in different ways, including keeping articles on its own site. Google has maintained that its use of headlines and snippets was permissible under “fair use” provisions of copyright law.
Reuters, which does not have a licensing arrangement with Google, said it would be watching the company closely.
“We are certainly not surprised by the move, which places Google News in a position to compete with news publishers — giving us cause for concern,” said Alisa Bowen, a senior vice president for Reuters.
Exploring a ‘Deep Web’ That Google Can’t Grasp
One day last summer, Google’s search engine trundled quietly past a milestone. It added the one trillionth address to the list of Web pages it knows about. But as impossibly big as that number may seem, it represents only a fraction of the entire Web.
Beyond those trillion pages lies an even vaster Web of hidden data: financial information, shopping catalogs, flight schedules, medical research and all kinds of other material stored in databases that remain largely invisible to search engines.
The challenges that the major search engines face in penetrating this so-called Deep Web go a long way toward explaining why they still can’t provide satisfying answers to questions like “What’s the best fare from New York to London next Thursday?” The answers are readily available — if only the search engines knew how to find them.
Now a new breed of technologies is taking shape that will extend the reach of search engines into the Web’s hidden corners. When that happens, it will do more than just improve the quality of search results — it may ultimately reshape the way many companies do business online.
Search engines rely on programs known as crawlers (or spiders) that gather information by following the trails of hyperlinks that tie the Web together. While that approach works well for the pages that make up the surface Web, these programs have a harder time penetrating databases that are set up to respond to typed queries.
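The crawling approach described above, and its blind spot, can be shown with a minimal sketch run over a hypothetical in-memory "Web". The page names and link graph are invented: pages reachable by hyperlinks get indexed, while a database page that only answers typed queries never appears — which is exactly the Deep Web problem.

```python
# Minimal hyperlink-following crawler over an invented link graph.
from collections import deque

pages = {  # url -> list of outgoing links (hypothetical data)
    "home": ["news", "about"],
    "news": ["about"],
    "about": [],
    # "flight-db" exists but nothing links to it: invisible to the crawler
    "flight-db": [],
}

def crawl(seed):
    """Breadth-first traversal of the link graph from a seed page."""
    seen, queue = set(), deque([seed])
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        queue.extend(pages.get(url, []))
    return seen

assert crawl("home") == {"home", "news", "about"}   # "flight-db" is missed
```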
“The crawlable Web is the tip of the iceberg,” says Anand Rajaraman, co-founder of Kosmix (www.kosmix.com), a Deep Web search start-up whose investors include Jeffrey P. Bezos, chief executive of Amazon.com. Kosmix has developed software that matches searches with the databases most likely to yield relevant information, then returns an overview of the topic drawn from multiple sources.
“Most search engines try to help you find a needle in a haystack,” Mr. Rajaraman said, “but what we’re trying to do is help you explore the haystack.”
That haystack is infinitely large. With millions of databases connected to the Web, and endless possible permutations of search terms, there is simply no way for any search engine — no matter how powerful — to sift through every possible combination of data on the fly.
To extract meaningful data from the Deep Web, search engines have to analyze users’ search terms and figure out how to broker those queries to particular databases. For example, if a user types in “Rembrandt,” the search engine needs to know which databases are most likely to contain information about art (say, museum catalogs or auction houses), and what kinds of queries those databases will accept.
That approach may sound straightforward in theory, but in practice the vast variety of database structures and possible search terms poses a thorny computational challenge.
“This is the most interesting data integration problem imaginable,” says Alon Halevy, a former computer science professor at the University of Washington who is now leading a team at Google that is trying to solve the Deep Web conundrum.
Google’s Deep Web search strategy involves sending out a program to analyze the contents of every database it encounters. For example, if the search engine finds a page with a form related to fine art, it starts guessing likely search terms — “Rembrandt,” “Picasso,” “Vermeer” and so on — until one of those terms returns a match. The search engine then analyzes the results and develops a predictive model of what the database contains.
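The guessing strategy described above can be sketched as follows. The `query_form` function and the catalog contents are stand-ins invented for illustration; the real system submits terms to a site's actual search form:

```python
# Sketch of form probing: guess likely search terms, note which ones
# return matches, and build a rough model of what the database holds.

catalog = {"Rembrandt": 12, "Vermeer": 3}   # hypothetical holdings

def query_form(term):
    """Stand-in for submitting a term to a site's search form."""
    return catalog.get(term, 0)

def probe(candidate_terms):
    model = {}
    for term in candidate_terms:
        hits = query_form(term)
        if hits:   # a match tells us the database covers this topic
            model[term] = hits
    return model

assert probe(["Rembrandt", "Picasso", "Vermeer"]) == {"Rembrandt": 12, "Vermeer": 3}
```

The resulting model — which terms match, and how often — is what lets the engine predict whether a given database is worth querying for a future search.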
In a similar vein, Prof. Juliana Freire at the University of Utah is working on an ambitious project called DeepPeep (www.deeppeep.org) that eventually aims to crawl and index every database on the public Web. Extracting the contents of so many far-flung data sets requires a sophisticated kind of computational guessing game.
“The naïve way would be to query all the words in the dictionary,” Ms. Freire said. Instead, DeepPeep starts by posing a small number of sample queries, “so we can then use that to build up our understanding of the databases and choose which words to search.”
Based on that analysis, the program then fires off automated search terms in an effort to dislodge as much data as possible. Ms. Freire claims that her approach retrieves better than 90 percent of the content stored in any given database. Ms. Freire’s work has recently attracted overtures from one of the major search engine companies.
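The sampling idea Ms. Freire describes — seed with a few queries, then mine the returned records for the next terms to try — can be sketched like this. The tiny record store, the whitespace tokenizer, and the two-round cutoff are all invented for illustration:

```python
# Sketch of query bootstrapping: rather than querying every word in
# the dictionary, issue a few seed queries and harvest new candidate
# terms from the records that come back.

records = [
    "cheap flights london paris",
    "paris hotel deals",
    "london museum hours",
]

def search(term):
    """Stand-in for querying the database's search form."""
    return [r for r in records if term in r.split()]

def harvest(seed_terms, rounds=2):
    known, frontier = set(), set(seed_terms)
    retrieved = set()
    for _ in range(rounds):
        next_frontier = set()
        for term in frontier:
            known.add(term)
            for rec in search(term):
                retrieved.add(rec)
                # words in retrieved records become the next queries
                next_frontier.update(w for w in rec.split() if w not in known)
        frontier = next_frontier
    return retrieved

# a single seed word reaches all three records via terms mined along the way
assert harvest(["flights"]) == set(records)
```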
As the major search engines start to experiment with incorporating Deep Web content into their search results, they must figure out how to present different kinds of data without overcomplicating their pages. This poses a particular quandary for Google, which has long resisted the temptation to make significant changes to its tried-and-true search results format.
“Google faces a real challenge,” said Chris Sherman, executive editor of the Web site Search Engine Land. “They want to make the experience better, but they have to be supercautious with making changes for fear of alienating their users.”
Beyond the realm of consumer searches, Deep Web technologies may eventually let businesses use data in new ways. For example, a health site could cross-reference data from pharmaceutical companies with the latest findings from medical researchers, or a local news site could extend its coverage by letting users tap into public records stored in government databases.
This level of data integration could eventually point the way toward something like the Semantic Web, the much-promoted — but so far unrealized — vision of a Web of interconnected data. Deep Web technologies hold the promise of achieving similar benefits at a much lower cost, by automating the process of analyzing database structures and cross-referencing the results.
“The huge thing is the ability to connect disparate data sources,” said Mike Bergman, a computer scientist and consultant who is credited with coining the term Deep Web. Mr. Bergman said the long-term impact of Deep Web search had more to do with transforming business than with satisfying the whims of Web surfers.
Skype Calls' Immunity to Police Phone Tapping Threatened
Suspicious phone conversations on Skype could be targeted for tapping as part of a pan-European crackdown on what law authorities believe is a massive technical loophole in current wiretapping laws, allowing criminals to communicate without fear of being overheard by the police.
The European investigation could also help U.S. law enforcement authorities gain access to Internet calls. The National Security Agency (NSA) is understood to believe that suspected terrorists use Skype to circumvent detection.
While the police can get a court order to tap a suspect's land line and mobile phone, it is currently impossible to get a similar order for Internet calls on both sides of the Atlantic.
Skype insisted that it does cooperate with law enforcement authorities, "where legally and technically possible," the company said in a statement.
"Skype has extensively debriefed Eurojust on our law enforcement program and capabilities," Skype said.
Eurojust, a European Union agency responsible for coordinating judicial investigations across different jurisdictions, announced Friday the opening of an investigation involving all 27 countries of the European Union.
"We will bring investigators from all 27 member states together to find a common approach to this problem," said Joannes Thuy, a spokesman for Eurojust based in The Hague in the Netherlands.
The purpose of Eurojust's coordination role is to overcome "the technical and judicial obstacles to the interception of Internet telephony systems", Eurojust said.
The main judicial obstacles are the differing approaches to data protection in the various E.U. member states, Thuy said.
The investigation is being headed by Eurojust's Italian representative, Carmen Manfredda.
Criminals in Italy are increasingly making phone calls over the Internet in order to avoid getting caught through mobile phone intercepts, according to Direzione Nazionale Antimafia, the anti-Mafia office in Rome.
Police officers in Milan say organized crime, arms and drugs traffickers, and prostitution rings are turning to Skype and other systems of VOIP (voice over Internet Protocol) telephony in order to frustrate investigators.
While telecommunications companies are obliged to comply with court orders to monitor calls on land lines and mobile phones, "Skype refuses to cooperate with the authorities," Thuy said.
In addition to the issue of cooperation, there are technical obstacles to tapping Skype calls. The way calls are set up and carried between computers is proprietary, and the encryption system used is strong. It could be possible to monitor the call on the originating or receiving computer using a specially written program, or perhaps to divert the traffic through a proxy server, but these are all far more difficult than tapping a normal phone. Calls between a PC and a regular telephone via the SkypeIn or SkypeOut service, however, could fall under existing wiretapping regulations and capabilities at the point where they meet the public telephone network.
The pan-European response to the problem may open the door for the U.S. to take similar action, Thuy said.
"We have very good cooperation with the U.S.," he said, pointing out that a U.S. prosecutor, Marylee Warren, is based in The Hague in order to liaise between U.S. and European judicial authorities.
The NSA (National Security Agency) is so concerned by Skype that it is offering hackers large sums of money to break its encryption, according to unsourced reports in the U.S.
Italian investigators have become increasingly reliant on wiretaps, Eurojust said, giving a recent example of customs and tax police in Milan, who overheard a suspected cocaine trafficker telling an accomplice to switch to Skype in order to get details of a 2kg drug consignment.
"Investigators are convinced that the interception of telephone calls has become an essential tool of the police, who spend millions of euros each year tracking down crime through wiretaps of land lines and mobile phones," Eurojust said.
The first meeting of Eurojust's 27 national representatives is planned in the coming weeks but precise details of its timing and the location of the meeting remain secret, Thuy said.
"They will exchange information and then we will give advice on how to proceed," he said. Bringing Internet telephony into line with calls on land lines and mobile phones "could be the price we have to pay for our security," he said.
Queensland Cops May Tap Facebook, Email
Michael Wray and Tanya Chilcott
FEARS have been raised about police abusing new phone-tapping powers to snoop on social networking sites such as Facebook and private emails.
Leading criminal lawyer Jim Coburn accused the State Government of "fudging" the scope of the telephone intercept powers, which start this year.
"(Police) will have authority to eavesdrop on communications, be they speech, music or sounds, data, text, images or signals," Mr Coburn said.
"Anything and everything will be part of the phone tap power laws."
A Justice Department spokesman said police had the power to access Facebook and email documents once they'd been received by the recipient. However the new powers would give police "real time access".
"Any communication passing across the telecommunications system can be lawfully intercepted under powers provided in a telecommunications intercept warrant," he said.
Mr Coburn also labelled the program's independent watchdog -- the Public Interest Monitor -- a "toothless tiger" that would be unable to reject phone tapping applications.
"Tight controls and restraints need to be imposed on these powers but looking at the legislation, there's nothing in there to stop the police from using these powers to launch fishing expeditions against citizens," he said.
The PIM has lobbied the Government for more resources to cope with an expected glut of applications once phone-tapping powers are in place.
Queensland Council for Civil Liberties vice-president Terry O'Gorman dismissed concerns about the PIM, saying the final decision on phone taps would rest with a Supreme Court judge.
"If (Mr Coburn) is saying that the PIM accepts or rejects phone-tap applications, that is wrong," Mr O'Gorman said.
Premier Anna Bligh held off introducing the powers, which will be based on a federal model adopted in all other states, until the PIM oversight was included.
Mr Coburn, who works for Ryan and Bosscher Lawyers, also predicted lengthy court battles when police attempted to use evidence gained by phone taps in court.
"The police say having these powers will make it easier to catch criminals but phone tap recording and intercepted emails and texts can be manipulated so any such evidence is going to be challenged in court," he said.
Three Ways Twitter Security Fails
The popular micro-blogging platform Twitter continues its explosive growth. Twitter experienced a 900 percent increase in active users in the last year, according to a recent blog post from Biz Stone, the company's co-founder. People are increasingly using it to get breaking news updates, to collaborate with colleagues remotely, and to connect with friends on an up-to-the-minute basis. Some businesses are using it as a new promotion and marketing tool.
Despite its popularity, Twitter still has a lot to do when it comes to securing the platform. (See "3 Ways a Twitter Hack Can Hurt You.") Two security experts weighed in about three areas where Twitter poses some significant risks.
Shortened urls that mask malicious links.
Twitter "Tweets" have a limit of 140 characters. Many users post urls that are too long for that limit, so they are automatically shortened by a service such as TinyURL. Users can't tell where a shortened link leads when they hover over it.
This makes it much easier for hackers to send out faulty or malicious links, according to Mike Murray, CISO at Foreground Security, a Florida-based security consultancy.
"With these new mediums, we've gone back to 1997 in terms of the way we act," said Murray. "When email first came out, everyone sent out forwards and all of this other stuff and everyone opened it. And we've spent the last ten years convincing people bad things can come from opening emails you don't trust. We are inoculated against that in email. We are not inoculated against that in Twitter and Facebook. We trust the people we talk to and that talk to us."
"We've been saying to people for ages: 'Be careful which links you click on and make sure it really is who it claims to be,'" said Graham Cluley, a senior technology consultant with UK-based security firm Sophos. "If you are clicking on something that is a tiny url, you don't know where you are going to end up. It is harder to check and reassure yourself about where you are really going."
Both experts agree it is important to educate Twitter users about the potential for malicious links (see "Social Networking Dangers Exposed"): instill in them the importance of verifying that the Twitterers in their network are legitimate and of considering the source before clicking on any urls.
"By now we have figured out the hygiene around email," said Murray. "If we can help users figure out that same hygiene around social networking, I think we will all be better off."
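The "check before you click" advice above can be automated. Below is a minimal sketch in Python's standard library; the list of shortener domains is illustrative rather than exhaustive, and a real client would add error handling and its own policy for unknown hosts:

```python
from urllib.parse import urlparse
import urllib.request

# Illustrative, not exhaustive: hosts of popular URL-shortening services.
KNOWN_SHORTENERS = {"tinyurl.com", "bit.ly", "is.gd", "ow.ly"}

def is_shortened(url: str) -> bool:
    """Return True if the URL's host is a known shortening service."""
    return urlparse(url).netloc.lower() in KNOWN_SHORTENERS

def expand_url(url: str, timeout: float = 10.0) -> str:
    """Follow redirects (via a HEAD request, without downloading the body)
    and return the final destination so it can be inspected before clicking."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.geturl()
```

A Twitter client could run every incoming link through `is_shortened` and display the expanded destination before opening it, restoring the visibility that shortening takes away.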
Making it too easy to "follow" users.
Many Twitter users will often follow anyone that follows them without question or concern, according to Cluley.
"We are seeing spammers create accounts and then follow thousands of people with these bogus accounts," said Cluley. "There are many people on Twitter who automatically follow back anyone who follows them without considering who on Earth this person is and whether they are a genuine account or not."
The problem is that this makes it possible for spammers to get credibility on the Twitter network. While it is possible to set up your account so that you approve all followers, not enough people actually do that. The more users a spammer gets in its network, the wider its reach and its potential for abuse.
Murray thinks the lack of control over following also brings up privacy concerns.
"That is both the power of Twitter and its biggest threat," said Murray. "Anyone can follow you and anyone can see what you are saying. And you don't know who anyone can be. It can be bad guys. It can be your competitors. By having them follow you, you have opened up that trusted medium to everyone. It is like having a phone conversation where you don't know who is listening in."
Murray advises users to treat their Twitter updates like a public conversation.
"Too many people treat it like they are having a private conversation," he said. "Treat everything you say as if you are posting it on your corporate or personal web site, because it is. It can be seen by anybody."
Lack of e-mail authentication.
When a new user sets up a Twitter account, that person is not required to prove their e-mail address is a legitimate address, which is a big problem, said Cluley.
"With lots of online web accounts, they will e-mail you to confirm registration. With Twitter, you don't have to do that," said Cluley. "You could put in email@example.com and it will never check. So it is very easy to create fake accounts."
That makes it even easier for spammers to create networks on Twitter, he said.
However, Murray thinks e-mail verification is one of the smaller problems with Twitter security.
"Yes, it allows impersonation and for people to set up fraudulent accounts. But people who are going to do those things are going to find ways to do it anyway. I can set up a fake Gmail account in five minutes and accomplish the same thing."
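The confirmation step Cluley describes is straightforward to build. One common approach is to email the registrant a signed token and activate the account only when it is presented back; the sketch below uses an HMAC for this, with the secret and function names chosen purely for illustration:

```python
import hashlib
import hmac

# Assumption for illustration: a secret kept server-side, never sent to users.
SERVER_SECRET = b"replace-with-a-long-random-secret"

def make_confirmation_token(email: str) -> str:
    """Token embedded in the confirmation link emailed to the address."""
    return hmac.new(SERVER_SECRET, email.encode("utf-8"), hashlib.sha256).hexdigest()

def confirm(email: str, token: str) -> bool:
    """Activate the account only if the token matches, proving the
    registrant can actually read mail sent to that address."""
    return hmac.compare_digest(make_confirmation_token(email), token)
```

Because only someone who receives the message learns the token, throwaway registrations with addresses like email@example.com would simply never activate.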
MySpace Founders Roll Out Red Carpet for DailyFill
Celebrity gossip, it seems, will never go out of style. At least, that’s what Josh Berman and Colin Digiaro, co-founders of MySpace, are hoping with their latest venture.
On Tuesday, Mr. Berman and Mr. Digiaro officially unveiled DailyFill, a gossip Web site along the lines of PerezHilton and Defamer, Gawker Media’s Hollywood-centric blog, to the public.
The site is the firstborn product of Slingshot Labs, a Web venture and start-up incubator financed by News Corporation, the media empire controlled by Rupert Murdoch. Mr. Berman and Mr. Digiaro are co-presidents of the company, which was started last year and seeded with a reported $15 million to spawn additions to News Corp.’s growing hub of Web properties.
Mr. Digiaro says DailyFill will stand out from its peers by offering quick takes on the most current and entertaining celebrity news.
“Content interactions are changing,” he said. “Audiences have a much shorter attention span and people are looking for quicker bits of information rather than long articles.”
To that end, DailyFill pulls news highlights from partner gossip sites that include the New York Post, US Weekly and Gossip Girls and parses them into bite-sized stories with a humorous, quick-witted tone. For example, recent features included a top 10 list of Oscar Internet video parodies and a short article entitled "Mickey Rourke Misses His Dog And Also His Face."
To shape DailyFill’s editorial tone, which Mr. Digiaro described as “quick doses of comedic summaries of celebrity content,” the duo recruited Chris Case, an Emmy-nominated writer with projects such as “Politically Incorrect with Bill Maher,” “Mad About You” and “Spin City” under his belt.
So far, the site, which has been running quietly since November, seems to be gaining traction: DailyFill draws three million monthly unique visitors, according to the online measurement service Quantcast. That’s roughly half the traffic of PerezHilton, which the company expects to surpass in readership next month.
Mr. Berman and Mr. Digiaro, who are aiming to roll out between three and five ventures each year, said they were undeterred by a recession-battered economy.
“We’re being scrappy and entrepreneurial to get profitable quickly,” said Mr. Berman of the site, which runs on an advertising-based revenue model. “Even before the economy went sideways and down, we’ve been focused on being profitable.”
“MySpace went profitable in two months,” he added. “We’re not of the school where you amass a large audience and then profit from it.”
Facebook to Create 'Bill of Rights'
Scott Duke Harris
Facebook, angling to turn a recent user rebellion to its own advantage, called upon the users themselves Thursday to help formulate what has been portrayed as a kind of "bill of rights" to govern the social-networking giant.
The proposed "Facebook Principles" cover such topics as the "freedom to share and connect," "fundamental equality" and "ownership and control of information." Facebook users, now numbering 175 million around the world, are being invited to review, comment on and ultimately vote on the proposals in "a virtual town hall" over the next 30 days.
"This is really a move we're making because we trust our users," Facebook founder and Chief Executive Mark Zuckerberg said. "If we have a good, open dialogue, we feel this will strengthen the community and strengthen trust and loyalty."
Facebook devotees quickly responded to the invitation for commentary, and privacy advocates applauded Facebook's move.
"We think it's good news," said Marc Rotenberg, executive director of the Electronic Privacy Information Center. Facebook backtracked on a revision to its terms of service last week after a group of users protested the changes and EPIC prepared a complaint it intended to file with the Federal Trade Commission.
It was especially important, Rotenberg said, that Facebook's draft principles declared "people should own their own information."
"That's really the heart of it," Rotenberg said. "I don't think privacy issues are going to be easily solved, but I think it is important for Facebook to say Facebook users own and control their information."
Facebook's action, Rotenberg said, could have a broad impact on business practice on the Web: "It's the most active online community in the world, and what Facebook does has a very big impact on lots and lots of services."
Julius Harper, a Los Angeles video game producer who helped organize a Facebook group called "People Against the New Terms of Service," said the company went beyond the reassurance protesters were seeking. The group grew to 136,000 users.
Harper said he was impressed that Facebook said it would also seek a 30-day discussion period for users regarding possible future amendments to the terms.
"I don't think Facebook ever had an evil or nefarious intent" in making the controversial revisions, Harper said. "How Facebook has responded has shown to me that they are very egalitarian and idealistic. The last thing they want to do is alienate their users."
Facebook makes money through advertising and sale of digital gifts, but it is exploring new revenue sources.
Suspicions about Facebook's intentions arose in recent weeks after it published a revision in its terms of service. The intent, Facebook executives said, was to streamline the document and minimize legalese. But "mistakes" made by Facebook led to "confusion," Zuckerberg said.
In an interview with the Mercury News, Zuckerberg suggested that the controversy accelerated an initiative already brewing within the Palo Alto-based company. "What we announced today isn't really in response to last week. It's something we've actually discussed for a while," Zuckerberg said.
Without the controversy, "we probably would have phased it in over time," he added. "This is a pretty unique opportunity where people care deeply about the governance of the site. Now is a perfect opportunity to roll something like this out and get a real dialogue about the issues."
Promoting "openness and transparency" is at the core of Facebook's mission, Zuckerberg said. "Openness and transparency, instead of just being an end state, has to be a process in how we get there."
Most Popular Sharing Sites
In the past week ShareThis released data showing its most popular sharing services during January 2009, summarized below from the chart provided by ShareThis. The results are likely an accurate reflection of the various sharing services' popularity on the web as a whole, as ShareThis is extremely popular:
• Email is by far the most popular sharing service, at 57%.
• Facebook is the second most popular, at 21%.
• Digg has dropped significantly in popularity, to 2%.
AddThis used to provide similar data but unfortunately has ceased to do so. Thanks to ShareThis for sharing the data!
Privacy preserving peer-to-peer data sharing
Although widely used, currently popular peer-to-peer (P2P) applications are limited by a lack of user privacy. By design, services like BitTorrent and Gnutella share data with anyone that asks for it, allowing a third-party to systematically monitor user behavior. As a result, P2P networks can only be safely used by those comfortable with wholly public knowledge of their activity.
OneSwarm is a new P2P data sharing application we’re building to provide users with explicit control over their privacy by enabling fine-grained control over how data is shared. Instead of sharing data indiscriminately, data shared with OneSwarm can be made public, it can be shared with friends, shared with some friends but not others, and so forth. We call this friend-to-friend (F2F) data sharing. OneSwarm is:
• Privacy preserving: OneSwarm uses source address rewriting to protect user privacy. Instead of always transmitting data directly from sender to receiver (immediately identifying both), OneSwarm may forward data through multiple intermediaries, obscuring the identity of both sender and receiver. For more details, check out the OneSwarm overview screencast or our papers.
• Usable: OneSwarm’s interface is web-based and supports real-time transcoding of many audio and video formats for in-browser playback, eliminating the need for casual users to master a new application’s interface or search for custom media codecs.
• Open: OneSwarm is freely available and built on existing standards. OneSwarm can operate as a fully backwards compatible BitTorrent client, and its friend-to-friend data sharing features are built on
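The source-address-rewriting idea described above can be illustrated with a toy simulation. To be clear, this is not OneSwarm's code: the class and method names are invented, and a real implementation adds congestion handling, cover traffic and persistent identities. The point it demonstrates is that each hop in a friend-to-friend search sees only its immediate neighbor, never the original requester:

```python
class Node:
    """Toy friend-to-friend node: searches travel only along friend links,
    and each hop learns only the identity of the previous hop."""

    def __init__(self, name):
        self.name = name
        self.friends = []
        self.files = set()

    def befriend(self, other):
        self.friends.append(other)
        other.friends.append(self)

    def search(self, filename, prev_hop=None, ttl=3):
        # prev_hop is the only identity this node sees; the original
        # requester stays hidden behind the chain of intermediaries.
        if filename in self.files:
            return self  # in a real system, data flows back hop-by-hop
        if ttl == 0:
            return None
        for friend in self.friends:
            if friend is not prev_hop:
                hit = friend.search(filename, prev_hop=self, ttl=ttl - 1)
                if hit is not None:
                    return hit
        return None
```

Because replies are relayed back along the same chain, an outside observer who requests a file learns only the address of the last intermediary, not the node that actually holds the data.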
Tracking the Trackers
Investigating P2P copyright enforcement
As people increasingly rely on the Internet to deliver downloadable music, movies, and television, content producers are faced with the problem of increasing Internet piracy. To protect their content, copyright holders police the Internet, searching for unauthorized distribution of their work on websites like YouTube or peer-to-peer networks such as BitTorrent. When infringement is (allegedly) discovered, formal complaints are issued to network operators that may result in websites being taken down or home Internet connections being disabled.
Although the implications of being accused of copyright infringement are significant, very little is known about the methods used by enforcement agencies to detect it, particularly in P2P networks. We have conducted the first scientific, experimental study of monitoring and copyright enforcement on P2P networks and have made several discoveries which we find surprising.
• Practically any Internet user can be framed for copyright infringement today.
By profiling copyright enforcement in the popular BitTorrent file sharing system, we were able to generate hundreds of real DMCA takedown notices for computers at the University of Washington that never downloaded nor shared any content whatsoever.
Further, we were able to remotely generate complaints for nonsense devices including several printers and a (non-NAT) wireless access point. Our results demonstrate several simple techniques that a malicious user could use to frame arbitrary network endpoints.
• Even without being explicitly framed, innocent users may still receive complaints. Because of the inconclusive techniques used to identify infringing BitTorrent users, users may receive DMCA complaints even if they have not been explicitly framed by a malicious user and even if they have never used P2P software!
• Software packages designed to preserve the privacy of P2P users are not completely effective.
To avoid DMCA complaints today, many privacy conscious users employ IP blacklisting software designed to avoid communication with monitoring and enforcement agencies. We find that this software often fails to identify many likely monitoring agents, but we also discover that these agents exhibit characteristics that make distinguishing them straightforward.
While our experiments focus on BitTorrent only, our findings imply the need for increased transparency in the monitoring and enforcement process for all P2P networks to both address the known deficiencies we have exposed as well as to identify lurking unknown deficiencies.
More details about our findings and our experimental methodology are available in our online FAQ. A more thorough treatment is available in our paper.
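One concrete mechanism consistent with the framing result above (the details here are illustrative, not necessarily the authors' exact method) is that the BitTorrent HTTP tracker protocol includes an optional `ip` field in the announce request, and trackers have historically not verified it against the requester's real address. A client can therefore register an arbitrary address as a participating peer:

```python
from urllib.parse import urlencode

def build_announce_url(tracker, info_hash, peer_id, port, reported_ip=None):
    """Construct a BitTorrent HTTP tracker announce URL. If reported_ip is
    given, it is sent in the optional 'ip' field, which trackers have
    historically accepted without verification."""
    params = {
        "info_hash": info_hash,   # in the real protocol: the raw 20-byte SHA-1
        "peer_id": peer_id,
        "port": port,
        "uploaded": 0,
        "downloaded": 0,
        "left": 0,
    }
    if reported_ip is not None:
        params["ip"] = reported_ip  # the framed address, e.g. a printer's IP
    return tracker + "?" + urlencode(params)
```

A monitoring agent that merely harvests the tracker's peer list, rather than exchanging data with each listed peer, would then record the framed address as an infringer, which is consistent with the study's printers and wireless access point receiving takedown notices.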
Row Over Web Blacklist
MAJOR inconsistencies have emerged in the way a top-secret blacklist of web pages is managed by the Australian Communications and Media Authority.
The content of the list of illegal, prohibited and potential prohibited web pages is meant to be strictly confidential. It is the backbone of the federal Government's internet censorship plan.
The list is of critical importance as it is being used as a basis for internet filtering trials, which involve internet service providers blocking web pages.
ACMA is exempt from the Freedom of Information Act 1982, and disclosure of information on the blacklist could jeopardise efforts to block access to harmful and offensive online material.
"ACMA would not disclose information if doing so would contradict the Freedom of Information Act," an ACMA spokesman said.
Recent actions by ACMA, however, have called into question its methods of administering the list.
On January 5, an internet user in Melbourne, known online as Foad, lodged a complaint with ACMA about content on an anti-abortion web page, not the entire website. The man did not want his real name published for fear of reprisals. He said his motive was to test the system and show that web pages not showing material connected with sexual abuse of children could end up on the blacklist.
About two weeks later, he received a reply from ACMA informing him it was "satisfied that the internet content is hosted outside Australia, and the content is prohibited or potential prohibited content".
There was no warning from ACMA not to publicise the anti-abortion web page. "I never received any indication from ACMA not to publish it," he said.
ACMA's response was almost immediately published on the internet on various blogs and forums, including the address of the prohibited web page.
An ACMA spokesman confirmed the contents of its letter and said the complainant was not barred from publicising the banned web page. "There is no prohibition on complainants publishing the outcome of their complaints, as has happened in this case," the spokesman said.
The content of the blacklist is meant to be a closely guarded secret, but ACMA did not act while the web address of the anti-abortion page made its way into the public domain.
ACMA, however, insists the system is not broken, as such incidents are isolated.
"ACMA has taken similar action on about 7000 web pages since January 1, 2000, and as far as we are aware complainants generally don't publish the details.
"The fact that a very small number may do so does not undermine the rationale for keeping the list confidential - including several hundred URLs relating to child sexual abuse material," the spokesman said.
Opposition communications spokesman Nick Minchin said the loophole was a "major flaw" and said he would seek an explanation from ACMA on how the address of the banned web page was allowed to be made public.
"One of our concerns about the proposed mandatory ISP filtering scheme is that it would involve the distribution of the ACMA blacklist to some 700-plus ISPs, and the security of the list is paramount," Senator Minchin said.
"ACMA has a good track record, but this highlights that through the exigency of law and procedure, inadvertently, some of the contents on the list can be made public.
"We will pursue this matter with ACMA. There's obviously a major flaw in the current arrangement," Senator Minchin said.
The blacklist is regularly updated to remove inactive web pages and add new ones. Once a complaint is received, ACMA will assess the case in line with the National Classification Code and the Classification Board Guidelines for Classification of Films and Computer Games.
As of January 31, the blacklist contained 1090 web pages, according to ACMA.
The agency couldn't provide a breakdown of the list, but the spokesman said the proportions were likely to be similar to those of November 30, which involved 1370 web pages. These include 864 URLs that had been or would be refused classification (RC), 674 links relating to depictions of a child younger than 18, and 441 pages classified X18+.
As recently as last week, a spokesman for Communications Minister Stephen Conroy said the blacklist contained mainly URLs with child sexual abuse images.
Unlike the ACMA blacklist, the classification database is public.
People can search the classification database online to find the classification of a film, computer game or publication.
If a name or title is not known, users can search according to dates, and the system provides all the titles classified - including adult and RC material - for that time. The results include the date of classification, author, publisher and country of origin.
ACMA says the content of the blacklist can't be as transparent as the classification database, as it would provide almost immediate access to the banned items.
"The rationale for this measure is that in the offline environment, if a book or film is refused classification or assigned a restricted classification, its distribution is effectively banned or restricted," the ACMA spokesman said.
"Publication of the title of the book or film has little effect on its availability. Publishing the title or internet address of online material, on the other hand, could allow a person to locate, view and download the material, which would be contrary to the regulatory objectives of the Broadcasting Services Act."
Senator Conroy said ACMA staffers were qualified to decide what went on the blacklist.
He has long argued that internet filtering was one of many ways to protect children online, and the ACMA list was selected for use in live filtering trials.
Asked if the ACMA blacklist would still be used if the Government's current internet filtering plan became law, a spokesman for Senator Conroy merely said: "We're using the blacklist for the purposes of ISP trials."
The censorship regime has made the federal Opposition, Greens, civil libertarians and internet users increasingly suspicious of the Government's motives. They said the Government could use the blacklist to block any website it saw fit, as there was no judicial oversight.
Meanwhile, Senator Conroy announced yesterday the appointment of Edith Cowan University to help shape its cyber-safety policies.
District Court Overturns Magistrate Judge in Fifth Amendment Encryption Case
Back in late 2007, I blogged a lot about a magistrate judge ruling in In re Boucher, a case involving how the Fifth Amendment right against self-incrimination applies to access to encryption keys. I argued back then that the magistrate's decision was wrong on narrow grounds: Although the Fifth Amendment normally blocked the subpoena of encryption keys, in this particular case the facts divulged by compliance with the subpoena were already known to the government and therefore not privileged under the "foregone conclusion" doctrine.
Although the 2007 ruling garnered a great deal of press attention (including articles in the Washington Post and the New York Times), it was only the ruling of a magistrate judge rather than an Article III District Judge. The government sought review of the case with an Article III District Judge (more or less an "appeal" from the ruling of the magistrate judge), and we have been waiting for a ruling from the District Court for about a year.
A few days ago, District Judge William K. Sessions III finally handed down a ruling. I have posted the opinion here: In Re Grand Jury Subpoena to Sebastien Boucher. Judge Sessions's take was basically the same as mine in my 2007 post: He ruled that under the specific facts of this case, Boucher must decrypt the hard drive and produce to the government an unencrypted version of the drive. (Notably, the subpoena orders Boucher to produce to the government an unencrypted version of his hard drive, not to actually give the government his key.) There was no Fifth Amendment privilege because the government already knew the testimonial things that compliance with the subpoena would help show, making that a "foregone conclusion." From the opinion:
Publishers of Sex Photos Need to Keep Records of Age, Federal Court Rules
A federal appeals court on Friday upheld a ruling that publishers of sexually explicit photographs must keep age and identity records of those pictured and make the records available for inspection by the government.
The ruling stems from a case originally filed in U.S. District Court in Cleveland 14 years ago by a local company that publishes magazines aimed at "swingers" - adults who seek multiple sex partners.
Connection Distributing, headquartered on Kelley Avenue, argued that having to keep such records suppresses the free expression of the company and its subscribers, who place sexually explicit advertisements for themselves in the publications.
Congress required such record-keeping as part of the Child Protection and Obscenity Act of 1988 to reduce the chance of underage models being used in publications.
Since then, the matter has bounced back and forth between the courts. The 6th U.S. Circuit Court of Appeals voted 11-6 to uphold a U.S. District Court ruling that the law is appropriate.
Connection Distributing is not giving up. The company will appeal the decision to the U.S. Supreme Court, said Michael Murray, an attorney for Connection Distributing.
Murray said the record-keeping statute is too broad and that the vast majority of the people who advertise with Connections are 30 years or older and cannot possibly be mistaken for minors.
But the court's majority opinion argued that an appearance- based standard was not a sufficient safeguard, in part because pictures may depict only body parts, making the person's age difficult to discern.
The record-keeping statute applies to both print and Internet publications.
Online Child Abuse Images Warning
Many UK households can access child abuse image sites, say charities
Children's charities have expressed "serious concerns" that many UK households still have access to images showing child sex abuse via their computers.
The government had asked all internet service providers (ISPs) to block illegal websites by the end of 2007.
But firms providing 5% of broadband connections have still failed to act.
One of them, Zen Internet, said in a statement: "We have not yet implemented the IWF's recommended system because we have concerns over its effectiveness."
But the NSPCC's Zoe Hilton said: "Allowing this loophole helps feed the appalling trade in images featuring real children being seriously sexually assaulted."
The blocked websites come from a list supplied by the Internet Watch Foundation (IWF), but some smaller providers refuse to use the list.
The Children's Charities Coalition on Internet Safety (CCCIS) says self-regulation is not working and it is calling for firmer action by the government.
"We now need decisive action from the government to ensure the ISPs that are still refusing to block this foul material are forced to fall into line.
"Self-regulation on this issue is obviously failing - and in a seriously damaging way for children."
Home Office Minister Alan Campbell said: "In 2006 the government stated that they wished to see 100% of consumer broadband connections covered by blocking, which includes images of child abuse, by the end of 2007.
"Currently in the UK, 95% of consumer broadband connections are covered by blocking. The government is currently looking at ways to progress the final 5%."
No 'Call of Duty' Without 'Rules of War'
Dad tells son he must read the Geneva Conventions — and abide by them
Evan Spencer wanted to play “Call of Duty: World at War.” So he asked his dad.
Hugh Spencer wasn’t initially thrilled about the idea of his son playing the World War II-based game. “I’ve never really enjoyed first-person shooter games,” he confesses. “They’re just not my favorite aesthetic.”
But the elder Spencer agreed to his son’s request, on one condition: Evan would have to read all four treaties from the Geneva Conventions first. And then, agree to play by those rules.
This story was posted on BoingBoing earlier this week, and it’s been picked up by the game blogs. Most commenters applaud Spencer’s novel approach. But some get downright juvenile and nasty, criticizing dad, questioning whether you can even play “Call of Duty” and abide by the Geneva Conventions — and writing mean things about his son. You know — typical Internet venom. Aimed at a 13-year-old kid.
Spencer the elder is taken aback by all the flaming. He certainly doesn’t expect his son to consult the written rules while he’s playing (he might get fragged, after all). But dad wanted his son to “be aware that there are things called rules of war.”
Spencer and his wife, Helen, have two sons: Evan and Simon, 15. Both like to game, although Simon is more of a “Final Fantasy” type. The boys have “just about every console out there,” says dad, but the games in their Toronto home are mostly E-rated, with very few teen or mature-rated titles. The “Call of Duty” conversation was the first time that “the rubber had hit the road,” says Spencer.
World War II isn’t just pixels on a screen for dad, who works as interpretive designer for museums and other public exhibitions. His uncle was a U.S. Marine in the Pacific Theater during World War II. And his grandfather fought in both world wars. Many people, he says, have been in subsequent wars, and “people are in war, right now. And it’s not a game. It’s really not a game.”
When Evan realized dad was serious about the “Call of Duty” contingency, he read up on the Geneva Conventions. After talking it over with dad, Evan eventually got his prized game.
Spencer agreed to speak with me about why he and his wife decided to institute some “rules of war” in their home; following is an edited transcript of our conversation.
How do you monitor your kids’ play? It sounds like you’re pretty familiar with the content of what your kids play. Do you play with them?
I have hopeless hand-eye coordination, I don’t play anything (laughs). One thing is that you listen very carefully. We do actually monitor what they play quite extensively. It’s not a serious monitoring and we have to trust them, but I think the fact that they actually ask us about ratings (shows) that they’re carrying through on what we’re trying to do.
You mentioned that your son is “relentlessly reasonable” and outlined his reasons for playing the game. How did he present his case?
Mother is the ultimate holder of power in the house, as anyone who’s been married longer than six months knows (laughs). He presented the merits of the game as being good as a game, in terms of interactivity. He did actually ‘fess up and say that he’d played over at a friend’s house but hadn’t played it very much. He knew that the violence wasn’t too graphic. And he said that he really liked the fact that he could play online with his friends and they got to work cooperatively, and he enjoyed that. …
I felt that I had to take this request seriously. So I looked at the game … I didn’t play it, I looked at the box at the store and thought about it, contemplated it … and said “OK, you can get it.”
What did you discuss (about the Geneva Conventions)?
I asked him, "Did you see anything in the game that violates the conventions?" And he said, "Well, maybe the part where they shoot zombies." …
If you go to the National World War II Museum and you see the exhibit on the Pacific Theater, it was much worse than the European Theater. There were a lot of terrible things that happened in the European Theater, too, but I think because of the difference in culture and the difference of appearance that human rights deteriorated so much. My uncle would almost never talk about it.
But I think the main thing was that I didn’t want (Evan) to go into a scenario that was clearly in violation of that, and you slaughter a bunch of prisoners. They usually don’t set up the scenarios in that way, so it was more just to have that discussion and to have that basic check.
So, he has to play by all of those rules?
Well, sort of. Whether he actually incorporates that … I don’t think he actually holds up the page, but he’s aware that there are things called “Rules of War.”
It seems to me that this is metaphorical, really. Don’t just mindlessly go in and do anything in life, but think about the rules and moral implications of your actions, even in play. Is that what you were getting at here?
Yeah. I didn’t expect him to paste the thing by the console. He’d get killed immediately, checking his notes! (Laughs.) It was more like, give it some thought, particularly because it’s based on something real.
If Evan expressed an interest in playing a game like “Grand Theft Auto” or “Mass Effect,” both games with sexuality and other mature themes, would you allow that under certain conditions?
Nope. I think Grand Theft Auto is stupid. And it’s so outside the realm of (Evan and Simon’s) character. I guess Helen and I are just a couple of squares, in a way.
What would you advise other parents wrestling with this whole violence-in-games issue?
You really have to take a deep breath. I think every parent has to (do) what they feel is the responsible thing. I think it has to be informed — that’s the main thing. The other thing … you’re being judged on the level of these discussions. And the more decisions you make that seem arbitrary, the less they’re going to listen.
Internet "Addiction" May Fuel Teen Aggression
Teenagers who are preoccupied with their Internet time may be more prone to aggressive behavior, researchers reported Monday.
In a study of more than 9,400 Taiwanese teenagers, the researchers found that those with signs of Internet "addiction" were more likely to say they had hit, shoved or threatened someone in the past year.
The link remained when the investigators accounted for several other factors -- including the teenagers' scores on measures of self-esteem and depression, as well as their exposure to TV violence.
The findings, published online by the Journal of Adolescent Health, do not, however, prove that Internet addiction breeds violent behavior in children.
It is possible that violence-prone teenagers are more likely to obsessively use the Internet, explained lead researcher Dr. Chih-Hung Ko, of Kaohsiung Medical University in Taiwan.
However, the findings add to evidence from other studies that media -- whether TV, movies or video games -- can influence children's behavior. They also suggest that parents should pay close attention to their teenagers' Internet use, and to its potential effects on their real-life behavior, Ko told Reuters Health.
According to Ko's team, some signs of Internet addiction include preoccupation with online activities; "withdrawal" symptoms, like moodiness and irritability, after a few Internet-free days; and skipping other activities to devote more time to online ones.
In this study, teenagers who fit the addiction profile generally were more aggression-prone than their peers. But the type of Internet activity appeared to matter as well.
Online chatting, gambling and gaming, and spending time in online forums or adult pornography sites were all linked to aggressive behavior. In contrast, teens who devoted their time to online research and studying were less likely than their peers to be violence-prone.
According to Ko, certain online activities may encourage kids to "release their anger" or otherwise be aggressive in ways they normally would not in the real world. Whether this eventually pushes them to be more aggressive in real life is not yet clear, the researcher said.
Ko recommended that parents talk to their children about their Internet use and their general attitudes toward violence.
SOURCE: Journal of Adolescent Health, online February 23, 2009.
Identifying Yourself As A Lesbian Gets You Banned On XBOX Live
Teresa says that she was harassed by other players and later suspended from XBOX Live because she identified herself as a lesbian in her profile. When she appealed to Microsoft, she says they told her that other gamers found her sexual orientation "offensive."
As far as we know, Microsoft is unwilling to reconsider this position.
Analysis: Upgrading From XP To Vista To Windows 7? Good Luck
It's official: Microsoft has stated that the best option for upgrading from Windows XP to Windows 7 is not to skip the upgrade to Vista.
That recommendation did not go unheeded as reviewers in the CRN Test Center set out to find the most efficient and easiest way to get Windows 7 deployed on XP clients.
But after a series of tests on older and newer hardware, a number of noteworthy issues emerged: Microsoft's claim that hardware that works with Windows Vista will work with Windows 7 appears to be misleading at best; hardware that is older, but not near the end of most business life cycles, could be impossible to upgrade; and the extra step in the upgrade process adds complexity and time not required in previous upgrade cycles.
The Test Center came to this conclusion after an attempt at a simulated enterprise upgrade and other evaluations of the process on different pieces of PC hardware.
The initial plan: Create a master image on a PC running Windows XP, then upgrade that PC from XP to Vista Service Pack 1 to Windows 7 beta. Then use an imaging utility like Acronis' Snap Deploy to push the image out to other XP clients (all on the same hardware as the imaged machine) and overwrite the XP operating system on them with the Windows 7 image.
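The capture-and-restore idea behind that plan can be sketched in a few lines. This is a hedged, generic illustration only, not the Acronis Snap Deploy workflow the Test Center used; the file names are invented for the demo, and scratch files stand in for the real block devices a deployment would target:

```python
# Generic sketch of disk imaging: capture a source "disk" into a
# gzip-compressed image, then restore that image onto a target.
# Scratch files stand in for /dev/sda-style devices so this runs anywhere.
import gzip
import os
import shutil

def capture(device_path, image_path, chunk=1024 * 1024):
    """Stream a source disk (or file) into a gzip-compressed image."""
    with open(device_path, "rb") as src, gzip.open(image_path, "wb") as img:
        shutil.copyfileobj(src, img, chunk)

def restore(image_path, device_path, chunk=1024 * 1024):
    """Stream a compressed image back onto a target disk (or file)."""
    with gzip.open(image_path, "rb") as img, open(device_path, "wb") as dst:
        shutil.copyfileobj(img, dst, chunk)

# Demo: a 4MB scratch file plays the role of the XP master disk.
with open("master.bin", "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))
capture("master.bin", "master.img.gz")
restore("master.img.gz", "clone.bin")
assert open("master.bin", "rb").read() == open("clone.bin", "rb").read()
print("image restored byte-for-byte")
```

A real network push would stream the compressed image to each client over the wire instead of to a local file, which is essentially what the imaging utility automates.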
While we were prepared to run into some problems with creating a Windows 7 image and pushing it out over a network, we did not foresee the headaches involved in upgrading a single PC from XP to Vista to Windows 7. And believe us, there were headaches.
Using a three-and-a-half-year-old ThinkPad T43 as the source of the master image, the first step in the process, the transition from XP to Vista, was relatively smooth with just two notable issues. After the required reboot once Vista Service Pack 1 was installed, most of the pre-existing data and applications were intact, but the installed AVG antimalware client would not start, and Windows Update would not run. We put those issues to the side and continued on.
The next step was to get Windows 7 installed. Since Vista upgraded with minimal issues, the anticipation was that the Windows 7 beta upgrade would be similar.
No such luck.
The installation, in and of itself, took place without incident. Before the actual installation began, running the compatibility report warned of issues with the infrared and Synaptics pointing devices on the ThinkPad. There was also a caution about two third-party VPN clients that were installed.
No biggie, right? Wrong. After installation, the laptop would not fully boot into Windows 7. It booted fine in Safe Mode, yet the offending driver, file or service that caused the screen to go black, and the hard-drive activity to wind down to a barely discernible pulse, could not be identified after almost two days of troubleshooting.
Yes, this is an older, though not ancient, system we were trying to upgrade. Yet it boggles the mind that the laptop upgraded fairly easily to Vista Service Pack 1 and then flat-lined with Windows 7. So much for the Microsoft mantra "If it works in Vista, it will work in Windows 7."
A second attempt to create a master image was made on another T43 laptop (to rule out any hardware issues). This time, Vista was upgraded to Service Pack 2 beta, and all drivers that had newer versions were upgraded.
After Windows 7 was installed, Windows forced the system back to Vista, reporting that the version of Windows being installed was not supported.
At least we were spared the work of rolling back.
A test of the XP-to-Vista-to-Windows 7 path on a custom-built desktop with newer components, including an AMD quad-core Athlon and motherboard, went smoothly. Yet how many businesses have the option in these economic times to purchase spanking-new hardware? Why could we upgrade to Vista and not to Windows 7? Further examination will continue.
Yes, the software is in beta. However, support for XP is ending soon. It's clear that VARs and IT professionals will need to do a considerable amount of planning and testing before tackling an upgrade to Windows 7.
EU to Oblige Microsoft to Offer Competitors’ Browsers
The European Commission will require Microsoft to give users of its ubiquitous Windows operating system the opportunity to choose between different Internet browsers to avoid breaching EU competition rules, the bloc's antitrust spokesman told EurActiv.
The European Commission sent a statement of objections to Microsoft in January 2009 regarding competition concerns surrounding the bundling of its browser, Internet Explorer, to the popular Windows operating system. The document represents the first step of a procedure that is likely to end up in a fine or the imposition of remedies (EurActiv 19/01/09).
By tying Internet Explorer to Windows, Microsoft is exploiting its dominant position in the operating systems market to hamper competition in the browsers market, according to the Commission.
For the same reason, the EU institutions had already forced Microsoft to pay a large fine for bundling its media player to Windows (EurActiv 17/09/07).
Although the Commission is still officially waiting for a response from Microsoft to the complaints raised last January, the outcome of this new battle with the IT giant is already taking form.
"If the Commission's preliminary conclusions as outlined in the recent statement of objections were confirmed, the Commission would intend to impose remedies that enabled users and manufacturers to make an unbiased choice between Internet Explorer and competing third party web browsers," Jonathan Todd, spokesperson for EU Competition Commissioner Neelie Kroes, told EurActiv.
To this end, Microsoft will be obliged to design Windows in a way that allows users "to choose which competing web browser(s) instead of, or in addition to, Internet Explorer they want to install and which one they want to have as default," Todd explained. A possible solution could be to present Windows users with a so-called "ballot screen" from which they would choose their browser.
Alternatively, it could be left up to computer or mobile phone manufacturers, such as Dell or Nokia, which support Microsoft Windows by default, to provide users with different browsers, in agreement with Microsoft.
This line stems from the mistakes the Commission recognised it had made by imposing remedies on Microsoft in the Media Player case (see background). Indeed, although Microsoft is now obliged to offer a version of Windows without Media Player, for the most part, users are opting for the readily available bundled offer, which provides extra software at the same price. "That remedy was rubbish," acknowledged an official in the Commission's competition department.
However, the new idea on the table is not without drawbacks of its own. "How will the Commission define the browsers eligible to be offered as an option to Internet Explorer?" a lawyer for Opera, the browser maker that filed the new complaint against Microsoft, told EurActiv.
"There are not many," replied Jonathan Todd. Indeed, at the moment the browser market is shared between Internet Explorer (the market leader by far), Firefox, Safari, Google Chrome and Opera. But the digital world is in a permanent revolution and nobody can foresee how many browsers there will be in five years' time.
Microsoft responded by stating: "We are committed to conducting our business in full compliance with European law. We are studying the statement of objections." The US giant now has until mid-March to respond to the Commission, and might also ask for a hearing. Brussels will not adopt a final decision until it has received Microsoft's official reply.
UK Government Backs Open Source
Open source software allows users to read and alter code
The UK Government has said it will accelerate the use of open source software in public services.
Tom Watson MP, minister for digital engagement, said open source software would be on a level playing field with proprietary software like Windows.
Open source software will be adopted "when it delivers best value for money", the government said.
It added that public services should where possible avoid being "locked into proprietary software".
Open source software is generally licensed free of charge and embraces open standards, and the code that powers the programs can be modified without fear of trampling on intellectual property or copyright.
Announcing an open source and open standards action plan, the government said it would:
• ensure that the Government adopts open standards and uses these to communicate with the citizens and businesses that have adopted open source solutions
• ensure that open source solutions are considered properly and, where they deliver best value for money are selected for Government business solutions
• strengthen the skills, experience and capabilities within Government and in its suppliers to use open source to greatest advantage
• embed an open source culture of sharing, re-use and collaborative development across Government and its suppliers
• ensure that systems integrators and proprietary software suppliers demonstrate the same flexibility and ability to re-use their solutions and products as is inherent in open source.
Government departments will be required to adopt open source software when "there is no significant overall cost difference between open and non-open source products" because of its "inherent flexibility".
One Third of Dell Inspiron Mini 9s Sold Run Linux
Android may give Linux a boost on netbooks, but according to Dell, its Inspiron Mini 9s with Ubuntu have already seen a steady sales stream coupled with low return rates.
While MSI told us a few months back that Wind netbooks running SuSE Linux saw 4x higher return rates than that of XP machines, Dell has had quite the opposite experience with its Inspiron Mini 9 offering with Ubuntu. “A third of our Mini 9 mix is Linux, which is well above the standard attach rate for other systems that offer Linux. We have done a very good job explaining to folks what Linux is,” says Dell’s Jay Pinkert.
Dell attributes part of the Linux growth to competitive pricing on the Ubuntu SKUs. “When you look at the sweet spot for this category it is price sensitivity, and Linux enabled us to offer a lower price entry point,” added Dell senior product manager John New.
According to Dell, the return rate of Ubuntu-running Mini 9s is comparable to the XP rate, which we are told is "very low." "Our focus has been making sure that before the order is taken, the customer knows what he is getting," New added.
Beyond educating customers before they make a purchase, the success of Dell's Ubuntu offering may also be attributed to the easy-to-use graphical user interface Dell has created, which makes it easier to launch popular programs and Web tools. What do you attribute the success to?
$100 Linux Wall-Wart Launches
Marvell Semiconductor is shipping a hardware/software development kit suitable for always-on home automation devices and service gateways. Resembling a "wall-wart" power adapter, the SheevaPlug draws 5 Watts, comes with Linux, and boasts completely open hardware and software designs, Marvell says.
In typical use, the SheevaPlug draws about as much power as a night-light. Yet, with 512MB each of RAM and Flash, and a 1.2GHz CPU, the unobtrusive device approaches the computing power found in the servers of only a decade ago.
Furthermore, the platform is available in single quantities, and is priced within reach of students, hobbyists, and tinkerers. Its hardware design is completely open -- everything from schematics to Gerber files will be available on a website, Marvell said. For those that do wish to build products on the platform, volume pricing could fall to $50, Marvell expects.
On the software side, the company says ARM ports of several popular Linux distributions are already running, and included. More importantly, Marvell has committed to do everything it can to ensure the best Linux support for SheevaPlug going forward. Raja Mukhopadhyay, product marketing manager, commented, "Whatever the community needs to facilitate development, we will provide the critical resources needed to facilitate that."
Mukhopadhyay calls the SheevaPlug an "ideal platform for in-home service delivery," and adds that he is looking forward to seeing what kinds of products and services are built on top of the device. He said, "We believe that for the consumer and the service provider in the home, it's the right time for some disruptive application delivery. We believe that having a completely open hardware platform will be key in letting people productize it however they want."
Several products based on Marvell's SheevaPlug Plug Computer design have already been announced (see further below for details).
SheevaPlug's ARM9-like core
Marvell's Tate Tran, in a conversation with LinuxDevices, noted that Marvell licenses ARM's ARMv5 architecture. It uses the license to implement special-purpose cores compatible with the architecture, and thus able to run standard ARM software ports. Marvell's areas of expertise include application processors for the cellular handset market, embedded WiFi radio chips, networking gear, and disk drive controllers for storage devices. Tran said, "With more than a thousand CPU engineers in-house, Marvell is larger than ARM itself."
According to Tran, Marvell ships about a billion chips per year. Of those, 800 million are powered by Marvell's own cores. The "Sheeva" core powering the SheevaPlug's processor is one example.
The $100 SheevaPlug development platform and Plug Computer designs are built around the Marvell 88F6000, or "Kirkwood" SoC, which was introduced last year. The Plug Computer is based on the high-end 88F6281 version of the Kirkwood, with a Sheeva CPU core clocked to 1.2GHz. The Sheeva core combines elements of Marvell's earlier Feroceon and XScale architectures, both of which implemented ARM Ltd.'s ARMv5 architecture, similar to ARM Ltd.'s own "ARM9" cores.
The SheevaPlug Plug Computer is further equipped with 512MB of DRAM and 512MB of flash. The tiny embedded PC also includes gigabit Ethernet and USB 2.0 ports. Marvell did not release precise dimensions for the platform, but one early product based on the design is listed as measuring 4.0 x 2.5 x 2.0 inches. Plugging directly into a standard wall socket, the Plug Computer draws less than five watts under normal operation, compared to 25-100 watts for a PC being used as a home server, claims Marvell.
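Those power figures are easy to put in money terms. A minimal back-of-the-envelope sketch for an always-on device; the wattages are Marvell's claims from the article, while the $0.12/kWh electricity price is my assumption for illustration:

```python
# Annual electricity cost of an always-on device, from its steady draw.
# Wattages come from the article (5 W plug vs. 25-100 W PC home server);
# the $/kWh price is an assumed round figure, not from the article.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12  # assumed price, $/kWh

def annual_cost(watts, price=PRICE_PER_KWH):
    """Yearly cost in dollars of running a device that draws `watts` 24/7."""
    return watts / 1000 * HOURS_PER_YEAR * price

plug = annual_cost(5)
pc_low, pc_high = annual_cost(25), annual_cost(100)
print(f"SheevaPlug: ${plug:.2f}/yr; PC server: ${pc_low:.2f}-${pc_high:.2f}/yr")
```

At those assumptions the plug costs a few dollars a year to run, versus tens to over a hundred for a PC left on as a server, which is the efficiency argument Marvell is making.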
A SheevaPlug reference design is said to include board layout designs, software, manufacturing diagnostic tools, documentation, and other items. The SheevaPlug system development board (see diagram below) offers debug support, including direct connect to a PC via mini-USB cable, JTAG access, and a serial console interface, says the company.
The SheevaPlug development kit supports standard Linux 2.6 kernel distributions, including specific support for ARM ports of Fedora, Ubuntu, Debian, and Gentoo, says the company. The development platform includes an open-source API framework called RainDrop, currently under development, that will be used to integrate third-party applications in a standardized way. Support is also planned for a Java Virtual Machine and an OSGI stack -- technologies that home automation service providers have already invested in heavily.
Early supporters of the SheevaPlug Plug Computer design include the following companies, each with links to their respective websites:
• Cloud Engines Pogoplug -- The Pogoplug enables remote viewing of external storage devices via a web browser. The device connects to an external hard drive or memory stick via USB, and to a router via gigabit Ethernet, says Cloud Engines. The 4.0 x 2.5 x 2.0-inch device plugs directly into a wall socket, and enables remote uploading of multimedia, including access from an Apple iPhone. Regularly $100, it is now available for pre-order at a special price of $80, says the company.
• Ctera Networks CloudPlug -- This Plug Computer device converts any USB drive into a NAS device, and provides secure offsite backup, says Ctera. The CloudPlug is aimed primarily at service provider OEMs that want to offer online backup services to consumers and small businesses. Equipped with gigabit Ethernet and USB 2.0 ports, the device offers features including automatic and secure online backup, and data snapshot restore, says the company.
• Axentra HipServ -- Axentra has ported its home media server application to the SheevaPlug platform, providing applications for storing, managing, sharing, viewing, or listening to digital media content remotely over the web or across a home network, says the company. HipServ for SheevaPlug is said to enable connection to third-party services such as online backup and photo print apps, as well as social networking sites like Facebook and Flickr. Recently upgraded to HipServ 2.0, the software is built on Red Hat Linux Enterprise, and is said to support UPnP-AV, DLNA, WMC, and iTunes media standards.
• Eyecon Technologies Eyecon -- This "media companion" application enables remote mobile users, including iPhone users, to discover content from sources including the Internet, DVRs, PCs, and NAS devices. The Eyecon software can then direct the media files to any connected device in the home, says the company.
Kirkwood 88F6000 and the Sheeva core
Announced in early June, the Marvell 88F6000 "Kirkwood" is billed as an ultra-low-power SoC that targets IP-based home gateways, set-top boxes (STBs), home routers, media servers, and mobile Internet devices (MIDs). The 88F6000 integrates Sheeva cores that can clock to 2GHz, and draw two watts, Marvell says. The 88F6000 can act as either a main processor or as a co-processor, says the company.
The 88F6000 SoCs are offered in progressively more powerful versions: the 88F6190 (600MHz), 88F6180 (800MHz), 88F6192 (800MHz), and the 88F6281 (1.5GHz). The latter is clocked to 1.2GHz in the Marvell SheevaPlug design, and a future version will be able to clock to 2GHz, says the company. The 88F6000 SoCs offer various I/O, including two SATA ports, a gigabit Ethernet port, and a USB port, plus PCI Express and SDIO connections.
The Kirkwood's Sheeva core is also found in Marvell's superscalar MV78000 "Discovery Innovation Series" networking processors announced last May, as well as in Marvell's newer PXA168 platform. Marvell licenses the ARM architecture from ARM, Ltd., and implements its own cores, including the Sheeva.
SheevaPlug Plug Computer
The Sheeva core implements both Feroceon and XScale micro-architectures, and is backward compatible to both, while maintaining support for Intel's WMMX2 multimedia extensions, says Marvell. Kernel patches supporting Feroceon- and Sheeva-based SoCs, including the 88F6000 Kirkwood family, were recently merged into the mainline Linux 2.6.27 kernel. (For more on the Sheeva core, see our previous coverage of the PXA168.)
Stated Hajime Nakai, Director, Member of the Board, Buffalo, Inc., "Plug computing is a logical evolution for the digital home in the same way enterprise applications moved from servers to network appliances. Marvell is probably the only company that can pack so much processor performance into such a compact form factor."
The SheevaPlug development kit is available now for $100, says Marvell. More information may be found here.
The Netbook Effect: How Cheap Little Laptops Hit the Big Time
Netbooks prove that we finally know what PCs are actually for. Which is to say, not all that much.
Mary Lou Jepsen didn't set out to invent the netbook and turn the computer industry upside down. She was just trying to create a supercheap laptop. In 2005, Jepsen, a pioneering LCD screen designer, was tapped to lead the development of the machine that would become known as One Laptop per Child. Nicholas Negroponte, the longtime MIT Media Lab visionary, launched the project hoping to create an inexpensive computer for children in developing countries. It would have Wi-Fi, a color screen, and a full keyboard—and sell for about $100. At that price, third-world governments could buy millions and hand them out freely in rural villages. Plus, it had to be small, incredibly rugged, and able to run on minimal power. "Half of the world's children have no regular access to electricity," Jepsen points out.
The miserly constraints spurred her to be fiendishly resourceful. Instead of using a spinning hard drive she chose flash memory—the type in your USB thumb drive—because it draws very little juice and doesn't break when dropped. For software she picked Linux and other free, open source packages instead of paying for Microsoft's wares. She used an AMD Geode processor, which isn't very fast but requires less than a watt of power. And as the pièce de résistance, she devised an ingenious LCD panel that detects whether onscreen images are static (like when you're reading a document) and tells the main processor to shut down, saving precious electricity.
To build the laptop, dubbed the XO-1, One Laptop per Child hired the Taiwanese firm Quanta. It's hardly a household name, but Quanta is the largest laptop manufacturer in the world. Odds are that parts of the machine on your desk, whether it's from Apple, Dell, or Hewlett-Packard, were made by Quanta—possibly even designed by Quanta. Like most Taiwanese computermakers, it employs some of the sharpest engineers on the planet. They solved many of Jepsen's most daunting engineering challenges, and by 2007, the OLPC was shaping up. The poor kids of the world would have their notebook—if not quite for $100, for not a whole lot more.
Inspired (or perhaps a bit scared) by the OLPC project, Asustek—Quanta's archrival in Taiwan and the world's seventh-largest notebook maker—began crafting its own inexpensive, low-performance computer. It, too, would be built cheaply using Linux, flash memory, and a tiny 7-inch screen. It had no DVD drive and wasn't potent enough to run programs like Photoshop. Indeed, Asustek intended it mainly for checking email and surfing the Web. Its customers, the company figured, would be children, seniors, and the emerging middle class in India or China who can't afford a full $1,000 laptop.
What happened was something entirely different. When Asustek launched the Eee PC in fall 2007, it sold out the entire 350,000-unit inventory in a few months. Eee PCs weren't bought by people in poor countries but by middle-class consumers in western Europe and the US, people who wanted a second laptop to carry in a handbag for peeking at YouTube or Facebook wherever they were. Soon the major PC brands—Dell, HP, Lenovo—were scrambling to catch up; by fall 2008, nearly every US computermaker had rushed a teensy $400 netbook to market.
All of which is, when you think about it, incredibly weird. Netbooks violate all the laws of the computer hardware business. Traditionally, development trickles down from the high end to the mass market. PC makers target early adopters with new, ultrapowerful features. Years later, those innovations spread to lower-end models.
But Jepsen's design trickled up. In the process of creating a laptop to satisfy the needs of poor people, she revealed something about traditional PC users. They didn't want more out of a laptop—they wanted less.
Spec Shot: Laptop vs. Netbook
Many netbooks trade the speedy onboard processors and roomy hard drives of a full-size laptop for online apps and small—but fast—solid state drives. The result? A formidable machine at a third of the price.
By the end of 2008, Asustek had sold 5 million netbooks, and other brands together had sold 10 million. (Europe in particular has gone mad for netbooks; sales there are eight times higher than in the US.) In a single year, netbooks had become 7 percent of the world's entire laptop market. Next year it will be 12 percent.
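Taken at face value, the unit and share figures above imply the size of the overall laptop market; a quick arithmetic check:

```python
# Sanity arithmetic on the article's own figures: 5M Asustek netbooks plus
# 10M from other brands, stated to be 7 percent of the world laptop market.
asustek_units = 5_000_000
other_units = 10_000_000
netbook_units = asustek_units + other_units
netbook_share = 0.07

# If 15M units really are 7%, the implied total laptop market is ~214M units.
implied_total = netbook_units / netbook_share
print(f"{netbook_units:,} netbooks ~= {netbook_share:.0%} of "
      f"{implied_total:,.0f} laptops")
```
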
"We started inventing technology for the bottom of the pyramid," Jepsen says, "but the top of the pyramid wants it too." This bit of trickle-up innovation, this netbook, might well reshape the computer industry—if it doesn't kill it first.
I wrote this story on a netbook, and if you had peeked over my shoulder, you would have seen precisely two icons on my desktop: the Firefox browser and a trash can. Nothing else.
It turns out that about 95 percent of what I do on a computer can now be accomplished through a browser. I use it for updating Twitter and Facebook and for blogging. Meebo.com lets me log into several instant-messaging accounts simultaneously. Last.fm gives me tunes, and webmail does the email. I use Google Docs for word processing, and if I need to record video, I can do it directly from webcam to YouTube. Come to think of it, because none of my documents reside on the netbook, I'm not sure I even need the trash can.
Netbooks have ended the performance wars. It used to be that when you went to an electronics store to buy a computer, you picked the most powerful one you could afford. Because, who knew? Maybe someday you'd need to play a cutting-edge videogame or edit your masterpiece indie flick. For 15 years, the PC industry obliged our what-if paranoia by pushing performance. Intel and AMD tossed out blisteringly fast chips, hard drives went on a terabyte gallop, RAM exploded, and high-end graphics cards let you play Blu-ray movies on your sprawling 17-inch laptop screen. That dream machine could do almost anything.
But here's the catch: Most of the time, we do almost nothing. Our most common tasks—email, Web surfing, watching streamed videos—require very little processing power. Only a few people, like graphic designers and hardcore gamers, actually need heavy-duty hardware. For years now, without anyone really noticing, the PC industry has functioned like a car company selling SUVs: It pushed absurdly powerful machines because the profit margins were high, while customers lapped up the fantasy that they could go off-roading, even though they never did. So coders took advantage of that surplus power to write ever-bulkier applications and operating systems.
What netbook makers have done, in effect, is turn back the clock: Their machines perform the way laptops did four years ago. And it turns out that four years ago (more or less) is plenty. "Regular computers are so fast, you really can't tell the difference between 1.6 giga and 2 giga," says Andy Tung, vice president of US sales for MSI, the Taiwanese maker of the Wind netbook. "We can tell the difference between one second and two seconds, but not between 0.0001 and 0.0002 second." For most of today's computing tasks, the biggest performance drags aren't inside the machine. They're outside. Is your Wi-Fi signal strong? Is Twitter down again?
Netbooks are evidence that we now know what personal computers are for. Which is to say, a pretty small list of things that are conducted almost entirely online. This was Asustek's epiphany. It got laptop prices under $300 by crafting a device that makes absolutely no sense when it's not online. Consider: The Eee's original flash drive was only 4 gigs. That's so small you need to host all your pictures, videos, and files online—and install minimal native software—because there's simply no room inside your machine.
Netbooks prove that the "cloud" is no longer just hype. It is now reasonable to design computers that outsource the difficult work somewhere else. The cloud tail is wagging the hardware dog.
Most consumers have never heard of Taiwan's quiet, unheralded PC firms, but they've been behind some of the most important hardware of the past three decades. Quanta first gained notice in the '80s for cleverly cramming new components into notebooks. Then, in 2001, Apple contracted with the company to design its G4 notebook from top to bottom. The product was a spectacular success, and Quanta was soon doing engineering for every other major PC maker. Asustek and MSI, the two other giants of the Taiwanese laptop world, also branched out from motherboards into everything from LCD TVs to mobile phones. These companies are enormous: Quanta had sales of $25 billion last year, more than marquee firms like Amazon.com, Texas Instruments, and Electronic Arts.
Even though the Taiwanese manufacturers remained subservient to the well-known PC brands, they soaked up tons of knowledge over the years. For instance, when Intel created its 486 chip in 1989, Asustek built a compatible motherboard before Intel could make its own board work. Later, Asustek was producing components for Apple laptops. "Nine times out of 10," recalls John Jacobs, a former Apple manager who now covers the LCD market as an analyst for DisplaySearch, "when we said 'Jump,' they said 'How high?' That's how Asustek learned a lot."
But for all their success, companies like Asustek and MSI were outsiders. And when Asustek released the Eee netbook, big firms like Dell, HP, and Apple did nothing for months. "All the other brands were thinking, 'Oh, this is crap,'" recalls Lillian Lin, Asustek's global marketing director.
Dell and HP weren't going to pioneer a $400 laptop, because they were already selling laptops for $1,000. Why mess with a good thing? MSI had no laptop business at all, and Asustek had only a small business selling full-price machines under its own brand, mostly in Asia and Europe. Since the Taiwanese weren't addicted to selling SUV-class computers, they could swoop in like Honda with smaller, more efficient models. They also knew how to design on the cheap after years of producing motherboards with excruciatingly tiny margins.
In The Innovator's Dilemma, Clayton Christensen famously argued that true breakthroughs almost always come from upstarts, since profitable firms rarely want to upend their business models. "Netbooks are a classic Christensenian disruptive innovation for the PC industry," says Willy Shih, a Harvard Business School professor who has studied both Quanta's work on the One Laptop per Child project and Asustek's development of the netbook.
The Taiwanese firms, Shih argues, now have enormous clout in the PC industry. In the US, we regard branding and marketing—convincing people what to buy—as core business functions. What Asustek proved is that the companies with real leverage are the ones that actually make desirable products. The Taiwanese laptop builders possess the atom-hacking smarts that once defined America but which have atrophied here along with our industrial base. As far as laptop manufacturing goes, Taiwan essentially now owns the market; the devices aren't produced in significant volumes anywhere else.
If you had asked Taiwanese hardware CEOs a few years ago about their relationship with Dell, HP, and Apple, they'd have told you that the American companies did the branding and sales while outsourcing their design and production to Taiwan. Today the view from Asia is increasingly the reverse. "When I talk to them now," Shih laughs, "they say, 'We outsource our branding and sales to them.'"
"But what about Photoshop?" It's the standard retort from those who dismiss netbooks as children's toys. Sure, a dinky 1.6-GHz chip and Linux are fine for email and silly things like YouTube. But what about when you need to do some real computing, like sophisticated photo editing? The cloud won't help you there, kid.
In the narrowest sense, this is true: A really powerful application like Adobe Photoshop demands a much faster processor. But consider my experience: This spring, after my regular Windows XP laptop began crashing twice a day, I reformatted the hard drive. As I went about reinstalling my software, I couldn't find my Photoshop disc. I forgot about it—until a week later, when I was blogging and needed to tweak a photo. Frustrated, I went online and discovered FotoFlexer, one of several free Web-based editing tools. I uploaded my picture, and in about one minute I'd cropped it, deepened the color saturation, and sharpened it.
I haven't used Photoshop since.
Keep in mind that I like Photoshop. I'm not doing this to make any geeky ideological point about how bleeding-edge I am or how much I hate paying for boxed software. It's simply that the hassle of finding my Photoshop disc now exceeds the ease of using FotoFlexer. The code for working with the browser-based app is a mere 900 KB, and "to the average user, that comes down really fast," as Sharam Shirazi, CEO of Arbor Labs, which created it, points out to me.
My Photoshop experience is just one example of how the software industry is changing. It used to be that coders were forced to produce bloatware with endless features because they had to guess what customers might want to do. But if you design a piece of software that lives in the cloud, you know what your customers are doing—you can watch them in real time. Shirazi's firm discovered that FotoFlexer users rarely do fancy editing; the most frequently used features are tools for drawing text and scribbles on pictures. Or consider the Writely app, which eventually became the word processor part of Google Docs: When Sam Shillace first put it online, he found to his surprise that what users wanted most was a way to let several people edit a document together.
"It used to be, 'I'm buying a paint program, and I'll get the one with 5,000 features. I don't know what 2,000 of those features are, but I'll get it just in case,'" Shillace says. "Today it's just, 'Which one is most easily available? Which one is ready online?' So applications are competing on merit; they're not competing on bulk."
Netbooks are so cheap, they're reshaping the fundamental economics of the PC business. Last October, British mobile-phone carrier Vodafone offered its customers a new deal: If they signed a two-year contract for high-speed wireless data, Vodafone would give them a Dell Mini 9 netbook. That isn't quite the same as getting a free computer; after all, Vodafone bills users $1,800 on that two-year contract, so it can afford to throw in the netbook. (In December, RadioShack offered a similar deal: a $99 Acer Aspire netbook for anyone who signed up for two years of AT&T's 3G service.)
What these deals signal is that computers are developing the same economics as mobile phones. Hardware is becoming a commodity. It's difficult to charge for. What's really valuable—what people will pay through the nose for—is the ability to communicate.
So netbooks have sent a sort of hot-cold shudder through the computer industry. Sure, it's great to have an exploding new product category. But this is a category in which it's incredibly hard to make a dime: At $300, a netbook sells for barely more than the sum of its parts—and sometimes less. "The profit margins on these things are nonexistent," chuckles Paul Goldenberg, managing director of Digital Gadgets, which created a line of netbooks under the Sylvania brand. "Everyone is saying 'We're losing money now, but we'll make it up on volume, right?'"
Nearly every company in the PC industry has had its game plan uprooted by netbooks. Microsoft had intended to stop selling Windows XP this summer, driving customers to its more lucrative Vista operating system. But when Linux roared out of the gate on netbooks, Microsoft quickly backpedaled, extending XP for another two years—specifically for netbooks. Most experts guess that Redmond can charge barely $15 for XP on a netbook, less than a quarter of what it previously sold for. (Microsoft corporate vice president Brad Brooks assures me the company is earning "good money" on the devices and plans to make sure its next OS, Windows 7, can run on netbooks—Vista performs poorly on them.) For its part, Intel is selling millions of its low-power Atom chips to netbook manufacturers. "We see this as our next billion-dollar market," says Anil Nanduri, Intel's technical marketing manager—except that the company makes only a fraction as much money on an Atom chip as on a more powerful Celeron or Pentium in a full-size laptop.
The great terror in the PC industry is that it's created a $300 device so good, most people will simply no longer feel a need to shell out $1,000 for a portable computer. They pray that netbooks remain a "secondary buy"—the little mobile thingy you get after you already own a normal-size laptop. But it's also possible that the next time you're replacing an aging laptop, you'll walk into the store and wonder, "Why exactly am I paying so much for a machine that I use for nothing but email and the Web?" And Microsoft and Intel and Dell and HP and Lenovo will die a little bit inside that day.
The decision is probably out of American hands. Indeed, living in the US—where netbooks are only just taking off—it can be hard to grasp just how popular the devices have become in Europe and Asia and the degree to which they're already altering the landscape. As Shih told me, "I was talking to the chair of one of the major Taiwanese notebook manufacturers, and he said, 'This is where my next billion customers comes from.' And he was not referring to the US." He meant the BRIC countries—Brazil, Russia, India, China—where billions of very price-conscious customers have yet to buy their first computer. And the decisions they make—Windows or Linux? Microsoft wares or free cloud apps?—will have enormous influence on how computing evolves in the next few years.
Netbooks could drive production of even crazily cheaper, lighter-weight computers. "If everything you're doing is online, then the netbook becomes a screen with a radio chip. So why do you need a motherboard?" OLPC designer Mary Lou Jepsen says. "Especially if you want the batteries to last. Why not just make it a screen and a really cheap $2 to $5 radio chip?" The cloud is also probably going to get powerful in ways that now seem like fantasy. AMD is working on an experimental 3-D graphics server farm that would run high-end videogames, squirting a stream out to portable devices so you could play even the most outrageously lush games without a fancy onboard processor. Patrick Moorhead, AMD's vice president of marketing, recalls that in 2007 gamers had to buy special powerful desktop machines loaded with RAM and $600 graphics cards to play Crysis: "Now imagine you've got servers running Crysis and streaming it to an iPhone or a netbook, sending just the vectors that let you navigate the game."
Because this is the future of hardware. For a few users who need a high-performance device, PC makers will offer ever-more-blisteringly fast, water-cooled boxes with screens the size of your living room—at $2,000 a pop. For everyone else—lawyers looking for something to do on the train, women desperate for something that fits in their handbag—netbooks will dominate. It's the rise of the very small machines.
Intel Moves Against Psion for 'Netbook' Trademark
Intel has filed for a declaratory judgment against Psion Teklogix in order to continue using the term "Netbook" generically. The legal filing also revealed, as a separate matter, that Google would prohibit search advertisements that include the term "netbook."
What's the difference between a Netbook and a notebook? More than the design, according to Psion Teklogix.
Psion "purports to be the owner of U.S. Trademark Registration No. 2404976 issued on November 21, 2000 for the mark Netbook for use in connection with laptop computer," according to an Intel legal filing in the United States District Court for the Northern District of California.
Not surprisingly, Intel and others, including Dell, don't agree.
"Our view is that the term 'netbook' is a widely used generic term that describes a class of affordable computing devices, much like the term 'notebook' or 'ultra-mobile PC,'" Intel said in a statement Wednesday.
Intel continued: "In order to continue to use the generic term 'netbook' we filed the case. We're asking for a decision to clarify that the use of 'netbook' does not infringe anyone's rights."
Psion Teklogix, which describes itself as a "provider of mobile computing solutions," has been sending cease-and-desist letters to manufacturers, retailers, bloggers and others since December claiming the trademark. Before it became Psion Teklogix, Psion PLC made handheld "organizers" in the 1990s whose tiny clamshell design resembled the smallest Netbooks offered today by Asus or clamshell mobile Internet devices (MIDs) offered by companies like Compal and OQO.
Part of the Intel counter-claim is that the chipmaker believes Psion did not use the Netbook trademark on laptop computers for five consecutive years following the date of registration in 2000--apparently a legal requirement--mostly because Psion's mobile computers did not succeed in the market and were discontinued, according to Intel.
Intel cited a letter in its suit from Psion's legal counsel that asserted that "Intel aided, abetted and otherwise induced manufacturers and retailers" to "use the term 'netbook.'"
The Intel suit for a declaratory judgment also cited the fact that Google informed Intel that it "would prohibit all advertisements that include the term 'netbook' in the ad text." This was the result of a legal action by Psion against Google that "had the immediate effect of effectively ending Intel's (and all others') ability to advertise the netbook category of computers via search engine marketing."
Full Text: An Epic Bill Gates E-Mail Rant
Sometimes, software isn't so magical. Even for Bill Gates.
For the opening piece in our series on Gates leaving daily life at Microsoft, one goal was to give a clear picture of the Microsoft co-founder's role inside the company, as a gauge of the impact his departure will have. As part of that, I went back through the internal e-mails turned over in the antitrust suits against the company, looking for new insights into his personality.
Read on past the jump for one of the gems that turned up, showing Gates in the role of chief rabble-rouser. (Original document: PDF, 5 pages.) It shows that even the Microsoft co-founder -- who champions the "magic of software" -- isn't immune to the frustrations of everyday computer users. Keep in mind that this was more than five years ago, so it doesn't necessarily reflect the specific state of things now. At the bottom, see what Gates said when I asked him about the message last week.
---- Original Message ----
When we were concluding our interview last week, I showed Gates a printout of the e-mail and asked if he ever got Movie Maker to work. Gates noted that Microsoft plans to include Movie Maker as part of Windows Live, so people will get the program when they download that online package. The company isn't confirming that officially yet, but it's not a complete surprise. See this Wikipedia entry and this related post on LiveSide.net. (Site temporarily down as of Tuesday morning.)
As for the message, Gates smiled and said, "There's not a day that I don't send a piece of e-mail ... like that piece of e-mail. That's my job."
Follow-up: No BS: A glimpse of the real Bill Gates
Update: Dave Ross of KIRO-AM/710 in Seattle did a dramatic reading of the message on air Wednesday morning.
Update, Friday: During his farewell event at Microsoft this morning, Gates referred to this, and poked a little fun at us: "One of the newspapers had some e-mail that I sent about how maybe Windows could have been better at something, and they said, 'This is a shocking e-mail. Shocking!' And I said, 'What do you think I do all day? Sending an e-mail like that, that is my job. That's what it's all about. We're here to make things better.'"
Dell Income Drops 48% as It Seeks to Cut Costs
During past downturns in technology spending, Dell tended to boast about its ability to weather the conditions better than competitors bogged down by higher-priced goods and cumbersome business models.
In 2009, no such gloating has been heard.
Like its peers, Dell has watched as businesses and consumers have sharply curtailed technology purchases. The decline in sales has proved so severe that Dell’s earnings have fallen to the lowest level since 2002.
On Thursday, Dell reported that net income for the fourth quarter, ended Jan. 30, fell 48 percent, to $351 million from $679 million reported in the fourth quarter of the prior year. Revenue tumbled 16 percent, to $13.4 billion from $16 billion.
Excluding one-time expenses, Dell earned 29 cents a share, 3 cents better than the consensus forecast of Wall Street analysts, according to Thomson Reuters.
Instead of tough talk about trouncing competitors, Dell executives are focusing on their efforts to streamline the company.
The stronger-than-expected earnings reflect work done as part of a $3 billion cost-cutting program, which has included layoffs, the closing of manufacturing plants and a shift toward using contract manufacturers to build more of its laptops.
Dell, based in Round Rock, Tex., said Thursday that it would cut $1 billion more in annual costs over the next two years to improve the bottom line.
“We will be the first to admit that this is a work in progress, and there is more to do,” Brian T. Gladden, the company’s chief financial officer, told Wall Street analysts in a call to discuss the quarterly results.
Meanwhile, Dell’s 18-month effort to move away from personal computers and increase sales in potential higher-profit growth areas has stalled with the broader economy.
“They are doing what they can, but are basically still treading water,” said Richard Kugele, an analyst with Needham & Company.
The main drag on Dell’s revenue came from plummeting desktop and laptop computer sales, which dropped 27 percent and 17 percent, respectively. Sales of Dell’s servers, software and services also fell, leaving storage products as the only area of growth.
Michael S. Dell, the company’s founder and chief executive, insisted that Dell had made progress in the turnaround effort he started in 2007.
“Our strategy is to develop disruptive technology and innovation and shift our business to higher margin products and services,” Mr. Dell said on the call.
But in fact, Dell’s overall business appears very similar to what it was last year.
For example, servers and higher-profit services and software account for the same overall percentage of Dell’s revenue as they did last year. And despite making its way into 24,000 retail outlets, Dell derives only 2 percent more of its revenue from consumer sales than it did a year ago.
Investors may give Dell some leeway with its long-term transition given the state of the economy, which makes growth and expansion into new markets difficult. Still, they are demanding that Dell maintain profits in the near term as it chases changes in strategy.
“I think the issue is that their profitability today is much lower than it was historically,” said A. M. Sacconaghi, a securities analyst with Sanford C. Bernstein.
Dell’s once-vaunted direct sales model continues to trail the overall profitability of Hewlett-Packard’s PC operation, Mr. Sacconaghi said.
Competitors like Hewlett-Packard and I.B.M. have also proved better equipped to deal with stagnant hardware orders because of vast services businesses, software sales and long-term contracts with customers.
Shares of Dell fell 15 cents, to $8.21 Thursday, before the earnings announcement. In after-hours trading, Dell shares rose more than 1.8 percent, to $8.36.
Sharing Consumers’ Tastes in Cellphone Web Surfing
Claire Cain Miller
On Monday, a Denver-based start-up, Buzzwire, is unveiling a new site for cellphone users that will tap their collective preferences to create a guide to the best Web content for mobile users.
Buzzwire began in 2006 as a service to help wireless carriers and video producers play videos on mobile phones. With its new site, which users can reach on their mobile browsers at m.buzzwire.com, it hopes to expand its reach.
By 2012, “people will be browsing the Web more on their phones than on PCs or laptops,” said Greg Osberg, Buzzwire’s chief executive. “This is the first site 100 percent dedicated to the best of the mobile Web, with nothing to do with the PC Web.”
The number of people who surf the Internet on phones has doubled since 2006, according to Nielsen Mobile, to 40 million. Still, only 16 percent of people with cellphones use them to go online, and those that do visit an average of six sites a month, versus 100 on their computers.
Mr. Osberg wants to change that. He joined Buzzwire in November after leaving as president and worldwide publisher of Newsweek, where he revamped the magazine’s Web site and started its mobile site.
Like Digg, which lists stories recommended by its users, Buzzwire will let readers pick stories they want to appear on the site by sending a text message or e-mail message or clicking on a Buzzwire button at the bottom of a story. Unlike Digg, Buzzwire has four editors who also cull articles from around the mobile Web.
Readers can scan the 20 most popular stories; navigate by topic, such as the Academy Awards, politics or health; or see the most recent links. They can also keep a list of stories they want to read later and, à la Twitter, follow their friends to see what they are reading and watching.
Buzzwire aims to direct readers to the best content on the mobile Web and help publishers that have struggled to lure readers, Mr. Osberg said.
TVGuide.com gets 16 million visitors each month but its mobile Web site gets only 500,000. Joining with Buzzwire “is part of a larger strategy of making sure we reach consumers wherever they are,” said Paul Greenberg, general manager of TVGuide.com.
Buzzwire makes money by licensing its mobile video technology, which Verizon Wireless, AT&T and Alltel already use. It will also sell ads on the site. The Deutsch advertising agency has bought all the ad space for 90 days and will run banner and video ads for clients that include DirecTV and Kodak, said Peter Gardiner, the agency’s chief media officer.
Advertisers have had a hard time figuring out how to reach people on their phones, he said, because TV and Web ads do not work well on phones and advertisers are wary of assaulting readers on such a personal device.
“These are fears in the mobile industry,” he said. “The reason we’re getting involved with these guys so deeply is it’s a chance to test different approaches and see what consumers like and don’t like.”
Chicago Bears Fan Hit for Thirty Grand for a Bit of Slingbox
And you thought your roaming charges were high
A hapless Slingbox user managed to run up a data bill of $28,067.31 watching a game of American football, despite being aboard a docked cruise liner and having an unlimited data tariff, thanks to a technical hitch or two.
The story comes from the Chicago Sun-Times, which managed to get AT&T to credit Wayne Burdick with most of the bill after he himself had failed to make the network understand that he should never have been able to run up international roaming charges, at two cents per kilobyte, without leaving the USA.
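For a sense of scale, here is a quick back-of-the-envelope sketch (my arithmetic, not the Sun-Times' reporting) of how much data a bill that size implies at two cents per kilobyte:

```python
# Rough check: how many kilobytes does a $28,067.31 bill buy
# at an international roaming rate of two cents per kilobyte?
RATE_PER_KB = 0.02      # USD per kilobyte
BILL = 28_067.31        # total bill in USD

kilobytes = BILL / RATE_PER_KB
gigabytes = kilobytes / (1024 * 1024)

print(f"{kilobytes:,.0f} KB, roughly {gigabytes:.2f} GB")
```

That works out to about 1.4 million kilobytes, or a bit over 1.3 GB — an entirely plausible amount of streamed video for one football game, which is what makes per-kilobyte roaming rates so ruinous.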
A Slingbox grabs video from the user's home and streams it over the internet, in this case to a laptop over Burdick's cellular connection, which he thought was connected (through a datacard) to AT&T's network. Unfortunately for Burdick he was actually connected to the ship's onboard network, which accounts for the international roaming, and his datacard was unable to display the repeated warnings that AT&T kept sending him over SMS.
Less clear is why the onboard network was operational - ships often offer cellular access, using a microcell connected to a satellite link and billed at international roaming rates. But they are supposed to be switched off when nearing port to avoid interfering with local networks. In fact most ship-based GSM equipment is fitted with GPS to prevent it being used within range of a land network, so the onboard network shouldn't have been in operation at all.
When Burdick received the bill he complained to AT&T, who eventually offered to reduce the bill to $6,000 - hardly comparable to the $220 he reckons is average. Luckily the chaps over at the Sun-Times were able to argue his case, and AT&T has credited Wayne with $27,776.66, though really it shouldn't have taken media involvement to fix a case like this.
Roaming rates like that, without even leaving the country, make the EU look like a haven for the international traveler - perhaps once Ms Reding has battered EU operators for unreasonable behaviour, we can ship her over the pond to sort out the septics.
Supreme Court Rules for AT&T in Price Dispute with Net Service Provider
The Supreme Court Wednesday unanimously ruled for AT&T in the company's antitrust dispute with an Internet service provider over prices for high-speed Internet access.
The court reversed a decision by the 9th U.S. Circuit Court of Appeals. The San Francisco-based appeals court had ruled the telecom company was setting its wholesale prices so high that an Internet service provider could not compete with the low prices AT&T charged in the retail market.
The plaintiff in the lawsuit, LinkLine Communications, buys access to AT&T's transmission lines. LinkLine then competes with AT&T in selling high-speed Internet access.
"Under these circumstances, AT&T was not required to offer this service at the wholesale prices the plaintiff would have preferred," Chief Justice John Roberts wrote.
Roberts was joined in his opinion by Justices Antonin Scalia, Clarence Thomas, Anthony Kennedy and Samuel Alito. Justices Stephen G. Breyer, Ruth Bader Ginsburg, John Paul Stevens and David Souter concurred only with the judgment.
The ruling does not end the case. The justices sent the case back to a trial judge, who can decide whether AT&T was charging too little for its product in hopes of running its competitors out of business.
Verizon Wireless Takes Aim at Pesky 'Rabbit'
It's not often I cheer for a corporate giant unleashing its legal might, but let's make an exception for the lawsuit Verizon Wireless has filed this week against a film distributor.
In fact, it might even be enough for me to forgive Verizon for its own robo-call assault on my house.
From the Verizon Wireless press release:
What could this film company possibly be thinking?
A Utah newspaper managed to get a comment:
No way my kids are seeing this movie (unless Mom caves; that's her call).
U.S. Cable, Programmers Set for Web TV by Summer
Cable and satellite TV providers are working on a free online video service to deliver up-to-date cable shows to computers and mobile phones, but the industry is worried the project could cannibalize pay-TV's long-standing revenue model.
Millions of U.S. consumers are already watching broadcast TV shows on free websites such as Hulu.com, but cable network programming is available primarily on cable and satellite TV services, such as Comcast Corp and DirecTV Group Inc, or nascent video services from phone companies.
"This is about bringing new amounts of content to the Internet in a business model that continues to support the creation of that content," said Sam Schwartz, executive vice president of Comcast Interactive Media.
Comcast is leading talks with programmers like Viacom Inc and Discovery Communications Inc, with Time Warner Cable, DirecTV and others involved. Their plans are at different stages, and cable operators will likely discuss putting cable programming online at an industry meeting this week, according to people familiar with the plans.
The project would let cable and satellite TV subscribers watch up-to-date cable shows on the Web, and possibly on mobile phones, for free, possibly as soon as this summer, the sources said.
The idea is to give customers added flexibility to view their favorite shows. It is also seen as a preemptive strike against possible 'cord-cutting' of video services, particularly by younger subscribers used to watching other programs online.
But the project presents a number of business and technology challenges to both operators and programmers.
Cable programmers like Viacom's MTV Networks make money from advertising sales, as well as affiliate fees that cable and satellite TV service providers pay.
One stumbling block in agreeing on an online business model is the low level of revenue generated by Internet advertising. On a dollar-for-dollar basis, online video advertising is a poor relation to TV.
Final decisions on business models for cable programming online are in flux, said Comcast's Schwartz. "You're going to see a lot of different models out there," he said.
Whatever business models are agreed upon will depend to some extent on overcoming technological challenges.
One involves identifying which customers have the right to view a show, and managing digital rights to avoid over-wide distribution. There is also the need to accurately 'time' the content so it is available to users for a restricted period -- so as not to jeopardize other media content distribution systems such as video on demand and DVD releases.
Yet executives also acknowledge the risk of ignoring the Web, as demonstrated by the music and newspaper industries, which have suffered as consumers change their media consumption habits.
Comcast sees the project, which it calls On Demand Online, as a natural progression from digital video recorders and video-on-demand channels.
It is working on technology to authenticate subscribers who go to Comcast's Fancast and Comcast.net websites for video. This would effectively create a "wall" behind which programmers might feel comfortable keeping some of their premium shows.
"For us, this would be a step to put even more content online," said Denise Denson, executive vice president of content distribution and marketing at MTV Networks.
"Operators and programmers have shared interests in that we'd like people to keep watching linear TV while growing new platforms and products, and this further aligns those interests in many ways," she said.
Comcast's Web video delivery unit, thePlatform, is working on the project, and its CEO, Ian Blaine, believes many of the initial problems will get ironed out.
It would be easiest for operators to develop their own website, such as Fancast, for online video offerings. But this could cause discomfort for programmers, who may want their own advertising-supported site, such as broadcast TV's Hulu.com. Hulu, owned by NBC and News Corp, is one of the most popular U.S. video sites with 24.5 million users in December, according to comScore.
Mobile, DVR Video Log Fastest Growth
Online video is cutting into television, albeit slowly.
People are watching more video than ever on every type of screen -- television, the Internet and mobile devices -- according to a report on the nation's viewing habits to be released Monday by Nielsen Co.
Nielsen found that during the fourth quarter of 2008 the number of users and the time spent watching each of the three screen media rose from the previous quarter. "If people like video, they like it wherever they can get it," said Susan Whiting, vice chair of Nielsen.
The biggest jumps came in the number of viewers watching video on mobile devices and "time shifted" television, that is, programming viewed with a digital-video recorder. Each rose about 9% in the fourth quarter from the third quarter. Roughly 11 million people used mobile viewing and 74 million people watched DVR programming. Internet video users increased 2.3% to 123 million people.
Traditional television is still the most popular by far. Roughly 285 million of the nation's 306 million people watched TV in their home in the fourth quarter, up about three million people, or 1%, from the prior quarter.
Television also wins in terms of the time spent on each medium. People spent more time watching TV: an average of 151 hours a month or five hours a day -- a record high, according to Nielsen. That is a 7% increase, or roughly 11 hours more.
Internet video viewers, on the other hand, spent just under three hours a month watching online video, 22 minutes more than in the prior quarter, a nearly 15% increase.
In both time spent and number of viewers, Internet video grew at a rate twice that of television. Michael Vorhaus, president of consulting firm Frank N. Magid Associates, points to the growth as a threat to traditional television viewing. "It's not going to go away and it's not going to get better," he said.
For the first time in the Nielsen study, people ages 18-24 spent nearly the same amount of time -- roughly five hours -- watching Internet video each month as they did watching DVR programs. Other age brackets watched half as much Internet video as DVR video, or less.
Online video viewing is increasingly seen as more valuable than DVR viewing because, unlike DVR viewers, online viewers can't fast-forward through the advertising.
Television viewing, however, remains the most valuable for advertisers because of its breadth of audience.
Digital Transition Opens Door to 'WiFi on Steroids'
Canada finds itself lagging more than two years behind the United States in the transition from analog to digital television broadcasting, a process that could leave millions of Canadians without access to over-the-air television signals.
While the elimination of "free TV" would spark outrage in many communities, the most harmful effect of the slow migration will be felt in the competitiveness of Canadian telecommunications, not broadcasting.
The link between the digital television transition and telecommunications stems from the freed-up spectrum that will become available as broadcasters abandon their analog transmissions. This spectrum – known as the 700 MHz spectrum – opens up a host of possibilities for new innovation, competitors, and open Internet access.
The 700 MHz spectrum will lead to another spectrum auction that could open the door to further entrants into the Canadian wireless market. In fact, some speculate that some would-be bidders stayed out of the most recent AWS spectrum auction (which raised more than $4 billion in revenue for the government) in the hope of grabbing some of the 700 MHz spectrum, since it is viewed as technically superior (for example, it more easily penetrates walls, making it ideal for delivering wireless high-speed Internet services).
Industry Minister Tony Clement has the chance to dramatically reshape the Canadian wireless market by establishing a bold policy approach to the auction. For example, as pressure mounts to open up the Canadian market to foreign competition, this auction could provide the entry point.
By permitting foreign investors to bid for majority stakes in 700 MHz spectrum, the government could simultaneously invite increased competition and promote new investment in the Canadian marketplace. Moreover, the rules governing the use of the spectrum will also attract considerable attention.
In the United States, the Federal Communications Commission has adopted some "open access" requirements, mandating certain openness standards in the use of this spectrum.
For consumers tired of the "walled garden" approach of current providers that use both contracts and technology to lock in customers, open spectrum policies would spur innovation and heighten competition by facilitating greater consumer mobility and promoting the introduction of new services not tied to a single wireless provider.
In addition to the auctioned spectrum, there is the potential for further unused spectrum to be made available for public use. Known as "white spaces", this spectrum was previously used by broadcasters to ensure that their analog broadcasts did not interfere with one another.
A consortium of companies, including Google, Microsoft, and Dell, has argued that this spectrum can be safely used for other purposes. Rather than auctioning the white spaces, they recently persuaded the FCC that the public interest would be better served by allowing anyone to make use of it (as is the case today with spectrum used for WiFi signals). This would allow for the introduction of new services over the white spaces, such as broadband in rural communities or, in the words of Google co-founder Larry Page, "WiFi on steroids."
While broadcasters lodged objections to the white space plan, claiming the new uses could interfere with their digital broadcasts, last week the FCC formally gave the green light to the use of WSDs, or white space devices.
As the U.S. marches along on this policy front, Canada has not even left the starting gate. Indeed, it appears increasingly likely that the U.S. approach will be fully implemented by the time Canada gets its act together.
While that points to a carbon copy approach, it will ultimately fall to Clement to make the call and to set in motion policies that could change the way Canadians access broadcast, telecom, and Internet services.
Sirius-Liberty Deal Open-Ended
Jui Chakravorty Das and Yinka Adegoke
The battle for Sirius XM might not be over just yet.
As Liberty Media Corp agreed to inject up to $530 million in loans to Sirius XM for a 40 percent stake in the satellite radio operator earlier this month, the battle appeared finished.
The move was seen as fending off a takeover attempt by Charles Ergen, whose satellite TV company EchoStar Corp quietly amassed Sirius debt and ended up in talks to take over the company.
But a closer look at the agreement between Liberty Media, led by John Malone, and Sirius XM, led by Mel Karmazin, shows a few clauses that leave the deal's status open - at least for now.
The first credit agreement for a $250 million term loan and $30 million of purchase money loans between Sirius and Liberty Media is a fairly simple one.
But the second agreement provides $150 million and does not close immediately. That funding is subject to Sirius XM getting other credit agreements extended and Liberty buying $100 million of its debt.
This second loan is also contingent on Sirius's auditors not issuing any "going concern" qualification on the company's 2008 audited results -- a qualification that usually means a company is likely to go bankrupt without additional financing.
The deal with Liberty helped Sirius XM avoid a possible bankruptcy.
Liberty Chief Executive Greg Maffei said on a conference call on Wednesday that "some have suggested" a bankruptcy would be a positive for Sirius as it would allow it to renegotiate contracts with automakers and artists. He said Sirius was trying to renegotiate some contracts now to reduce costs.
"We're hoping Mel is as successful as he can be at doing some of these things...but we are positioned OK, we believe, if that does not come to pass," Maffei added.
In exchange for the loans, Liberty receives 12.5 million shares of preferred stock in Sirius, which become convertible into common stock once the deal receives antitrust clearance.
But the preferred stock is only issuable once the conditions to funding the second loan are satisfied, giving Sirius a period of time in which it can find another deal.
"If, prior to April 15, 2009, we receive an alternative proposal that our board concludes in good faith is a superior proposal...our board may terminate the investment agreement in order to transact the superior proposal," Sirius said in an 8K filing with the U.S. Securities and Exchange Commission.
"They have the option to bail on the whole thing," Todd Mitchell, analyst at Kaufman Bros., said.
Maffei told Reuters in an interview that while Sirius is not allowed to shop itself around before April 15, it is allowed to consider a higher bid from another party. Liberty would have the right to match such an offer, he said.
The deal's structure leaves the door ajar for EchoStar's Ergen, a longtime rival to both Karmazin and Malone. While it is possible that he could return with a "superior proposal," Maffei considers any third party involvement unlikely.
"It would be fairly expensive for someone to go over the top because there's a bunch of penalties and they'd have to pay off our debt," Maffei said, adding he was confident Liberty would complete the second stage of the deal with Sirius.
If Sirius were to accept another deal, the stock issuance and the second-phase loan would not occur. Liberty is also entitled to a $7 million termination fee and can demand that the first loan be repaid with a $14 million premium.
EchoStar declined comment. An external spokeswoman for Sirius also declined comment.
Even as Sirius is burdened with $3.25 billion in debt, it remains an attractive asset with 20 million subscribers. The only U.S. satellite radio provider, it is guaranteed to grow its subscriber base in new cars despite a plunge in car sales.
About half the new cars being sold have satellite radios in them and about half of those become satellite radio subscribers, Karmazin has said.
Maffei told Reuters Sirius XM was "a good company in unfortunate circumstances," citing tight credit markets and plunging auto sales.
A satellite radio and television partnership could also provide synergies: Maffei said Liberty would look at co-marketing DirecTV, the largest U.S. satellite television provider, and Sirius XM to each other's subscribers, do joint content deals, and consider a mobile video initiative.
Ergen's talks with Sirius earlier this month involved taking over the company as Karmazin held equally intense parallel discussions with Malone, sources have told Reuters.
"I'm surprised they did it," Mitchell said, referring to Liberty investing in Sirius.
"I think they took a defensive look because of Ergen's involvement and then maybe the numbers worked for them. If he hadn't served as catalyst, I'm not sure I would ever thought Sirius would end up with Liberty."
(Editing by Leslie Gevirtz)
Obama Proposes New Wireless-Spectrum Fee
Faced with a whopping $1.7 trillion deficit, President Obama is proposing tacking on a spectrum license fee to wireless operators to help generate revenue for the government.
The Obama administration's proposal was loosely outlined in the new budget plan for 2009 and 2010 submitted Thursday. In that plan, the administration proposes adding a new fee to be paid by wireless carriers that license wireless spectrum from the government.
These annual fees would start at $50 million in 2009 and jump to $200 million in 2010, Reuters reported. The fees would gradually increase over the next 10 years to $550 million per year, generating an estimated total of $4.8 billion over the next decade.
The proposed fees are in addition to license fees that operators have already paid the federal government as part of its wireless auctions. The Federal Communications Commission has been auctioning off wireless spectrum to phone companies and other entities since the 1990s. These auctions grant license holders exclusive rights to the spectrum in exchange for cash.
Over the years, these auctions have generated billions of dollars for the federal government. The most recent auction, which ended in March 2008, was for the 700 MHz block of spectrum that is being vacated by television broadcasters after the mandated digital TV transition. This valuable spectrum generated a record $19.6 billion.
But wireless spectrum is a limited resource. And the government is running out of airwaves to auction. In fact, the Obama administration predicts that it will only be able to generate about $4.8 billion in revenue from wireless auctions over the next 10 years.
Even though the additional fees could help the government halve the deficit by 2013 as well as help it fund several new spending initiatives, it's likely to be met with a great deal of resistance from mobile operators.
So far, none of the big four wireless carriers in the U.S.--AT&T, Sprint Nextel, T-Mobile USA and Verizon Wireless--has been willing to comment on the proposal. And the CTIA wireless-industry association said it's still looking into the matter.
"We are currently reviewing the details of the proposal and look forward to participating in the next stages of this issue," CTIA said in a statement.
Previous spectrum fee proposals have been strongly opposed by the wireless industry, and there's little reason to suggest that the industry would support them now. The big difference this time around is that a Democrat-controlled Congress could be more willing to support President Obama's plans.
More details about the proposal are expected later this spring when the administration releases a more detailed budget package. But any changes to the fee structure would require legislation. And my guess is that the wireless industry would fight hard against it.
FCC Proposes Fines Over Data Protection
Federal regulators slapped hundreds of small telecommunications providers for not abiding by new rules designed to protect consumer phone records, proposing more than $13 million in total fines.
The Federal Communications Commission proposed $20,000 fines on more than 650 small phone, pager and wireless providers Tuesday, accusing them of not filing paperwork that certifies they have put safeguards in place to protect customer phone data.
"I have long stressed the importance of protecting the sensitive information that telecommunications carriers collect about their customers," said Michael Copps, the FCC's interim chairman, in a statement. "The broad nature of this enforcement action hopefully will ensure substantial compliance with our [privacy] rules going forward as the Commission continues to make consumer privacy protection a top priority."
In April 2007, the FCC tightened privacy requirements on phone companies in response to consumer complaints about data brokers selling phone records they had obtained illegally through "pretexting," or obtaining information under false pretenses.
The agency required telecom companies to increase security of phone records, requiring customers to provide a password before receiving account information over the phone or online. Phone companies are required to notify customers when changes are made to their accounts or if their information has been improperly accessed.
Companies are required to file annual certifications that they have complied with those requirements.
The FCC said hundreds of small companies didn't provide the information in 2008, although it noted it was the first year the agency had required the paperwork. The agency warned that future noncompliance could bring "more severe penalties."
Obama Picks Leibowitz as FTC Chairman
President Obama plans to appoint current Federal Trade Commission member Jon Leibowitz to lead the agency, which partially enforces antitrust laws and has taken a recent interest in online advertising.
An administration official on Monday confirmed to CNET News that Leibowitz, a Democrat appointed to the five-person commission in 2004, would be nominated as chairman.
Liberal groups including the ACLU and U.S. PIRG last year called on the Obama administration to appoint a chairman who would take a more regulatory approach. More recently, many of those same groups criticized the FTC's view that self-regulation of online targeted advertising was sufficient, which Leibowitz also seemed to take issue with.
"Industry needs to do a better job of meaningful, rigorous self-regulation, or it will certainly invite legislation by Congress and a more regulatory approach by our commission," he said earlier this month.
In November 2007, Leibowitz suggested that Internet companies should take an "opt in" approach to cookies instead of the current "opt out" approach, a requirement that would have roiled the industry. He also suggested the idea of a "Do Not Track" list for Web surfers.
"Leibowitz will help transform what has been a largely anemic regulatory watchdog during the Bush years into an agency that sees its first priority as consumer protection," said Jeff Chester, executive director of the Center for Digital Democracy, a liberal group that advocates for more regulation. "Public interest groups such as mine appreciate that Leibowitz has called for tougher online privacy safeguards, and that his door has always been open."
The FTC under Leibowitz will also continue to address questions of anti-competitive practices in the technology sector, including in its ongoing investigation of Intel.
"Under Leibowitz's lead, we expect this investigation to proceed fairly and hope that the new chairman uses his position to investigate similar anti-competitive abuses by other companies," said Ed Black, the president and CEO of the Computer and Communications Industry Association. "His knowledge of high-tech and Internet issues is a huge plus."
On Monday, the U.S. Supreme Court dealt the FTC a bitter defeat when it declined to hear the agency's appeal of the unsuccessful Sherman Act antitrust case it brought against chipmaker Rambus. The case has lasted seven years and is now effectively over; the FTC initially alleged the company "threatens to undermine participation in industry standard-setting activities."
Leibowitz was one of two commissioners to dissent from the FTC's 2006 decision to allow Time Warner and Comcast to buy cable television systems from Adelphia Communications without conditions. He and commissioner Pamela Jones Harbour called for restrictions to keep the cable companies from discriminating against rival providers.
On the issue of Net neutrality, Leibowitz stood out from his colleagues in June 2007 when the FTC released a report stating no new laws were necessary. Leibowitz issued an opinion saying existing antitrust laws may not have been "adequate to the task" of Internet broadband regulation.
"Will carriers block, slow or interfere with applications?" Leibowitz asked at a public hearing held by the FTC in November 2006. "If so, will consumers be told about this before they sign up? In my mind, failure to disclose these procedures would be...unfair and deceptive."
Leibowitz previously worked as a lobbyist for the Motion Picture Association of America. Before that, he was chief counsel and staff director for a Senate antitrust subcommittee.
Plans for Leibowitz's nomination were first reported by Bloomberg.
CNET's Declan McCullagh contributed to this report
Telecoms Oppose Tighter Net Neutrality Rules for Stimulus Funds
Congress may have made its preferences for Net neutrality policies clear in the open access requirements included in the broadband provisions of the stimulus package, but the details of those requirements have yet to be drawn up.
It is too early to tell how the Commerce Department may shape its open access policy, a key Obama adviser said at a conference Thursday. Cable companies at the event, however, made it clear they feel any new policies would stifle innovation in the communications sector. Content creators are also ready to take on any further regulation, a Republican congresswoman said.
"If Congress or the (Federal Communications Commission) moves forward to aggressively regulate the Internet, Net neutrality advocates will soon confront some of my guitar-toting, NASCAR-loving songwriters ready to come to D.C.," Rep. Marsha Blackburn, a Republican from Nashville, said at a conference on "New Directions in Communications Policy." The event was sponsored by the Free State Foundation, a think tank that promotes free-market policies.
Noting the music industry's $7 billion economic impact on the Nashville area and the increasing growth in digital music sales, Blackburn said new Net neutrality regulations would inhibit the market from meeting the demand for new applications to access that music legally.
"We are only witnessing the tip of the iceberg of technological development," Blackburn said. "Policy makers must allow all of these great ideas to incubate."
The stimulus bill President Obama signed this month includes $7.2 billion to promote broadband access, including $4.7 billion that will be distributed through the Commerce Department's National Telecommunications and Information Administration. Those particular funds come attached to a requirement that recipients must adhere to "open access" principles.
Blair Levin, an informal Obama adviser who served as an official adviser during the campaign, said the requirement remains flexible.
"I don't think anybody knows what (the NTIA is) going to do because I don't think they know what they're going to," he said.
The policy may be influenced by the project proposals that the agency receives, Levin said.
"I don't think there's an algorithm that can tell you which projects will be chosen," he said. "That's why they need to set up a system that is transparent and flexible."
Levin said that he was struck by the amount of dissent within the telecom industry over the broadband provisions in the stimulus package. Different companies, he said, disagreed on just about everything from the right strategy for increasing broadband access to the number of "unserved" homes in the country.
"There are a lot of very competitive factors," he said. "They wanted to get money, but they mostly don't want their competitors to get money."
Representatives from Comcast, T-Mobile, AT&T, and Verizon were in agreement at Thursday's conference, however, that more Net neutrality regulation would do more harm than good.
"I still don't know that there's a problem out there that anyone is citing that compels big government action," said Thomas Tauke, executive vice president for policy at Verizon.
The industry representatives said they were hopeful the Obama administration's interest in expanding broadband access implied a shift in focus from market regulation to market expansion.
"When we talk about Net neutrality, our mind is focused on how we manage scarcity," Tauke said. "Shouldn't our public policy be to create abundance?"
Yet simply adding bandwidth will not accommodate the explosion of Internet activity the industry anticipates, said Robert Quinn, senior vice president for regulatory policy at AT&T.
He said the market needs "cutting edge management techniques in order to handle this explosion of data."
"We need to avoid extensive new regulations in the name of Net neutrality that are going to impede the evolution of these networks and disincent investments of these," he added.
Michael Savage Fighting The Fairness Doctrine
Talk Radio Networks’ Michael Savage and the Thomas More Law Center, a Michigan based national public interest law firm, have joined forces to fight the oft-discussed potential return of the "Fairness Doctrine."
Savage recently claimed on air that as a conservative, he is a minority in the media, and says the Fairness Doctrine would take him, Sean Hannity, Rush Limbaugh and other conservative voices off the radio.
"Whatever form its reinstatement may take, any limitation on the free speech rights of Michael Savage will result in an immediate legal action," Richard Thompson, President and chief counsel of the Thomas More Law Center. "The U. S. Supreme Court would most likely find any reinstatement of the Doctrine unconstitutional."
"With the stink of public corruption blanketing Washington, with our elected officials passing the single largest spending bill in our nation's history without even reading or debating it, with the increasing nationalization of our financial institutions, with almost dictatorial control of Congress by one political party, and with increasing signs we are becoming a socialistic country, Americans need more dynamic talk show hosts like Savage, not less," Thompson said.
"A regulation of speech motivated by nothing more than a desire to silence political opposition on controversial issues of public interest is the purest example of a law abridging the freedom of speech," said Thompson.
Conservative broadcasters have been raising the specter of the Fairness Doctrine in recent months, despite President Obama stating that he and his administration have no plans to reinstate the policy.
Saint Laurent Art Sale Brings In $264 Million
Despite the global economic crisis, a lot of money seems to be left over. On Monday, the private collection of Yves Saint Laurent and his partner became the most expensive one ever sold at auction, bringing in more than $264 million on the first night alone.
The only significant failure in the sale, one of six auctions of the collection being held over three days by Christie’s, was an apparently overpriced Picasso from the artist’s late Cubist period. The auction house pulled the work when bids stopped at 21 million euros ($26 million) on a painting that had been estimated at 25 million to 30 million euros.
But records were set for Matisse, Marcel Duchamp, Constantin Brancusi, James Ensor, Piet Mondrian and Giorgio de Chirico. The Matisse, a colorful painting from 1911 of a vase of cowslips on a carpet, sold for $40.9 million, double its estimate.
The auction unfolded in the cavernous Grand Palais in central Paris, where thousands of visitors lined up for hours over the weekend to get a chance to see the collection in what became a kind of temporary museum. Mr. Saint Laurent is regarded with great affection and awe here as a paragon of French style, and he evokes an era when no country could challenge French prominence in the world of fashion.
The Matisse was believed to have gone to an American, but Christie’s refused to identify the buyer. Few Matisse paintings of quality come on the market, and each of the three Matisse paintings did better than its estimates.
A remarkable painting by Ensor, “The Jealousy of Pierrot,” sold for $5.38 million. Thomas Seydoux, a Christie’s expert in Impressionist and modern art, described it as “the climax of Ensor’s work,” and noted that it last sold in 1987 for $700,000. “Those masterpieces are never on the market,” he said.
The Duchamp, “Beautiful Breath, Veil Water,” was a Dada hallmark. Its label depicts the artist dressed as a woman, Rrose Sélavy, a punning alter ego he created in 1920 in a photo taken by Man Ray. It sold for $10.1 million, more than six times the estimate, after a bidding war between two anonymous American collectors. “People had waited a long time for this to go on the market,” Mr. Seydoux said.
Mr. Seydoux accepted some blame for the Picasso’s failure. He said he had based his estimate “on the exceptional quality of that period, but I got carried away.”
Isabelle de Wavrin, deputy editor of BeauxArts magazine, said that while Mr. Saint Laurent loved the Picasso, “Picassos are not rare.”
“But everyone is looking for a good Matisse,” she added.
Jean-Marie Baron, an art dealer who bid on some art but could not afford to go high enough, said the sale suggested that to some extent, “the art market is still good and still strong. Almost everywhere they met the high estimate.”
Pierre Bergé, who was Mr. Saint Laurent’s business and personal partner for many years, said in a brief interview that he was very happy with the results. “But you have to know that I’m very cool about things every day,” he said.
He was more emotional later at a brief news conference. “The day Yves Saint Laurent died, I decided this collection had run its course,” he said. “It was something we created together.” Mr. Saint Laurent died last June at 71.
Mr. Bergé said he explored the possibility of creating a museum for Mr. Saint Laurent’s fashion and art collections but that the project proved too difficult. “Selling it was the only possible solution,” he said.
The proceeds, separate from the commissions that his own company, Pierre Bergé & Associés, will presumably share with Christie’s, are to go to the foundation he established with Mr. Saint Laurent, to various cultural projects, to charity and to found a new research center to combat AIDS. Mr. Bergé said he would keep the Picasso for his foundation.
French museums pre-empted the sale of three works: the record-setting painting by de Chirico, which is thought to be an allegory about the return of Napoleon; “At the Conservatory,” an Ensor satire; and “The Lilacs,” by Édouard Vuillard.
A rare wood Brancusi statue, “Madame L.R.,” originally owned by the artist Fernand Léger, sold for $33.3 million, and it was also a good night for the Mondrian market, with a 1922 composition in white, blue, yellow and black selling for $24.59 million, a record. A monochrome by Mondrian, whose art inspired one of Mr. Saint Laurent’s best-known collections, sold for more than $16.4 million.
The final prices include taxes and the commission paid to the auction house: 25 percent, excluding tax, on the first $26,000; 20 percent, excluding tax, from that amount up to $1.03 million; and 12 percent, excluding tax, on the rest. Presale estimates do not reflect commissions.
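Worked through on the reported tiers, the commission on a hypothetical hammer price looks like this (a sketch in Python; the rates are the ones reported above, while the function name and tier arithmetic are ours, not Christie’s):

```python
def buyers_premium(hammer_price):
    """Estimate the buyer's premium under the tiered rates reported
    above: 25% on the first $26,000, 20% from there up to $1.03
    million, and 12% on the remainder (all excluding tax)."""
    tiers = [(26_000, 0.25), (1_030_000, 0.20), (float("inf"), 0.12)]
    premium, lower = 0.0, 0.0
    for upper, rate in tiers:
        if hammer_price > lower:
            premium += (min(hammer_price, upper) - lower) * rate
        lower = upper
    return premium

# On a $2,000,000 hammer price:
#   26,000 * 0.25 + 1,004,000 * 0.20 + 970,000 * 0.12 = 323,700
print(round(buyers_premium(2_000_000)))  # 323700
```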
More than 1,200 buyers, dealers, collectors and wealthy art lovers were in their seats as Christie’s staff members took bids from those abroad on 100 telephone lines. Most of the buyers were said to be American and European.
The sale has five other portions over the next few days, including furniture, silver and Asian art. A dispute has unfolded over two Qing dynasty bronze animal heads, a rabbit and a rat, originally looted from an imperial palace in China but purchased legally. China has demanded their return, but a French court ruled Monday evening that the auction of the heads can proceed as scheduled on Wednesday.
The issue has become a heated one in China, stirring nationalist indignation. Mr. Bergé countered that the heads belonged to him, saying he would give them to China if Beijing would “observe human rights and give liberty to the Tibetan people and welcome the Dalai Lama.”
A Christie’s official then hastened to emphasize that while Mr. Bergé was entitled to his opinion, the auction house had the greatest respect for China and Chinese art. Mr. Bergé, 79, said: “They are my heads. I own them and I said what I meant.”
Before the sale, President Nicolas Sarkozy toured the auction offerings, and accepted from Mr. Bergé a legacy to France from Mr. Saint Laurent: a 1791 portrait of a child by Goya.
Maïa de la Baume contributed reporting.
Exiting Workers Taking Confidential Data with Them
As layoffs continue apace, a survey released on Monday shows what many companies fear--exiting workers are taking a lot more with them than just their personal plants and paperweights.
Of about 950 people who said they had lost or left their jobs during the last 12 months, nearly 60 percent admitted to taking confidential company information with them, including customer contact lists and other data that could potentially end up in the hands of a competitor for the employee's next job stint.
"I don't think these people see themselves as being thieves or as stealing," said Larry Ponemon, founder of the Ponemon Institute, which conducted the online survey last month. "They feel they have a right to the information because they created it or it is useful to them and not useful to the employer."
The survey also found a correlation between people who took data they shouldn't have taken and their attitude towards the company they are leaving. More than 60 percent of those who stole confidential data also reported having an unfavorable view of the company. And nearly 80 percent said they took it without the employer's permission.
Most of the data takers (53 percent) said they downloaded the information onto a CD or DVD, while 42 percent put it on a USB drive and 38 percent sent it as attachments via e-mail, according to the survey.
The survey also found that many companies seem to be lax in protecting against data theft during layoffs. Eighty-two percent of the respondents said their employers did not perform an audit or review of documents before the employee headed out the door and 24 percent said they still had access to the corporate network after leaving the building.
The survey was commissioned by Symantec, which offers software that helps companies protect against data loss by indexing databases and monitoring for patterns of word combinations that might be used by exiting employees to steal data. The Symantec software also can monitor outbound e-mail for confidential data and alert IT if large amounts of certain types of data, such as Social Security numbers, are being copied to removable storage devices.
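As a rough illustration of the pattern-matching idea behind such data-loss-prevention tools (this is not Symantec’s product; the regex, the threshold and the function names are invented for the example):

```python
import re

# Toy data-loss-prevention check: flag outbound text that contains
# many Social Security-style numbers, the kind of pattern a real DLP
# product would watch for in e-mail or removable-media copies.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def looks_like_bulk_ssn_export(text, threshold=10):
    """True if `text` holds at least `threshold` distinct SSN-shaped
    strings (the threshold is an arbitrary example value)."""
    return len(set(SSN_RE.findall(text))) >= threshold

sample = "\n".join(f"123-45-{i:04d}" for i in range(25))
print(looks_like_bulk_ssn_export(sample))                      # True
print(looks_like_bulk_ssn_export("quarterly report, no PII"))  # False
```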
Attackers Infect Ads with Old Adobe Vulnerability Exploit
Attackers inserted malware into ads in an apparent attempt to get users to download rogue anti-virus software, eWEEK finds. The malware authors attempted to exploit a patched vulnerability affecting Adobe Acrobat and Reader that is unrelated to recent security reports of a zero-day bug. eWEEK.com and other Ziff Davis Enterprise sites were affected, though the ads were taken down shortly after the situation was discovered and the site is now clean.
Attackers infected some advertisements on the eWEEK.com Web site Feb. 23 in an apparent attempt to get readers to download a rogue anti-virus application. eWEEK has found the exploit and removed the infected code from its Web site.
Although the exploit involved a bug affecting Adobe Reader and Adobe Acrobat, it is not related to the zero-day Adobe bug publicized Feb. 20, and is detected by Symantec as 'bloodhound.exploit.213.'
The infected code was found early Feb. 24 and the infected ads were removed from the eWEEK site within a short time. The eWEEK Web site is now working without any problems.
"The exploit in question did not compromise eWEEK.com or any Ziff Davis Enterprise Web sites," said Stephen Wellman, director of community and content for Ziff Davis Enterprise. "The attack was served through an advertisement and took advantage of certain advertising-serving codes and was not our fault. This vulnerability has been removed from all of our Web sites and we are taking steps to ensure that this does not happen again."
The eWEEK site itself was not hacked; however, code was hosted on certain advertisements that performed a redirect to a malicious Web site through a series of IFrames. The new URL led to an adult Web site, which attempted to load a PDF that exploits a known Adobe vulnerability. The vulnerability affects versions 8.12 and earlier and has been patched.
According to Websense, if the exploit is successful, a file named "winratit.exe" is installed in the user's temporary files folder with no interaction from the user and two additional files are dropped onto the user's machine.
"The host file is also modified so that if the user tries to browse to popular software download sites to remedy the infected machine, s/he is instead directed to a malicious Web site offering further rogue AV downloads," Websense said.
The Websense advisory also noted, "The name of the rogue AV application is Anti-Virus-1. If the user chooses to register the rogue AV, a connection is made to hxxp://[removed]-site.info/, which has been set up to collect payment details."
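The hosts-file tampering Websense describes can be spot-checked with a short script. A minimal sketch, assuming an illustrative list of watched download domains (the domain list is our assumption, not Websense’s):

```python
# Spot-check for hosts-file tampering of the kind described above:
# flag any hosts entry that names a well-known software-download
# domain, since a clean hosts file normally never mentions them.
WATCHED_DOMAINS = frozenset({"download.com", "microsoft.com", "symantec.com"})

def suspicious_hosts_entries(hosts_text, watched=WATCHED_DOMAINS):
    """Return (ip, hostname) pairs that remap a watched domain."""
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        parts = line.split()
        if len(parts) < 2:
            continue
        ip, names = parts[0], parts[1:]
        for name in names:
            if name.removeprefix("www.") in watched:
                hits.append((ip, name))
    return hits

# Example: a tampered entry redirecting a download site is flagged.
tampered = "127.0.0.1 localhost\n198.51.100.7 www.download.com"
print(suspicious_hosts_entries(tampered))  # [('198.51.100.7', 'www.download.com')]
```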
eWEEK uploaded a copy of the exploit code to VirusTotal, which reported that only six vendors were detecting the exploit—Symantec, BitDefender, GData, nProtect, Secure-Web Gateway and AntiVir.
It is unknown if other sites were affected Feb. 24 with a similar attack via infected ads.
"Hide-and-Seek" Death Case Irks Web Users
Chinese Internet users, asked to probe the death of a man in custody who police say ran into a wall playing hide-and-seek blindfolded, repaid the local government's faith by hacking into its website and leaving bizarre messages.
Authorities in the southwestern province of Yunnan invited Internet users last week to investigate the death of Li Qiaoming, 24, who died from a severe brain injury days after being sent to hospital from a detention center in Jinning county.
The death of Li, arrested for cutting down trees, had been widely questioned online.
Internet users hacked into the Jinning government website (jinning.gov.cn) on Tuesday, state media reported, replacing links with a series of odd phrases.
"Push-ups, buying sauce, hide-and-seek, the three master works of the martial world," the phrases said, repeated a number of times, according to a report carried on Xinhua news agency's website (www.xinhuanet.com). No explanation was provided.
Internet authorities had taken down the website's server, Xinhua said. The site was inaccessible on Wednesday morning.
Yunnan authorities have come under fire from media commentators and legal experts who have branded the investigation as shambolic and questioned its legality.
An investigation team including journalists and Internet users had submitted a 7,000-word report to authorities after touring the detention center and speaking with inmates this week, the China Daily said.
But the report did not shed any new light on the case, angering other Internet users who described it as "meaningless," the paper said.
Li's father had received 1,000 yuan ($145) in "consolation money" from local authorities, the Beijing News said in a separate report.
The paper quoted a Yunnan propaganda official as saying the investigation could yield a result "this week at the earliest."
(Reporting by Ian Ransom; Editing by Nick Macfie)
Terror Law Watchdog Lord Carlile Joins Clamour for Accused Hacker to be Tried in UK Instead of US
Lord Carlile, the independent reviewer of anti-terror laws, has written to the home secretary to ask her to enable computer hacker Gary McKinnon to be prosecuted in the UK rather than face extradition and a jail term in the US. The intervention comes as the director of public prosecutions, Keir Starmer QC, is considering whether to prosecute McKinnon, who has been diagnosed with Asperger's syndrome, under British computer misuse laws.
In a letter to Jacqui Smith, Lord Carlile, QC, writing in a personal capacity, states that "there is no doubt that Mr McKinnon could be prosecuted in this country given that the acts of hacking occurred within our jurisdiction".
Lord Carlile of Berriew suggests that McKinnon's condition makes a very strong case for any prosecution to take place in the UK. "Of especial importance is the opinion of Professor Simon Baron-Cohen, who describes the unusual and florid symptoms which place Mr McKinnon clearly within the category of Autism Spectrum Disorder, with potential injury to his health of a high order were he to be transferred to the US legal system," he said. He suggests that the argument that such a transfer to the US jail system could infringe European human rights laws, is "plain and strong".
He adds: "I would hope that the US authorities would be prepared to accept that the English legal system is capable of dealing with the case in full."
Lord Carlile is the latest among a growing number of political and legal figures who have urged the home secretary and the DPP to prosecute in the UK. About 100 MPs from all parties have signed an early day motion urging that McKinnon should not be extradited unless a guarantee is given that he would serve his sentence in the UK. Earlier this month, the mayor of London, Boris Johnson, called for the case to be tried in the UK. McKinnon's lawyers have also written to President Obama urging him to intervene.
McKinnon's hacking activities were carried out between 2001 and 2002 from a room in a north London flat. He entered the US defence department and Nasa computer systems, sometimes leaving messages such as "your security is crap".
A final judicial review of the case is scheduled for next month. The DPP's office confirmed last week that the matter was still being considered.
UK Hacker Loses New Round of US Extradition Fight
British prosecutors said Thursday they would not bring criminal charges against a London man accused of hacking into U.S. military computers. The decision is a blow to Gary McKinnon's attempts to avoid extradition to the United States.
U.S. prosecutors allege that McKinnon, 42, broke into 97 computers belonging to NASA, the Department of Defense and several branches of the military soon after the Sept. 11, 2001, attacks. McKinnon says he was looking for evidence of UFOs.
British and European courts have rejected repeated legal attempts by McKinnon's lawyers to block his extradition. Last month McKinnon offered to plead guilty to a criminal charge in Britain to avoid facing trial in the United States.
But the Crown Prosecution Service said Thursday that the case was best prosecuted in the U.S.
"The bulk of the evidence is located in the United States, the activity was directed against the military infrastructure of the United States, the investigation commenced in the United States and was ongoing, and there are a large number of witnesses, most of whom are located in the United States," said Alison Saunders, head of the service's organized crime division.
Saunders said the charge McKinnon had admitted, obtaining unauthorized access to a computer, was far less serious than the U.S. allegations against him.
"These were not random experiments in computer hacking, but a deliberate effort to breach U.S. defense systems at a critical time which caused well documented damage," she said.
U.S. prosecutors allege that McKinnon's hacks shut down the U.S. Army district responsible for protecting Washington, D.C., and cleared logs from computers at Naval Weapons Station Earle in northern New Jersey, which tracks the location and battle-readiness of U.S. Navy ships.
The hacker was caught in 2002 when investigators traced software used in the attacks to his girlfriend's e-mail account. If he is extradited to the United States, he will face trial on eight charges of computer fraud. Each count could bring a sentence of 10 years in prison and a $250,000 fine, but U.S. prosecutors have said he would likely receive a much lighter sentence.
His lawyers will go to court again next month to argue that McKinnon should not be extradited because he has Asperger's syndrome, a form of autism. They say McKinnon is likely to become suicidal if he is removed to the U.S. away from family and familiar surroundings.
Christopher Nolan, Irish Author, Dies at 43
Christopher Nolan, an Irish writer who, mute and quadriplegic since birth, produced a highly praised volume of verse and short stories at 15 and went on to publish a prize-winning autobiography, “Under the Eye of the Clock,” died Friday in Dublin. He was 43 and lived in Sutton, near Dublin.
His death was confirmed by a condolence message from the president of Ireland, Mary McAleese. His family told the Irish and British press that he died after food became trapped in his airway.
Oxygen deprivation during a difficult delivery left Mr. Nolan physically helpless, able to communicate with family members only through eye movements. At 11, supplied with a new drug to relax his neck muscles, he began writing with a “unicorn stick” strapped to his forehead, pecking a letter at a time on a typewriter as his mother held his chin with her hands.
The brain that one doctor had predicted would remain infantile turned out to contain a distinctive literary voice awaiting release.
“My mind is like a spin-dryer at full speed, my thoughts fly around my skull while millions of beautiful words cascade down in my lap,” he told The Observer of London in 1987. “Images gunfire across my consciousness and while trying to discipline them I jump in awe at the soul-filled bounty of my mind’s expanse.”
Christopher John Nolan was born in Mullingar, Ireland, “a gelatinous, moaning, dankerous baby boy,” as he put it in a poem. His parents, Joseph and Bernadette, worked a small farm in Corcloon, about 50 miles west of Dublin, and his father brought in extra income by working part time as a psychiatric nurse.
To keep the boy’s mind stimulated, his father told stories and read passages from Joyce, Beckett and D. H. Lawrence. His mother strung up letters of the alphabet in the kitchen, where she kept up a steady stream of conversation. His sister, Yvonne, two years older, sang songs and acted out skits. All three survive him.
“I was wanted dearly, loved dearly, bullied fairly and treated normally,” Mr. Nolan told The Christian Science Monitor in 1988.
After selling their farm, Mr. Nolan’s parents moved the family to a Dublin suburb in 1972 so that Christy, as he was called, could attend a remedial school. In 1979 he transferred to a local comprehensive school, where his classmates included members of the rock group U2. Their 2004 song “Miracle Drug,” with lyrics by Bono, was about Mr. Nolan.
With the unicorn stick, Mr. Nolan “gimleted his words into white sheets of life,” as he put it in “Under the Eye of the Clock.” Liberated, he spent feverish hours at the typewriter. “I bet you never thought you would be hearing from me!” he wrote to an aunt and uncle. In 1981 he published “Dam-Burst of Dreams,” a collection of poems, short stories and plays that impressed critics with its acrobatic wordplay and striking metaphors.
He enrolled in Trinity College Dublin, but left after a year to complete “Under the Eye of the Clock” (1987), an autobiography told in the third person through a narrator named Joseph Meehan. A best seller in Britain and the United States, it made good on the promise of his first book and won the Whitbread Prize, beating out works by the poet Seamus Heaney and the biographer Richard Ellmann.
Mr. Nolan then spent more than a decade writing his first novel, “The Banyan Tree” (1999), the multigenerational story of a dairy-farming family in his native county of Westmeath, seen through the eyes of its aging matriarch. It was inspired, he told Publishers Weekly, by the image of “an old woman holding up her skirts as she made ready to jump a rut in a field.” At his death, he was at work on a second novel.
A prominent Los Angeles producer wanted to make a film of Mr. Nolan’s life story. Mr. Nolan turned the offer down.
“I want to highlight the creativity within the brain of a cripple,” he wrote to the producer, “and while not attempting to hide the crippledom I want instead to filter all sob-storied sentiment from his portrait and dwell upon his life, his laughter, his vision, and his nervous normality. Can we ever see eye-to-eye on that schemed scenario?”
Philip José Farmer, Daring Science Fiction Writer, Dies at 91
Philip José Farmer, a prolific and popular science fiction writer who shocked readers in the 1950s by depicting sex with aliens and challenged conventional pieties of the genre with caustic fables set on bizarre worlds of his own devising, died Wednesday. He was 91 and lived in Peoria, Ill.
His official Web site, http://pjfarmer.com/, announced his death, saying he had “passed away peacefully in his sleep.”
Mr. Farmer’s blend of intellectual daring and pulp-fiction prose found a worldwide audience. His more than 75 books have been translated into 22 languages and published in more than 40 countries. Though he wrote many short stories, he was best known for his many series of multiple novels. These sprawling, episodic works gave him room to explore the nuances of a provocative premise while indulging his taste for lurid, violent action.
In his Riverworld series Mr. Farmer imagined a river millions of miles long on a distant planet where virtually everyone who has died on Earth is physically reborn, strong and vital, and given a second chance to make something of life.
In the first of the series, “To Your Scattered Bodies Go,” a reborn character discovers that his “skin was smooth, and the muscles of his belly were ridged, and his thighs were packed with strong young muscles.”
“He no longer had the body of the enfeebled and sick 69-year-old man who had been dying only a moment ago. And the hundred or so scars were gone.”
In his Dayworld series, an overpopulation crisis on Earth has been relieved by a technical fix: each person spends one day a week awake and the other six days in suspended animation. In his World of Tiers series, mad demigods create pocket universes for their own amusement, only to face rebellion from their putative creatures.
In a genre known for prolific writers, Mr. Farmer’s output was famously prodigious. At one point in the 1970s he had 11 different series in various stages of completion. Even some of his admirers said he wrote too much too fast. The critic Leslie Fiedler said that his work was sometimes sloppily written but added that was a small price to pay for the breadth of Mr. Farmer’s imagination.
Mr. Farmer made no apologies for his excesses. “Imagination,” he said, “is like a muscle. I found out that the more I wrote, the bigger it got.”
Philip José Farmer was born Jan. 26, 1918, in North Terre Haute, Ind. He grew up in Peoria, where his father, a civil engineer, was a supervisor for the power company. A voracious reader as a boy, Mr. Farmer said he resolved to become a writer in the fourth grade. After washing out of flight training in World War II, he went to work in a steel mill while attending Bradley University in Peoria at night and writing in his spare time.
His first success came in 1952 with a story called “The Lovers,” about a man seduced by an alien with an unusual reproductive system. The story was rejected by the two leading science fiction editors; both said that its graphic description of interspecies sex made them physically ill. Published in a pulp magazine called Startling Stories, the story won Mr. Farmer his first Hugo as “most promising new writer.”
Emboldened, he quit his job to become a full-time writer. Entering a publisher’s contest, he won the $4,000 first prize for a novel that held the germ of his Riverworld series. But an unscrupulous editor failed to deliver the money, and the manuscript was lost. Struggling financially, Mr. Farmer left Peoria in 1956 to become a technical writer. He spent the next 14 years working for defense contractors, from Syracuse, N.Y., to Los Angeles, while continuing to write science fiction on the side.
With the loosening of social taboos in the 1960s, Mr. Farmer emerged as a major force in the genre. In a 1966 story set on Riverworld, one of the resurrected is a resentful Jesus, angry that he had been deceived about the nature of the afterlife.
Mr. Farmer won a Hugo for his 1967 novella “Riders of the Purple Wage,” a satire on a cradle-to-grave welfare state, written as an exuberant pastiche of James Joyce’s “Ulysses.” His 1971 novel “To Your Scattered Bodies Go” also won the Hugo.
After moving back to Peoria in 1970, Mr. Farmer published 25 new works over the next decade. A 1975 novel, “Venus on the Half-Shell,” created a stir beyond the genre. The jacket and title page identified the author only as Kilgore Trout, a fictional character who appears as an unappreciated science fiction writer in several of Kurt Vonnegut’s novels. Although Mr. Farmer claimed he had permission for this playful hoax, Vonnegut was not amused to learn that some reviewers not only concluded that he had written “Venus on the Half-Shell” but that it was a worthy addition to the Vonnegut canon.
Mr. Farmer also wrote full-length, mock-scholarly “biographies” of Tarzan and Doc Savage, two of the pulp heroes whose stories had inspired him to become a writer.
Mr. Farmer had his detractors. “A humdrum toiler in the fields of science fiction,” Christopher Lehmann-Haupt wrote in The New York Times in 1972. But Mr. Fiedler saw in Mr. Farmer’s approach to storytelling a “gargantuan lust to swallow down the whole cosmos, past, present and to come, and to spew it out again.”
In the Riverworld series, for example, Mr. Farmer resurrected not just historical personages like Samuel Clemens and the explorer Richard Francis Burton but legendary figures like Odysseus and Gilgamesh.
He is survived by his wife, Bette, his son, Philip, his daughter, Kristen, and several grandchildren and great-grandchildren.
An agnostic from the age of 14, Mr. Farmer was ambivalent about humanity’s hunger for life after death. “I can’t see any reason why such miserable, unhappy, vicious, stupid, conniving, greedy, narrow-minded, self-absorbed beings should have immortality,” he said in Science Fiction Review in 1975.
But he added, “When considering individuals, then I feel, yes, this person, that person, certainly deserves another chance.” Life on this planet, he said, “is too short, too crowded, too hurried, too beset.”
Kelly Groucutt, Electric Light Orchestra Bass Player, Dies at 63
Kelly Groucutt, a former bass player with the 1970s rock hitmakers Electric Light Orchestra, died on Feb. 10 in Worcester, England. He was 63.
Mr. Groucutt’s management said his death followed a heart attack.
Formed in Birmingham, England, in 1971 by the musicians Jeff Lynne and Roy Wood, Electric Light Orchestra, also known as E.L.O., combined rock ’n’ roll with orchestral arrangements complete with string sections, choirs and symphonic sweep.
Mr. Groucutt joined the group in 1974 after leaving his previous band, Sight and Sound. He played bass and sang in E.L.O.’s heyday as one of the world’s biggest rock acts. E.L.O. had a string of hits in Britain and the United States in the 1970s and early ’80s, including “Livin’ Thing,” “Mr. Blue Sky” and “Don’t Bring Me Down.”
Mr. Groucutt left the band in 1983 but later toured with several successor acts, including E.L.O. Part II and the Orchestra.
He is survived by his wife, Anna, and four children.
In Innovation, U.S. Said to Be Losing Competitive Edge
The competitive edge of the United States economy has eroded sharply over the last decade, according to a new study by a nonpartisan research group.
The report by the Information Technology and Innovation Foundation found that the United States ranked sixth among 40 countries and regions, based on 16 indicators of innovation and competitiveness. They included venture capital investment, scientific researchers, spending on research and educational achievement.
But the American economy placed last in terms of progress made over the last decade. “The trend is very troubling,” said Robert D. Atkinson, president of the foundation.
Measuring national competitiveness and the capacity for innovation is tricky. Definitions and methods differ, and so do the outcomes. For example, the World Economic Forum’s recent global competitiveness report ranked the United States first. Much of the forum’s report is based on opinion surveys.
A report last year by the Rand Corporation concluded that the United States was in “no imminent danger” of losing its competitive advantage in science and technology.
The new report, published on Wednesday, offers a more pessimistic portrait. Its assessment is in line with a landmark study in late 2005, “Rising Above the Gathering Storm,” by the National Academies, the nation’s leading science advisory group. It warned that America’s lead in science and technology was “eroding at a time when many other nations are gathering strength.”
President Obama has often said that in the future, international prosperity will depend on the United States becoming an “innovation economy.” The administration’s economic recovery package includes added spending for areas favored by innovation policy advocates, including higher research and development spending and funds for high-technology fields like electronic health records. But the administration has no coordinated innovation agenda.
Some countries, including Singapore, Taiwan, Finland and China, are pursuing policies that are explicitly designed to spur innovation. These policies typically try to nurture a broader “ecology of innovation,” which often includes education, training, intellectual property protection and immigration. This is in contrast to the industrial policy of the 1980s in which governments helped pick winners among domestic industries.
The foundation study, according to John Kao, a former professor at Harvard Business School and an innovation consultant to governments and corporations, is an ambitious effort at measurement. He called its conclusions “a wake-up call.”
In the foundation report, unlike some competitiveness studies, results were adjusted for the size of each economy and its population. Consequently, the United States ranked sixth in venture capital investment (Sweden was first); fifth in corporate research and development spending (Japan led); and fourth in science and technology researchers (again, Sweden was first).
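The size adjustment matters: a large economy can lead in raw totals while a smaller one leads per capita, which is why the foundation's rankings differ from unadjusted ones. A quick sketch with invented country names and figures:

```python
# Made-up figures showing why size-adjusting indicators reorders
# rankings, as in the foundation's methodology. All numbers invented.
data = {                    # (VC investment in $B, population in M)
    "Bigland": (40.0, 300),
    "Smallia": (3.0, 9),
}
by_raw = sorted(data, key=lambda c: data[c][0], reverse=True)
by_per_capita = sorted(data, key=lambda c: data[c][0] / data[c][1], reverse=True)
print(by_raw[0], by_per_capita[0])  # Bigland Smallia
```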
Over all, the most innovatively competitive nation was Singapore, which embarked on a national innovation strategy years ago, investing heavily and recruiting leading scientists and technologists from around the world.
Mr. Atkinson of the foundation said the United States should follow the example individual states have been setting for some time: they run government programs to attract investment and talent and to improve the work force skills of local people.
The study’s specific recommendations include federal incentives for American companies to innovate at home, ranging from research tax incentives to work force development tax credits. Public investments and regulatory incentives can accelerate the use of information technology in health care, energy systems, transportation, government and education.
How Do We Fix Crappy U.S. Broadband?
According to a recent study, broadband access in the U.S. has dropped to 19th place worldwide. The recent passage of the stimulus bill will provide about $7 billion to improve it. But why is U.S. broadband so crappy in the first place? And can government intervention improve it?
For a little insight, I sat down with Emily Green, CEO of the Yankee Group, a Boston-based consultancy that specializes in connectivity. The consultancy advises network providers, manufacturers, media companies and financial services companies, and also specializes in helping those companies deal with public regulations.
The Yankee Group recently wrote an open letter to President Obama about the need for an "anywhere" network. In it, it argues that a Federally-motivated expansion of wired and wireless communications is one of the most vital components of economic recovery, community service, and improved health care and education.
Is broadband access in the U.S. really that bad?
There was a note in The New York Times the other day that sent me around the bend. There was a convocation of broadband industry folks a few days ago, and they came out and said that if you looked at the right metrics, the U.S. is actually number one in broadband access. Anyone who says that is engaging with delusional metrics. For a country that's as advanced as we are, and has provided so much leadership in the commercialization of the Internet, our deployment of broadband technology is pathetic and embarrassing.
What's the problem?
We're not short on ideas. We're short on people's understanding of the importance of a comprehensive, seamless, high-capacity digital network. A lot of articles in the media discuss the stimulus's "shovel-ready" projects. But we need a network that is digital-worker ready. If we had that, we could re-shore some of the jobs that are leaving the country. We are moving to a service-based economy, away from a manufacturing economy. The projects in the stimulus that will have the longest impact won't be the bridges and roads.
But to get that kind of economy, do we need the government to work with private companies? Many ISPs won't even divulge maps of their coverage to regulators.
There is great economic benefit for the whole country if we have an expansive, higher capacity network. But for these companies, holding information close to the chest is genetic; it's an attitude born of long habit. It's as if they're saying, 'I'm not sure what will create my competitive advantage, but information is power--so I'll withhold as much as possible.' It's not a very Google-era perspective.
Do they have good reason to protect that information?
This information is too critical to stay private. Here's an exact analogy: Just because many highways in the U.S. are managed by independent companies, that doesn't mean we let them withhold information about where those roads go, or which facilities are at each stop. We're at the stage now where information highways are just as important as our physical highways.
In your letter, you say we need a "public safety broadband network" for police and fire squads. Do we really need to get the Federal government involved in things like this? Can't states and municipalities hash this out, whenever the market can't?
The Federal government should be setting examples and pioneering things. What we really need is coherence, so that these networks--police, fire, emergency services--can talk to each other. The Federal government needs to come in and say, 'there's too much risk involved here. We need to provide leadership.' After they establish the model, the states and cities can refine it.
But the government can't even handle the transition from antenna TV to digital.
Delaying the DTV transition was a mistake. Any change of this magnitude is never going to go smoothly. The value of returning that white space to the American people is huge; that space could be one of the vehicles for improving broadband access across the country, and I think that's a lot more important than television programming. Broadband access is correlated to economic benefit in a way that TV is not.
Some studies have shown that rural residents without broadband express very little interest in getting high-speed Internet.
I understand there are people who don't think they need it. There are also people that think they don't need to exercise or drink milk. We shouldn't let that make us self-satisfied about where we are in terms of connectivity.
Curricula in schools are devised by people given the authority to decide what's best for our children. We should be mandating connectivity and skills development too. Luckily, the stimulus package has funding for access points in communities and educational programs for broadband usage.
You say in your letter that the government should create some "initial applications" for the new white space being vacated by the broadcast TV stations, to serve as examples to private companies. Can't we just let the market do its work? Won't Microsoft and Google build on that white space anyway?
The government needs to show some real leadership by moving its activities--health, education--onto an anywhere network. The reason networks are so efficient is that they diminish the relevance of your physical location; for example, you don't have to drive down to the DMV if you can do everything online. If the government can challenge itself to move its governance online, that commitment will help our whole economy move onto the network.
The letter to President Obama talks about building public housing with fiber optic connectivity. How do we mandate things like this and make sure that we're using the right technology? Is fiber, for example, definitely the best?
It's not the only vehicle--you can get broadband over cable, telephone, power lines--but it's the best technology we have today. When the government builds public housing, there are requirements for things like green space. Why shouldn't there be requirements for letting inhabitants participate in an anywhere network? This funding should be technology agnostic, but there are technologies out there today that we should be using.
Femtocells dramatically increase connectivity in an economical way; it's an extremely exciting technology that could expand the reach of wireless technology. One of the biggest costs for network providers is mounting equipment around a city. These operators are very excited about people using femtocells, but they're still trying to figure out how the business model works. They'll be great for schools and hospitals.
Does Obama understand the need for an "anywhere" network?
He certainly was great during the campaign about promulgating broadband for all, but less than 1% of the stimulus package is for broadband. Maybe we don't know how to use any more than 1% responsibly at this point. But over the next eight years, we need to tip the balance in favor of service economy investment.
CC0: Waiving Copyrights
The CC0 (or CC Zero) tool, which has been in the works since December 2007, was recently and quietly released to the public as a full version 1.0.
CC0 is not so much a license as it is a waiver. It is an attempt by the Creative Commons organization to improve upon its public domain dedication system, both by making it more internationally focused and by rectifying many of the challenges and problems that arise when one tries simply to place a work in the public domain.
The idea is that, rather than licensing your work with certain terms and restrictions, you are instead waiving as many of your rights as possible, including all related rights (including moral rights). Though it isn’t the same as placing a work in the public domain, it would, theoretically, have much the same effect.
The question is how much the waiver will be used, and whether webmasters, many of whom are already wary of the terms CC licenses place on their work, will be willing to waive all of their copyright interest.
How it Works
Obtaining a CC0 waiver for your site is much the same as getting a regular CC license: you visit the CC0 page and input the information for the work you want to license (note: this is optional but, as with CC licenses, the data you put in is embedded in the license itself).
The one element that is different is that the CC0 waiver has an active “clickwrap” step that requires users to agree both that they waive their copyright interest and that they have read and understood the terms. This is, presumably, to ensure that there are no misunderstandings and that no one who doesn’t intend to waive their rights uses the system. It also likely helps with enforceability, since the person waiving their rights has electronically “signed” a contract saying so.
However, even after that, the process then again does one last check to ensure that the user is certain they wish to waive their rights and cautions artists that depend on copyright for their income against using the service.
Once you’ve completed that, you are presented with a familiar set of HTML and button options for marking your work. You can then take that code and paste it into your site, or otherwise affix it to your work. Until you do, the waiver has not been completed: the CC0 procedure is NOT a registration process, and nothing is stored by the Creative Commons Organization.
The most common button will look like the one to the left (or above in your RSS reader). Please note that this is meant only as a sample of the CC0 button and is not intended to indicate that this article or anything on this site is licensed under CC0. All content on PT is under the BY-SA license.
The Advantages of and Problems with CC0
Though CC0 isn’t designed to replace the public domain dedication service, it is designed to improve upon it. The CC0 system works better internationally, is likely more legally sound (since one cannot dedicate works to the public domain in many countries, and there are questions about doing so in the U.S.) and, if the icon and its meaning become recognizable enough, clearer.
The problem is that there hasn’t exactly been a rush to use the public domain dedication system, and it hasn’t been the legal issues that have kept others at bay. Those who have used CC licenses have overwhelmingly favored other licenses, and those who have wished to place their work in the public domain have usually just done so without the aid of CC.
Though the CC0 waiver system is a potentially useful tool for those who want to waive their rights, and certainly preferable in most cases to either a CC Public Domain Dedication or a similar dedication of one’s own, it seems unlikely that the “license” will catch on in a big way, barring a major shift in the attitudes of writers and artists.
It seems that, for most who want to give up most of their rights, the CC-BY license is more than open enough.
There’s little doubt that the CC0 waiver system is a major step forward for the Creative Commons Organization in terms of its public domain efforts. Even though it isn’t a true public domain dedication (it waives rights only as far as they can be waived; moral rights, in many countries, cannot be waived outright), it opens up what is likely as close to a public domain option as is practical under the current legal climate.
Though it doesn’t seem likely that this license will be widely used, getting willing copyright holders to license their works with the terms they want, or no terms at all, is a big part of Creative Commons’ effort, so having this waiver available likely means more than the number of likely licensors would indicate.
To say the least, it is an interesting project that has been over a year in the making. It is nice to see it reach its version 1.0.
Lawyers in love
John McCain to Champion Fair Use in Jackson Browne Suit?
A federal judge will allow rocker Jackson Browne's copyright suit against John McCain to proceed, rejecting McCain's argument that he wasn't responsible for the ad—and that the suit aims to chill political speech.
Former presidential contender Sen. John McCain (R-AZ) clearly wishes Jackson Browne would just Take It Easy. Before the Deluge of ballots that handed Barack Obama the presidency in November, the Ohio Republican Party used Browne's hit "Running on Empty" in a YouTube ad claiming Obama lacked a plan to lower gas prices, prompting the left-leaning rocker to sue for copyright infringement and "false association." McCain has argued that he's not responsible for the ads, and that the suit is an effort to squelch protected political speech. But now Browne's Lawyers in Love with federal judge Gary Klausner, who this weekend rejected those arguments and allowed Browne's suit to proceed.
Browne's suit comprises both a garden-variety copyright claim—ordinary performance of songs can be cleared through a standard ASCAP license, but use in an ad requires a separate "sync license" the Ohio GOP didn't obtain—and a pair of "publicity" and "false association" claims, essentially arguing that McCain's use of his song created a misleading impression that Browne endorsed his candidacy. In November, McCain filed a declaration asserting that he "was not involved at all in any way in the writing, creation, production, distribution or dissemination of the video"—and indeed, didn't even know it existed until the suit was filed, meaning Browne ought to take it up with the Ohio GOP. His attorneys also sought to invoke California's anti-SLAPP (Strategic Lawsuits Against Public Participation) statutes, which are meant to bar litigation aimed at suppressing political speech. In an order issued Friday, Judge Klausner rejected both claims.
The Ohio Republican Party has, ironically, been dropped from the California suit for jurisdictional reasons. But Klausner agreed with Browne's lawyers that McCain could be held liable for the party's actions, even if he was unaware of the disputed video, because the party was acting as his agent—just as a corporation might be on the hook for (say) discriminatory hiring by an employee.
As for the anti-SLAPP claim, the judge agreed that Browne's "publicity" argument was based on "protected activity"—political speech on a matter of public interest. But anti-SLAPP statutes allow litigation in such cases if the plaintiff can show "probability of success" on the underlying claim—a condition Klausner interpreted as requiring only a "mere possibility" of success. That ought to be a little worrying, because it would seem to imply that even a protected "fair use" of a copyrighted work for a political purpose could be subject to "right of publicity" and "false association" suits if the owner of the work disagrees with the message being promoted.
In October, the McCain/Palin campaign complained to YouTube that copyright takedown notices were being improperly used to chill political speech, arguing that the campaign's videos had been removed despite containing only "transformative," non-commercial fair uses of copyrighted material. Rather than urging the video-streaming site to be more circumspect about responding to such notices across the board, though, the campaign requested that YouTube extend special scrutiny to takedown notices aimed at political campaigns.
In the wake of Friday's order, Browne attorney Lawrence Iser called it a "solid victory for songwriters and performers and reflects an affirmation of their intellectual property rights and their freedom from being conscripted as involuntary endorsers of political candidates and campaign messages."
EMI Joins Suit Against SeeqPod
EMI Music Group, with subsidiary Capitol Records, has joined in the lawsuit against Internet music search engine SeeqPod, joining Warner Music Group, which originally took action last January. The EMI suit was filed in New York, while the WMG filing was in California.
EMI has also expanded the scope of the dispute by naming several SeeqPod employees and investors as defendants, not just the company itself. CEO Kasian Franks, founder and chief scientist Raf Podowski and investor Shekhar Lodha of eSynergi Ventures were specifically targeted.
A SeeqPod spokeswoman says the company will continue to operate as usual, and defends the service as legal. She said SeeqPod is hoping to ultimately strike partnership deals with the major labels.
EMI has also filed suit against a music-on-demand site called Favtape, which relies on the SeeqPod technology to deliver music to users.
The AP's "Hot News" Lawsuit Lives On; are Scoops "Quasi-Property?"
A New York federal judge ruled Tuesday that The Associated Press can sue its competitors not merely for copyright infringement, but for a "quasi property" right in the news known as the "hot news" doctrine. See the AP's own coverage.
Well, let it not be said that I fell down on the job when "hot news" crossed my beat. I took a look at both sides' motions and the judge's ruling to learn more about the case. The doctrine raises some questions for me about journalistic practice and ethics. And, it has some interesting history to boot.
According to 2nd Circuit law quoted in the order in this case, a "hot news" misappropriation claim is viable when:
(i) a plaintiff generates or gathers information at a cost;
(ii) the information is time-sensitive;
(iii) a defendant's use of the information constitutes free riding on the plaintiff's efforts;
(iv) the defendant is in direct competition with a product or service offered by the plaintiffs;
(v) the ability of other parties to free-ride on the efforts of the plaintiff or others would so reduce the incentive to produce the product or service that its existence or quality would be substantially threatened.
Personally, I can't see how all five of these don't apply to any news organization that's writing a story that follows a "scoop" by a competitor.
It's quite common for editors to ask reporters to "match" a story that has been published or broadcast by a competitor by re-tracing the facts, and often the sources, of the "scoop." As long as it's all re-reported and re-written, that's fair game. But the criteria of a "hot news" claim are still met: newsgathering still costs money, the information is still time-sensitive (that's why we're rushing to get the second-day story), and it could still be thought of as "free-riding" if the second news outlet is just publishing a do-over of the themes, sources, and ideas of the first article—and such second-day stories could reduce the incentive to get scoops. (Although "substantially threatened" might be a reach.) It's all a bit theoretical, since individual scoops aren't really worth that much money, but it does seem like "hot news" misappropriation claims could allow any scooper to sue the still-competing "scoopee."
Now in this case, the AP argues that the defendant, All Headline News (AHN), has no "newsgathering" operation at all, and that it just re-writes AP articles shortly after they're published on various Internet sites. But that seems irrelevant to the analysis presented in this order.
The History of "Hot News"
The "hot news" doctrine that The Associated Press now wants to enforce is actually a product of a much earlier AP litigation. "Hot news" originates in a lawsuit that AP brought 90 years ago against a competing news service, International News Service (INS), which was owned by Hearst and later became part of United Press International (UPI). "In 1918, INS was unable to provide its clients with news stories from the war zones because, having been accused of violating wartime censorship restrictions, it was barred from use of the British and French mail and cables," writes the AP in its brief against AHN. INS solved that problem by grabbing early editions printed in AP newspapers and sending them to its own clients. INS even bribed employees of the AP and AP member newspapers to get AP's news before it was published. (And yet, the AP notes—"Unlike AHN, INS was a real news service, with reporters who gathered the news.")
Ultimately, the Second Circuit found that AP had a property right, separate from copyright, in the news that it sold, "arising from the labor and expense involved in its gathering and disseminating that news." This right could only be used against competitors and only lasted as long as the news had commercial value. The case then went to the Supreme Court, which voted 5-3 in favor of the AP's "quasi property" and upheld the 2nd Circuit decision.
Justice Brandeis dissented.
That decision is no longer federal law, for reasons I won't get into, but it has been integrated into the common law in several states, including New York, where this case is being litigated. AHN argued that the dissenters' view was ultimately victorious, because of a later decision, Feist Publications. In that case, the court made it clear that it was perfectly legal to copy a phone book, which held uncopyrightable facts, regardless of how much hard work goes into making a phone book. (And the AP does go on in this case about the time and expense of its newsgathering operation.)
"The views of Justices Holmes and Brandeis have ultimately prevailed, and INS has been filed away as an accident of history. To the extent any state law still attempts to breathe life into the case or its doctrine, it too must fall."
AHN also argued that the "hot news" doctrine was just meant to apply to bad actors like INS, which bribed AP employees and intercepted AP telegraph messages. Successful "misappropriation" claims involved confidential information not yet released to the public, and recent court rulings have only found misappropriation where there was "fraud or deception, or an abuse of a fiduciary or confidential relationship." It argued that despite all the AP's rhetoric about its newsgathering operation, all it had was a copyright claim (which is still alive, and AHN has said it will contest vigorously.)
Interestingly, AHN mentions in a footnote the AP's controversial copyright campaign against bloggers, even citing BoingBoing, in a July 2008 motion: "Within the past month, the AP has apparently sent threats to various third parties, such as bloggers, for doing nothing more than copying one or two paragraphs of AP content (which virtually anyone would consider a fair use under copyright law). The AP has also apparently recently begun a “licensing” program in which it demands extortionate royalties from anyone who copies merely five words or more from an AP story, regardless of fair use."
The AP notes that AHN has no news operation at all, and says its employees just browse the Internet scooping up AP copy and re-writing it. AP also accuses AHN of paying its writers badly, and says the company is now "off-shoring" preparation of AHN news stories to the Philippines and Malaysia. (But paying writers poorly, I happen to know, is not a crime.)
AP lawyers explain their support for "hot news" lawsuits: "Today more than ever, the public needs the hot-news doctrine to protect its access to news. News happens globally, in conflict zones and disaster areas. Most of the time, the reporter has to go to the news, often by overcoming various barriers to access. To collect this news requires massive, continuing investments by news services... they must be able to recoup their costs by selling their news stories to newspapers, websites, and other publishers. They cannot profitably do so in the face of parasites like AHN."
Judge Kevin Castel wrote that the defendants didn't have any good reasons why the most recent 2nd Circuit case on this issue should not apply here. In that case, the NBA was allowed to use the "hot news" doctrine to sue Motorola for transmitting its basketball game scores.
The Associated Press is represented by DLA Piper. All Headline News Corp. was represented by Darby & Darby until September 2008, and is now represented by Brian Caplan of Caplan & Ross.
UPDATE: Eric Goldman has legal analysis of this case, including thoughts on the non-"hot news" claims.
‘Die Hard’ Director Is Allowed to Withdraw Plea
A judge has allowed the director of “Die Hard,” John McTiernan, to withdraw his guilty plea in a case that involved the wiretapping of Hollywood celebrities.
Judge Dale S. Fischer of Federal District Court granted on Monday a request from Mr. McTiernan to reverse his plea to a charge of lying to the F.B.I. about his association with a convicted private investigator, Anthony Pellicano.
Mr. McTiernan was not in court for the hearing.
Dan Saunders, an assistant United States attorney, said a new indictment against Mr. McTiernan would be forthcoming. He didn’t elaborate.
Judge Fischer also ordered Mr. McTiernan’s $100,000 fine refunded and set a March 23 hearing for lawyers to further discuss the case.
Mr. McTiernan, 58, argued he had inadequate legal representation and was jet-lagged and under the influence of alcohol when he made the plea in April 2006.
Mr. McTiernan, whose movie credits also include “Predator,” said in his plea agreement that he paid Mr. Pellicano to conduct an illegal wiretap of the film producer Charles Roven after the two worked on the 2002 film “Rollerball,” and that he lied to an F.B.I. agent about the wiretapping.
The director was free on bail while he appealed. In October, the United States Court of Appeals for the Ninth Circuit vacated Mr. McTiernan’s four-month prison sentence and ruled that he was entitled to a hearing on whether he could withdraw his plea.
Mr. Pellicano was convicted of bugging phones of celebrities and others to get information for his clients. He was sentenced to 15 years in federal prison.
More Lawmakers Sign Local Radio Freedom Act
Sixteen additional lawmakers have signed the Local Radio Freedom Act, a bipartisan House resolution opposing the introduction of "any new performance fee, tax, royalty, or other charge" on local radio stations for playing music. The total number of signatures now stands at 126. The Local Radio Freedom Act, unveiled earlier this month at a Capitol Hill event hosted by the Free Radio Alliance, was introduced by Reps. Gene Green (D-TX) and Mike Conaway (R-TX). It counters legislation supported by the Recording Industry Association of America (RIAA), which would require local radio stations to pay royalties for the music they air.
The musicFIRST Coalition, an artist advocacy group that is seeking support for the RIAA-backed legislation, is hosting an event on Capitol Hill today at 2 p.m. Several legislators and a number of artists are expected to be in attendance at the rally at the Rayburn Building this afternoon, according to the New York Post.
Commenting on today's musicFIRST event, NAB EVP Dennis Wharton stated, "NAB welcomes an honest debate over whether radio stations or the record labels have historically been a 'better friend' to musicians. Since the days of Count Basie, there have been two constants in music: free radio airplay has propelled the financial success of countless performers, and those same artists have been systematically abused by the labels. For RIAA to now use artists as a shield in their quest for a performance tax is utterly cynical and hypocritical."
Free Radio Alliance spokesperson Cathy Rought added, "All the star power in the world can’t make the record labels’ performance tax work for communities across America. The record labels’ proposal to levy a performance tax on radio would bankrupt the very stations that make them — and the artists — successful. On top of the threat it would pose to the more than 106,000 radio jobs across America, this legislation would be the death of diversity over our airwaves. Local stations would become even more unaffordable, preventing more women and minorities from entering the business and achieving the dream of ownership. The notion that radio is somehow harming property rights is silly; the artist is in essence, an employee of the label and is supposed to be paid by the label, much like many other creative fields. If the musicFIRST goal is truly to protect the artist, they should be starting with the labels’ notorious compensation issues, not radio whose airplay generates over $2 billion in annual sales for the artist."
Music Industry Takes Stand in Pirate Bay Case
The music industry has lost more than 30 percent in sales since 2001 because of illegal downloading, a top industry official said Wednesday, giving evidence in a Swedish trial.
John Kennedy, the head of the International Federation of the Phonographic Industry, told the Stockholm District Court that Swedish site The Pirate Bay had become "the No. 1 source of illegal music," following court actions against two other popular file-sharing sites, Grokster and Kazaa.
"Over a period of time, piracy has done immense damage to the music industry," Kennedy said, adding that illegal Internet downloads had caused industry sales to tumble from $27 billion in 2001 to $18 billion in 2008.
Kennedy testified on behalf of a handful of record companies, including Sony BMG and EMI, which together with movie companies such as Universal and Warner Bros. Entertainment Inc. are seeking 117 million kronor ($13.2 million) in compensation and damages.
"I believe they are justified and may even be conservative because the damage is immense," he said of the claim.
Gottfrid Svartholm Warg, 28, Peter Sunde, 30, Fredrik Neij, 30, and Carl Lundstrom, 48, are accused of breaking Swedish copyright law by helping millions of Internet users download protected music, movies and computer games for free through The Pirate Bay. They have pleaded not guilty.
According to IFPI, the Pirate Bay is the biggest site of its kind in the world, with 22 million users.
The defendants say their site doesn't violate any laws because it does not host any copyrighted material. Instead, it directs users to other file-sharers, with whom they connect through so-called torrent files to download content. If convicted the four would each face up to two years in prison.
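The distinction the defendants are drawing is concrete: a .torrent file contains only bencoded metadata (a tracker URL, file names, piece hashes), never the media itself. As a rough illustration, here is a minimal bencode decoder in Python applied to a made-up toy torrent file; both the sample data and the simplified decoder are illustrative only, not evidence from the case:

```python
# A .torrent file is just bencoded metadata: a tracker ("announce") URL and
# file information -- it contains none of the shared content itself.
# Minimal bencode decoder (sketch; real clients use a full library).

def bdecode(data, i=0):
    """Decode one bencoded value starting at index i; return (value, next_i)."""
    c = data[i:i+1]
    if c == b'i':                          # integer: i<digits>e
        j = data.index(b'e', i)
        return int(data[i+1:j]), j + 1
    if c == b'l':                          # list: l<items>e
        i += 1
        items = []
        while data[i:i+1] != b'e':
            v, i = bdecode(data, i)
            items.append(v)
        return items, i + 1
    if c == b'd':                          # dict: d<key><value>...e
        i += 1
        d = {}
        while data[i:i+1] != b'e':
            k, i = bdecode(data, i)
            v, i = bdecode(data, i)
            d[k] = v
        return d, i + 1
    j = data.index(b':', i)                # string: <length>:<bytes>
    n = int(data[i:j])
    return data[j+1:j+1+n], j + 1 + n

# A toy torrent: an announce URL and file metadata, no media data at all.
sample = b'd8:announce22:http://tracker.example4:infod4:name8:song.mp36:lengthi4194304eee'
meta, _ = bdecode(sample)
print(meta[b'announce'])   # the tracker URL is all the file points to
```

A real client would hand the announce URL to a tracker to discover peers and then fetch the actual data from those peers; the torrent file itself never carries the copyrighted content, which is the technical basis of the defense.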
After his testimony, Kennedy told The Associated Press that the defendants were "hiding" behind their torrent technology, saying they still contribute in making copyright-protected works available to others.
"I want Pirate Bay to close down," he said. "I want some compensation and I want it to be clear people cannot steal other people's property without there being consequences."
Still, he acknowledged that even if the quartet is convicted, the battle against illegal file-sharing will be far from over.
"I don't think it can be eradicated any more than any other crimes," he said. "I think it could be brought to a more manageable level."
The trial is scheduled to end next month.
How To Kill The Music Industry
During The Pirate Bay trial, the music industry placed the blame for the decline in their revenues squarely on the shoulders of file-sharers. Their logic is clearly flawed, but it could sway the verdict if no alternative explanation is presented. So, if piracy isn’t to blame, then what is *actually* killing the music industry?
According to Per Sundin, CEO of Universal Music, the decline in music revenues in the past 8 years can be fully attributed to (read: blamed on) illegal file sharing. If this were actually true, many of us might even respect his decision to go after pirates as fiercely as the music industry is doing right now. However, the past 8 years have seen a lot more changes in the landscape of home entertainment than Per Sundin would like to admit, and some of those changes have had a massive impact on music profitability — much more so than any amount of piracy.
Let us refresh our memories and take a look at what actually happened during and just before the past 8 years:
1. First, the explosive rise of computer and console gaming. This competitive ‘third element’ has appeared in the entertainment landscape, kicked both music and movies to the curb, and taken a huge cut out of the music industry’s revenues. Consumers don’t have infinitely deep pockets, and billions of ‘recreation dollars’ that used to go almost exclusively to music are now going into gaming.
2. International trade agreements have allowed consumers to buy their music across borders, rather than accepting local prices based on the ‘relative wealth’ of nations instead of the actual value of the product.
3. New forms of distributable media, most notably MP3s but also CDs, have become mainstream. These new media don’t degrade over time and rarely break at all, making music rebuys a thing of the past, and allowing the second-hand market for music to thrive and expand - both of which take a cut out of the music industry’s former revenues.
4. Radical technological innovation has taken place in the field of music creation, processing, mixing, and mastering. Recording hardware, CD burners, music software, and media encoders have evolved to the point where most artists can actually afford decent-quality equipment to do their own recording and producing. Furthermore, this has fostered literally thousands of smaller, specialized studios that are challenging the ‘Big 4’ with lower prices, better terms for artists, genre-specific expertise, etc. Successful artists can now leave the big labels and start their own recording outfits on relatively modest budgets. Naturally, superstars like The Beatles or Frank Sinatra have always had this option, but the recent technological advances have lowered the bar drastically. This development is depriving the ‘Big 4’ of many of their former cash cows, who now use the major labels for their advertising and distribution infrastructure alone.
5. The World Wide Web has become an omnipresent force in the world, allowing cheap, end-to-end distribution of digital music, increasingly cutting out the corporate music distributors, who deal in trucks and CD covers, rather than bytes and bandwidth. With iTunes leading the way (very successfully ‘competing with free’, I might add), billions of songs are now purchased digitally rather than physically, no longer necessitating the big labels’ distribution networks.
6. The total number of radio stations, music television networks and other ‘streaming’ sources of music has grown exponentially, giving music fans a huge selection of free (and legal) music options. Satellite radio, DAB, and internet radio broadcasts have made it trivial for consumers to simply tune into a channel broadcasting the exact sub-genre of music that they feel like listening to (they can even have a stream created for them dynamically, e.g. on Pandora), making the *purchase* of music entirely optional for the casual listener.
7. A massive selection of entertainment alternatives (home computing, console gaming, mobile devices, etc.) have appeared in the home, effectively marginalizing music as an activity. 15-20 years ago, youths would regularly visit each other just to listen to music together; today, that is virtually unthinkable without some form of activity involved, such as playing Guitar Hero or Rock Band, or dancing at a concert.
8. And finally, the music industry itself has embraced the opportunities of digital media, at last letting consumers buy *single* tracks at a time rather than forcing entire albums full of ‘fillers’ on them. Looking at the RIAA’s own sales figures for the past 10 years, there is a *direct* correlation between the drop-off in album sales and the introduction and growth of single-track digital sales. Looking at the actual numbers, it is abundantly clear that the vast majority of consumers never wanted to buy full albums in the first place, but were merely forced to by the lack of affordable single-track media. Now that the digital revolution has arrived, countless millions of 16-track album sales are being turned into 1- or 2-track sales, *decimating* the former revenues on music. THIS is the real reason why the music industry is hurting.
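The unbundling effect in point 8 is easy to put in rough numbers. The prices below are assumptions chosen for illustration, not RIAA figures:

```python
# Illustrative arithmetic only -- the prices are assumptions, not RIAA data.
album_price = 15.00          # assumed typical CD album price
single_price = 0.99          # assumed typical digital single price
singles_bought = 2           # a fan buys just the hits, not the whole album

cd_revenue = album_price
digital_revenue = singles_bought * single_price

drop = 1 - digital_revenue / cd_revenue
print(f"revenue per fan falls by about {drop:.0%}")
```

Under these assumed prices, every album buyer who switches to cherry-picking one or two tracks cuts per-fan revenue by well over 80%, without a single song being pirated.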
In other words: the “it’s common sense” argument that the music industry is peddling in its attempt to tie declining revenues to piracy simply doesn’t hold. The situation is not as clear-cut as the industry believes; the true reason for the decline is something it is still unwilling to face, but will have to face sooner or later:
The fact is that the music industry’s revenues have been artificially inflated for decades because of limited consumer options. The last 15 years of innovation have lifted those limitations, effectively leaving the music industry with an obsolete, defective business model of monopolized production technology, forced album bundling, and almost nonexistent competition in the realm of home entertainment. What is happening now - the decline of music profits and the piracy witch hunt by the music industry - is merely the panicked struggle of a dying business model, a complacent industry’s refusal to accept its diminishing role in a digital world. The pirates are not the reason, and the decline is not the disease. It is the cure.
This is a guest post by Jens Roland. Jens is a computer scientist by training, but a technology forecaster by trade. He has worked at international think tanks as a consultant and researcher in emerging technologies and has written more than 300 articles and a book on the subject.
DATA: Net value of shipped music, in billion dollars
(source: RIAA’s annual reports)
Pirate Bay Trial Day 7: Screenshots for Evidence
After a long weekend break, both sides have returned to the Stockholm court room. Day 6 of the trial was a rest day, so we skip to Day 7, where the IFPI’s evidence collector relies only on screenshots and admits he’s not a BitTorrent expert. Furthermore, the Prosecution doesn’t know where policeman Jim Keyzer is.
Today’s scheduled witnesses are Magnus Mårtensson, a lawyer for the IFPI, Anders Nilsson of Antipiratbyrån and John Stéenmark.
Prosecutor Håkan Roswall begins by saying that Tobias Andersson from Piratbyrån, John Stéenmark and police officer Jim Keyzer no longer have to testify. When the Judge asked the Prosecution about Jim Keyzer, they said they hadn’t been able to get hold of him but had sent an e-mail to try to find out where he was. The defense says it will hear Tobias Andersson tomorrow.
The Prosecutor then further modified the charges against the defendants. He no longer claims that all of The Pirate Bay’s components are necessary in order to share files. He added to the charges that TPB allows its users to upload torrents and that TPB then stores those torrents on its server. There was no immediate objection to the changes. IFPI lawyer Peter Danowsky introduced yet more new evidence, but the defense won’t comment on it until they have had a chance to examine it.
First up to testify was Magnus Mårtensson, a lawyer for the IFPI. The court heard that Mårtensson has been working for the IFPI for 15 years, specializing in anti-piracy work. He explained that he worked gathering evidence against The Pirate Bay by downloading various music albums via .torrent files he obtained from the site using the Azureus client.
Mårtensson’s evidence gathering equipment consisted only of screenshots, as quickly became apparent. Mårtensson’s technological ability was called into question and he acknowledged that it was difficult for him to answer some technical questions.
When asked if he had any network equipment logging exactly what was going on ‘behind the scenes’ of any of his sample downloads, he replied that he didn’t. When asked if he verified in any way during the download process that he had any contact with The Pirate Bay’s tracker, again the answer was negative.
“I didn’t check what tracker was contacted,” Mårtensson said when he was asked if he actually confirmed that he was connected to the Pirate Bay tracker. It seems unthinkable that the Prosecution has gathered ‘evidence’ in this way.
After a short break, policeman Anders Nilsson of the Anti-Piracy Office was next up. He described how he downloaded several .torrent files from The Pirate Bay as part of his evidence gathering, and he explained in detail how one downloads files with BitTorrent. Nilsson said that he downloaded several games and movies, all with uTorrent.
Nilsson then stated that he was sure a majority of the content on The Pirate Bay was copyrighted. However, he had no evidence to support this claim. The defense lawyers pressed him on the point and he conceded: “I have no documentation as to the claim that most material is copyrighted. It is just an opinion.”
One of the defense lawyers (Carl Lundström’s) kept repeating the same line of questioning. He asked what BitTorrent program Nilsson used. Then he asked if he downloaded that program from The Pirate Bay. When told no, he asked a couple of questions about the download process to show that TPB isn’t involved in the actual transfer.
“So the actual downloading [of the actual pirated works/files] happens outside of TPB?” Carl Lundström’s lawyer asked. “Yes,” Anders Nilsson replied.
After only two hours the court decided to end the hearings for the day. Tobias Andersson of Piratbyrån and John Kennedy of IFPI will be heard tomorrow. http://torrentfreak.com/pirate-bay-t...idence-090224/
Day 8: Pirates Kill the Music Biz
It’s Day 8 of The Pirate Bay trial and several entertainment industry CEOs take the stand. IFPI’s CEO John Kennedy said that TPB was an extremely damaging force to the global music industry and that what the site offers is simply too tempting for people to resist. He also admitted to not understanding how TPB or even uTorrent works.
Today’s first witness is Tobias Andersson from Piratbyrån and later on the IFPI’s CEO John Kennedy will testify, although it’s not expected that he will respond to the open letter and peace offering issued yesterday by the ‘Kopimists’. Also up, Rasmus Ramstad (CEO of SF) and Per Sundin (CEO of Universal Music)
Tobias Andersson was briefly questioned about the speech Fredrik Neij (TiAMO) gave after the TPB raid in 2006. Andersson told the court that he wrote the speech for Neij, since speech writing isn’t Neij’s thing. Andersson’s appearance was over in a few minutes and by 9:15 John Kennedy was testifying in English, through a Swedish translator.
IFPI’s John Kennedy confirmed he is the CEO of IFPI and summarized his duties there, noting the group has 1,500 members worldwide and that its main aims are to ‘improve’ copyright laws through government lobbying and to fight piracy around the world, since “piracy has done immense damage to the music industry.” Kennedy says that IFPI takes up strategic litigation against various targets worldwide.
Kennedy said that for a long time the industry sold its product in physical form (and experienced a limited piracy problem) but with the advent of digital music this situation has grown worse, with some claiming that copyright didn’t even exist in the digital world. He noted that the main sets of previous litigation were in the US (Grokster) and Australia (Kazaa).
Kennedy then said how pleased the music industry was with the legal wins against these two companies and in the wake of their demise, The Pirate Bay took their chance to develop their business. Kennedy said he first heard of TPB in 2004 and it was quickly becoming the #1 source of illegal music and this was damaging to the industry.
Kennedy noted the transition to digital music was a great threat to them, and although more music is currently being consumed than ever before, “less is being paid for than ever before.” If music is available for free, says Kennedy, many people find that temptation too much to resist and new business models can’t flourish.
The discussion then moved to the claim for damages. Kennedy said the claims were “justified and maybe even conservative, since the damage is immense.” Talk moved to the link between the cost of downloading legally and the claim for damages. Kennedy said that for the industry, CDs were more profitable than digital downloads are today.
He said that artists, studio producers, songwriters, music publishers, studio staff and the marketing and promotion people all have to get paid, and that the music industry spends more money than most other industries on R&D. It invests 20% of its revenue in finding new artists, and although some suggest that this isn’t needed in the Internet age, they are wrong, said Kennedy.
Kennedy went on to explain that music marketing is designed to take effect in “Week One” of an album’s release and in an ideal world a new release would chart at #1 and would reach its sales targets in that first week. But if products are made available on Pirate Bay during that time he said, “then purchases are taken out of the market and because of the illegal use of music, the legal use of music under-performs and in some countries that can have a dramatic effect.”
Kennedy was asked about CD sales in the last 10 years. He said they dropped from $27 billion to $18 billion. He said that the Top 10 CDs in 2001 sold 69 million units and the Top 10 CDs in 2008 sold 46 million units. 9 years ago the #1 record sold 13 million units but in 2008, Coldplay sold half of that.
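Taken at face value, Kennedy’s own figures imply declines of roughly a third on both measures. A quick back-of-the-envelope check, using only the numbers from the testimony:

```python
# Recomputing the percentage declines behind Kennedy's testimony figures.
cd_sales_2001, cd_sales_2008 = 27e9, 18e9   # industry revenue in USD, per testimony
top10_2001, top10_2008 = 69e6, 46e6         # Top-10 CD unit sales, per testimony

def pct_decline(old, new):
    """Fractional decline from old to new."""
    return (old - new) / old

print(f"Industry revenue down {pct_decline(cd_sales_2001, cd_sales_2008):.0%}")  # 33%
print(f"Top-10 unit sales down {pct_decline(top10_2001, top10_2008):.0%}")       # 33%
```

Both the dollar figure and the unit figure he cited correspond to a decline of about one third over the decade.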
Kennedy was asked what impact legal downloads have on these figures, but he denied they made up the difference. The music industry has always relied on young people for sales he said, and these same people have got used to using illegal sites. “Many legitimate sites have struggled to compete with free. It’s impossible to compete with free,” said Kennedy.
When put to him that some claim that illegal downloading promotes sales, Kennedy labeled this as old-fashioned thinking and said that people don’t think this way anymore. When asked about P2P providing live performance promotion, Kennedy said that every single live performance success is linked to a previously successful recording career/sales.
When asked about the differences between TPB and Google, Kennedy said there is no comparison. “We talk to Google all the time about preventing piracy. If you go to Google and type in Coldplay you get 40 million results - press stories, legal Coldplay music, review, appraisals of concerts/records. If you go to Pirate Bay you will get less than 1000 results, all of which give you access to illegal music or videos. Unfortunately The Pirate Bay does what it says in its description and its main aim is to make available unauthorized material. It filters fake material, it authorizes, it induces.”
Kennedy says the threat from TPB is growing all the time. “They are proud of this with their statistics - there are 22 million users, 1 million visitors each day, 1.6 million .torrent files and they say they are responsible for 55% of BitTorrent traffic. They pride themselves on the quality of what they deliver.”
When questioned about the IFPI’s 10X damages multiplier for pre-release material, Kennedy felt this was fair considering the damage it does to the launch of a product. Kennedy says they have teams of experts monitoring the Internet every day for piracy.
He went on to say that people who download music from TPB spend much less on music than they would otherwise and if they didn’t get it for free they would buy it. “It is common sense, if they couldn’t get it for free they would buy it and when we ask them, they confirm that.”
When asked if downloaders have less money than others, Kennedy said that younger people have the money but just don’t spend it on music anymore. Kennedy said that the reduction in sales in the music industry is directly attributable to illegal downloading.
When asked about scientific research on the issue, Kennedy said that of several reports, only one said there was no causal link between file-sharing and lost sales - all the rest say there is. Discussion of certain reports on the issue took place, with defense lawyers questioning Kennedy on the details of the reports.
The defense lawyers pointed out that in one of the reports Kennedy refers to, lesser known artists appear to be downloaded a lot on TPB but Kennedy said although he is 56 years old, he recognizes nearly all of the artists in the TPB Top 100 list.
Carl Lundström’s lawyer asked about the profit on the industry’s $18bn turnover from 2008. “Terrible,” Kennedy replied. Of the big players, “..only one company is making a profit.” Kennedy was pressed: if he knows the turnover, why doesn’t he know the profit? He said it was difficult to say.
He was also asked how much of this $18bn turnover is used to fight piracy. Kennedy said there are three main areas of expenditure: funding the RIAA in the US, the IFPI globally, and more local groups such as IFPI (Sweden). They all have budgets, and a large proportion of these is used to fight piracy.
The global amount used by IFPI on lobbying and fighting piracy is £75 million.
Kennedy said he qualified as a lawyer in the 1970s but hasn’t practiced recently. He was asked if he understood BitTorrent. Kennedy said he did, but only in “very vague terms.” When the defense lawyers asked more detailed questions, about uTorrent for instance, Kennedy said he’d heard of it but had no idea of the details. It was very clear he knew nothing about any remotely technical issues.
Kennedy was asked if IFPI has taken any action against the actual sharers of the music made available via TPB, as detailed in this case. He said he couldn’t say and didn’t know who these individuals are. He then admitted to not knowing how The Pirate Bay works, so the defense lawyers put it to him: if you don’t understand how TPB works, how can you say they are to blame? Again he was pressed on why no action was taken against the actual sharers, but he said he didn’t know and admitted, “It’s probably unlikely we took action.”
Kennedy was asked why they haven’t sued Google the same way as TPB. He said that Google said they would partner IFPI in fighting piracy and he has a team of 10 people working with Google every day, and if Google hadn’t announced they were a partner, IFPI would have sued them too.
When pressed on the earlier reports that Kennedy referred to, the defense lawyers wanted to know if IFPI had commissioned any of them. Kennedy said he didn’t know.
The court then took a morning break.
After the break the hearings continued as Bertil Sandgren, a board member of the Swedish film institute took the stand. He was asked to explain what he knows about file-sharing, and told the court that he knew that some movies leak on filesharing networks before they premiere, that there is no copy protection on these files and that there are even subtitles available.
The court then asked for the questions to be kept relevant to the damages being claimed. Sandgren went on to say that he believes the impact of file-sharing on the movie industry started in 2002/2003. He claimed that there is statistical evidence that illegal file-sharing has affected the number of seats sold per film. In Sweden, ticket sales fell by 31% between 2002 and 2006, Sandgren explained.
“The reason for this drop is that the number of premieres have increased but sales have decreased. File-sharing has somewhat made the market thinner. The difference between number of sold tickets on average has dropped 10,000 per film per year. That equals between 800,000 and 1,000,000 SEK per film,” Sandgren said.
Sandgren further explained that the damages they claim are based on a fictitious license fee. They calculated the total number of movie downloads in a year and used the film’s market share (4% for the movie “Mastermind”) to estimate the number of downloaders. “If there were 1 million downloaders in total, it’s probable that 4% downloaded Mastermind,” Sandgren said. “Of those, 28.5% were downloaded from TPB. That gives 12,000.”
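Sandgren’s arithmetic can be reproduced directly from the figures he gave in testimony (the 1 million total is his own illustrative assumption, not a measured number):

```python
# Reproducing the fictitious-license damages arithmetic from Sandgren's
# testimony for the film "Mastermind". All inputs are his courtroom figures.
total_downloaders = 1_000_000   # assumed yearly total, per testimony
market_share = 0.04             # Mastermind's market share, per testimony
tpb_share = 0.285               # fraction attributed to The Pirate Bay

downloads_via_tpb = total_downloaders * market_share * tpb_share
print(f"Estimated TPB downloads of Mastermind: {downloads_via_tpb:,.0f}")
# 1,000,000 x 0.04 x 0.285 = 11,400 - roughly the "12,000" cited in court
```

Note that the exact product is 11,400, so the courtroom figure of 12,000 appears to be rounded up.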
After his explanation of how the damages are calculated, the defense lawyers questioned Sandgren. Most of their questions focused on the link between downloading and the decline in ticket sales. According to the defense lawyers there is research showing this link is not that straightforward, while stressing that 2008 has been the best year for the Swedish movie industry ever. Sandgren said that he didn’t want to comment on factors underlying the success year.
Around noon the court took a lunch break. http://torrentfreak.com/the-pirate-b...-day-8-090225/
Day 9: BitTorrent Is Not Evil
Yesterday several entertainment industry insiders explained how piracy was responsible for the downfall of their industries. Today, Kristoffer Schollin from Gothenburg University explains that BitTorrent is not evil, while media professor Roger Wallis informs the court that file-sharing is actually beneficial to the entertainment industry.
First up today was Kristoffer Schollin who spoke via telephone from Gothenburg University. He explained he is a lecturer in IT law with a particular interest in file-sharing and has written a paper on Digital Rights Management (DRM). He has also made a special witness report for the court.
Answering questions from the defense, Schollin explained that a .torrent file is a more sophisticated kind of Internet link (akin to an http hyperlink) and that The Pirate Bay is an “open database” of .torrent files. Several large companies use BitTorrent technology, said Schollin, including Blizzard, who use it for World of Warcraft.
When asked about TPB specifically, Schollin noted that the site is essentially a BBS (Bulletin Board) for .torrent files, attached to a forum for debate. He was also asked, in his opinion, if TPB is illegal. “That’s for the court to decide,” he said, while noting that the technology behind the site is not illegal in any way.
Schollin told the court that The Pirate Bay may not be the world’s largest tracker, but it is the most famous one, largely thanks to the media and thanks to the trial. Right now there are maybe a dozen other big ones and maybe even a thousand others, he said.
Going on, he noted that it is usually the sites that are known to users, while trackers can operate behind the scenes, unseen by regular users. The day of the very big torrent site may be over, he added, saying he believes the future could lie in meta-searches, and explaining how client-based searches like Vuze’s operate.
When asked about the type of content indexed on TPB, Schollin said, “My God, everything,” noting that both copyright and copyright-free material can be found.
When speaking with Carl Lundström’s lawyer Per E Samuelsson, Schollin admitted that when searching for .torrents via Google (using Harry Potter as an example), more results could be found than with TPB’s search alone. Indeed, said Schollin, EU law documents are easier for him to find via Google than they are on the EU’s own website.
The so-called King Kong defense also resurfaced, with Samuelsson asking Schollin if it was possible to conclude that the torrent file uploaded by user ‘KingKong’ was first published on TPB. Schollin said it was not possible.
Touching again on the issue of whose actual tracker is used when a torrent file is activated, Schollin said that just because a .torrent is available on TPB, it doesn’t automatically follow that the file uses TPB’s tracker.
Schollin went on to explain how to make a .torrent file which links to content. He said that in the creation stage, it doesn’t even require an Internet connection and everything is done on the user’s PC with a torrent client, not on TPB. Once created the .torrent file could be uploaded on to the Internet. It would then be indexed by Google, which then allows anyone to access the .torrent via a Google search.
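What Schollin described can be sketched concretely. The toy Python below mimics what a BitTorrent client does when creating a .torrent file entirely offline: it hashes the payload in fixed-size pieces and bencodes a metainfo dictionary, per the BitTorrent metainfo format. The tracker URL is a hypothetical placeholder; no connection to any site, TPB or otherwise, is needed at any point.

```python
# Minimal sketch of offline .torrent creation: piece-hash the data,
# then bencode the metainfo dictionary. No network access required.
import hashlib

def bencode(obj):
    """Encode ints, byte strings, lists and dicts in bencoding."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        # Bencoded dict keys must be byte strings in sorted order
        items = sorted(obj.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(obj)}")

def make_torrent(name: bytes, data: bytes, piece_len: int = 16384) -> bytes:
    # SHA-1 digest of every fixed-size piece, concatenated
    pieces = b"".join(
        hashlib.sha1(data[i:i + piece_len]).digest()
        for i in range(0, len(data), piece_len)
    )
    meta = {
        b"announce": b"http://tracker.example.org/announce",  # hypothetical tracker
        b"info": {
            b"name": name,
            b"length": len(data),
            b"piece length": piece_len,
            b"pieces": pieces,
        },
    }
    return bencode(meta)

torrent = make_torrent(b"demo.bin", b"hello world" * 1000)
print(torrent[:50])  # begins with the bencoded announce URL
```

Everything above runs on the user’s own machine; only once the resulting bytes are written to a file and uploaded somewhere does any third party, search engine or torrent site enter the picture.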
Then it was Prosecutor Håkan Roswall’s turn to question Schollin. He put it to Schollin that kudos could be achieved in file-sharing circles if an individual put pre-release material up on the Internet, a point with which Schollin agreed.
Roswall asked Schollin why he felt the TPB had grown so big and so popular. Schollin said that many users may feel that participation might be considered ‘cool’. The discussion again moved back to DHT (Distributed Hash Table) and then the court took a break.
On return, IFPI lawyer Peter Danowsky stepped up to question Kristoffer Schollin. He asked where Schollin’s interest in TPB began and he replied it started when there was lots of discussion about them on the Internet. Conversation moved to Schollin’s knowledge of TPB’s infamous ‘legal’ page and the ideology of some of its users.
Next up to question Schollin was Monique Wadsted, representing the movie companies. She asked Schollin if he had heard the rumor that 40% of the Internet’s traffic is down to TPB. Schollin said this was incorrect and it was more likely that they were responsible for 40% of all BitTorrent traffic. Wadsted then put it to Schollin that 50% of all the world’s .torrent files sit on TPB, and he denied this amount too, but recognized that there would be a significant number.
Schollin was then asked by the defense if he believed that TPB has a role in transmitting communications on the Internet. Schollin agreed it did. When asked if TPB might be considered a ’service provider’ under the law, he said that was for the court to decide.
Up next as a witness was Roger Wallis. Wallis is a media professor, composer and Chairman of the Swedish Composers of Popular Music, and is involved in other organizations dedicated to the rights of musicians. However, Wallis has previously said that he does not see the difference between TPB and other search engines such as Google, and has criticized the music industry for being too slow in adopting technology.
Speaking with Peter Altin (Peter Sunde’s lawyer), Wallis said he specializes in the development of the music industry on the Internet, and because of this some have incorrectly assumed that he works for the industry - he doesn’t.
Wallis referred to a report he wrote which detailed the music industry’s approach to digital technology. He said there were elements who would do anything to smother it, referring to the backlash against cassette tapes in the 1970’s.
Altin asked Wallis if there is any connection between illicit downloads and lost sales in the music industry. Contradicting the opinion of John Kennedy of the IFPI in his testimony yesterday, Wallis said that downloading caused an increase in sales of live event tickets and although there has been a reduction in CD sales, this won’t continue.
Wallis went on to explain that while some people download, these people also tend to buy more CDs than others that don’t. It’s not just downloading causing competition for the industry, other things have an effect such as the growth of computer games, he said.
Wallis believes the music industry is shooting itself in the foot by going after file-sharers, for the reasons mentioned in the previous paragraph. He said that on the whole, file-sharing is beneficial to the music and movie industries, pointing out that the movie industry just had its most successful year ever. But the music industry doesn’t help itself he argues. Anyone who has bought a Beatles single in the past, simply cannot buy the same single in the digital domain due to licensing issues. “This is madness,” he said.
Next up to question Wallis was Peter Danowsky, who immediately started to annoy him by questioning his credentials. Danowsky mused whether Wallis was even a proper professor, disputing the year in which Wallis qualified as such, casting doubt on his credibility and criticizing him. “Have you no better questions to ask?” Wallis replied, reportedly visibly annoyed.
With tempers starting to fray, the court took a break.
After the break, media professor Roger Wallis was questioned by Henrik Pontén from Sweden’s Anti-Piracy Office. Pontén picked up where Danowsky left off and asked the professor to elaborate a bit more on how he acquired his title. “Can you use Google?” Wallis replied. “Then you can easily find my CV,” he added, and the court agreed with his assessment that this ground had already been covered.
Pontén then showed some graphs from a study indicating that 18% of those who download copyrighted music buy less, while only 8% say they buy more. These figures caused some confusion in court, and Wallis responded that they do not correspond with his own findings. “I believe that it has no relevance,” Wallis added. The prosecution asked some more questions about the contradictory results of the other study, but Wallis didn’t want to go into it.
When Wallis left the stand he was asked whether he wanted compensation for his appearance. “You are welcome to send some flowers to my wife,” he responded.
Defendant Peter Sunde then asked the court if it was OK to show an 8-minute clip that explains how BitTorrent works. The defense explained that the film would show that none of the alleged criminal offenses actually took place, since torrent files can be shared in many ways. Fredrik Neij, one of the other defendants, further said that the SLK investigation was flawed because not all of the torrents presented as evidence are exclusively tracked by TPB.
After a short break the film was played (available for download here), and it showed how a torrent is created. First, a BitTorrent client is downloaded. To make the torrent, a tracker has to be added; hundreds of trackers can be found through Google, the film explained. It went on to show how these torrent files can be shared through MSN, Skype, blogs like Wordpress, or a website such as The Pirate Bay. The other party can then grab the torrent and start downloading.
The rest of the day the court will go over the personal charges against Fredrik Neij and Gottfrid Svartholm. These are separate cases, not related to TPB, and we will therefore not cover them on TorrentFreak. Our daily coverage of the proceedings in the TPB trial will continue on Monday.
Pirate Bay Witness’ Wife Overwhelmed With Flowers
When Professor and media researcher Roger Wallis left the stand yesterday, the court asked whether he wanted to be reimbursed for his appearance. “You are welcome to send some flowers to my wife,” he responded. In the hours that followed, many Pirate Bay supporters took him up on the suggestion.
Professor and media researcher Roger Wallis appeared as an expert witness at the Pirate Bay trial yesterday. He was questioned on the link between the decline of album sales and filesharing. Wallis told the court that his research has shown that there is no relation between the two.
He came under heavy attack from industry lawyers Danowsky, Pontén and Wadsted, who did everything they could to discredit him and smear his reputation. When Wallis was asked whether he wanted to be reimbursed for travel expenses and the like, he light-heartedly suggested sending some flowers to his wife.
His statement was picked up by the large audience listening in to the live audio from the trial and flowers soon began arriving at the Wallis’ house.
Roger’s wife, Görel Wallis, wasn’t surprised by her husband’s whim in court:
“We have been married for 38 years. He proposed half an hour after we met and I said maybe. After a day, he had convinced me”, she said.
At a local flower store in Stockholm they had received 100 orders by 20.30 last night. Owner Kristian Skald said that two nearby stores had received an equal amount of orders.
“Last delivery was 33 bouquets Thursday night. There will be more to come on Friday,” the owner of the flower shop commented.
Today, Friday, the couple celebrates their wedding day anniversary and on Saturday it’s Görel’s birthday. Roger Wallis feels she is worth all the flowers she gets.
“She was very worried before the trial. They questioned my competence and that made her very sad. She hadn’t slept for two days,” Roger said.
A web page has been set up that collects what has been given so far, complete with an ever-growing stack of CDs that show how many sales the music-industry has lost by slandering the Professor.
Thus far, in an amazing show of generosity from a section of society labeled by the music industry as ‘thieves’, more than 4100 Euros worth of flowers, chocolate and gifts have been sent to the couple.
The Wallises soon ran out of vases for the flowers, but Görel knows that sharing is caring and will distribute them to all the residents of their apartment building.
“We will make sure it will be beautiful here.”
Until next week,
Current Week In Review
Recent WiRs -
February 21st, February 14th, February 7th, January 31st
Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.
"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public." - Hugo Black