24-02-10, 09:55 AM    #1
Join Date: May 2001
Location: New England
Peer-To-Peer News - The Week In Review - February 27th, '10
"We're not a totalitarian society. We're not China. We expect a little freer discourse than that." – Pat Powers
"This bill will let people know—in a way that they can understand—that their personal files are being shared with complete strangers." – Sen. Amy Klobuchar (D-Minn)
"AT&T's 3G network is now the top performer in 13-city tests, with download speeds 67 percent faster than its competitors." – Mark Sullivan
February 27th, 2010
Google Execs Convicted in Italy Over Down's Syndrome Video
A Milan court convicted three Google Inc executives on Wednesday for violating the privacy of an Italian boy with Down's syndrome by letting a video of him being bullied be posted on the site in 2006.
Google will appeal the six-month suspended jail terms and said the verdict "poses a crucial question for the freedom on which the internet is built," since none of the three employees found guilty had anything to do with the offending video.
"They didn't upload it, they didn't film it, they didn't review it and yet they have been found guilty," said Google's senior communications manager, Bill Echikson, in Milan.
The court convicted senior vice-president and chief legal officer David Drummond, former Google Italy board member George De Los Reyes and global privacy counsel Peter Fleischer. Senior product marketing manager Arvind Desikan was acquitted.
The executives, none of whom are based in Italy, do not face actual imprisonment as the sentences were suspended, and an appeals process in Italy can take many years.
They were not in Italy for the hearing. Drummond is based in California, Fleischer in Paris and Desikan in London, while De Los Reyes has since retired, Echikson told Reuters.
The complaint was brought by an Italian advocacy group for people with Down's syndrome, Vivi Down, and the boy's father, after four classmates at a Turin school uploaded a clip to Google Video showing them bullying the boy.
"A company's rights cannot prevail over a person's dignity. This sentence sends a clear signal," public prosecutor Alfredo Robledo told reporters outside the Milan courthouse.
Down's syndrome is the most common genetic cause of mental retardation, occurring in about 1 out of 700 live births.
The video was filmed with a mobile phone and posted on the site in September 2006.
"Threat to Net Freedom"
Google argued that it removed the video immediately after being notified and cooperated with Italian authorities to help identify the bullies and bring them to justice.
It says that, as hosting platforms that do not create their own content, Google Video, YouTube and Facebook cannot be held responsible for content that others upload.
Drummond said in a statement the verdict "sets a dangerous precedent" and meant "every employee of any internet hosting service faces similar liability." He said the law was clear in Italy and the European Union that "hosting providers like Google are not required to monitor content that they host."
Fleischer said if employees were "criminally liable for any video on a hosting platform, when they had absolutely nothing to do with the video in question, then our liability is unlimited."
The prosecutors accused Google of negligence, saying the video remained online for two months even though some web users had already posted comments asking for it to be taken down.
Down's syndrome support group Vivi Down said in a statement that it was "very satisfied" with the guilty verdict.
Censoring of web sites has become a hot issue in Italy in recent months, following a spate of hate sites against officials including Prime Minister Silvio Berlusconi.
The government briefly studied plans to black out Internet hate sites after fan pages emerged praising an attack on the premier, but the idea was dropped after executives from Facebook, Google and Microsoft agreed to a shared code of conduct rather than legislation.
(Additional reporting by Emilio Parodi and Eleanor Biles; writing by Stephen Brown in Rome; Editing by Elizabeth Fullerton.)
German Court Throws the Book at RapidShare
Book publishers claimed a key legal victory over a large European website that was found to have hosted copies of pirated books.
A court in Hamburg, Germany, has declared that "copyrighted literary works are unlawfully being made publicly available in the context of a share-hosting system on the Internet," a group of six major publishers said Wednesday.
The court on Feb. 10 ordered RapidShare and its owners, Christian Schmid and Bobby Chang, to "promptly block access" to pirated books and "take precautions going beyond this in order to prevent ... further similar infringement."
The company, based in Switzerland, did not respond to an e-mail requesting a statement.
Book piracy has become a growing concern for publishers as they begin to distribute more of their titles in digital formats on devices such as Amazon.com's Kindle, Sony's Reader or the upcoming Apple iPad. To combat piracy, publishers have been quietly issuing so-called takedown notices to websites that host or facilitate the sharing of pirated books, requesting the sites to delete or shut down access to the files.
Their suit against RapidShare is among the industry's first concerted efforts to tackle a website and is akin to lawsuits lobbed by music labels against Napster in 1999 and 2000 for copyright violations. The six publishing plaintiffs were John Wiley & Sons, McGraw-Hill, Macmillan, Reed Elsevier, Cengage Learning and Pearson.
RapidShare accounted for 36% of the 53,000 takedown notices issued by publishers between July and December, according to a study by Attributor, a content monitoring consulting service based in Redwood City, Calif. The site attracts more than 42 million visitors a day, representing just under 3% of the world's Internet users, according to Web traffic tracker Alexa Internet.
Both Google and RapidShare plan to appeal – Jack.
New Zealand Considering $15,000 Penalty for Web Downloads
Anyone caught breaching copyright by downloading films and music from the internet will face large penalties and could even be disconnected by their internet service under new legislation.
A three-strikes system will hand out formal warnings to offenders, and further illegal downloads could prompt copyright owners to apply for up to $15,000 compensation from the user.
The copyright owner could also ask the relevant internet service provider to cut off the customer's internet connection for up to six months.
The ban could happen only after a copyright owner, such as a media company, applies for a district court order for the internet service provider to suspend the user's internet access.
The Copyright (Infringing File Sharing) Amendment Bill was introduced to Parliament this week and is a replacement for last year's controversial proposals that would have banned downloaders from ever having an internet connection.
The former bill also included no process that allowed internet users to rebut a copyright owner's allegations.
Under the new law, however, internet users who feel they have been wrongly penalised will be able to take their case to the Copyright Tribunal, for free.
Bloggers say that even though the new bill is an improvement on the old one, it still shouldn't be passed.
Creative Freedom NZ director Bronwyn Holloway-Smith said internet termination was "quite extreme". A fine would be an adequate punishment. "The internet has become a core, vital service - you wouldn't terminate someone's right to post a letter. There's a three-strikes system, but there's a tribunal in place to judge cases. We've yet to see what scale they will be basing their fines on. We want something proportionate."
Dozens of bloggers throughout the country are planning to take their blogs down on Monday morning in an "internet blackout" protest against the bill. Similar protests a year ago led John Key's government to stall the previous Labour government's attempt to update copyright law.
The focus of protest was Section 92a of Labour's law, which would have put the onus on ISPs to disconnect copyright infringers.
Ms Holloway-Smith said one flaw in the new bill was that anyone disconnected from one internet service would be free to sign up with another.
Justice Minister Simon Power stressed that the bill's main purpose is to protect copyright owners from being ripped off by people continuously downloading and sharing files.
"Online copyright infringement is a problem for everyone, but especially for the creative industry, which has experienced significant declines in revenue as file-sharing has become more prevalent," he said.
"This bill is the result of extensive consultation with stakeholders and is an important step in addressing a complex issue."
The bill's explanatory notes say ongoing issues with people downloading movies, music and other software - and then distributing them to other internet users - were having a "negative and cumulative effect on New Zealand's music, film and software industries".
The bill is intended to deter people from illegally downloading and sharing files, as well as educating them about the consequences of doing so.
Big Brother Is Watching...
The new copyright law would require internet service providers (ISPs) to keep information about account holders' use of the internet for at least 40 days.
Account holders found to have persistently breached copyright rules face having their internet connections cut off by their ISP if instructed to do so by the district court.
ISPs must also keep for at least 12 months any information about infringements sent by copyright owners and copies of infringement notices issued to an account holder.
ISPs may not release the name or contact details of an account holder to a copyright owner unless the account holder gives permission or the ISP is required to by the Copyright Tribunal or a court.
Film Industry Appeals in iiTrial Case
AFACT has lodged a last-minute appeal against a Federal Court judgement earlier this month that cleared ISP iiNet of liability for the copyright-infringing activities of its subscribers.
The Australian Federation Against Copyright Theft, representing 34 of the world's largest film companies, filed an appeal in the Federal Court today on 15 grounds.
The legal community had anticipated that AFACT would appeal, if only to exhaust all potential avenues before calling for the Federal Government to intervene.
In a statement today, AFACT said the judgement "left an unworkable online environment for content creators and content providers" and "represents a serious threat to Australia's digital economy."
AFACT Executive Director Neil Gane said the judgement was "out of step" with established copyright law.
"The court found large scale copyright infringements, that iiNet knew they were occurring, that iiNet had the contractual and technical capacity to stop them and iiNet did nothing about them," he said.
Gane said that previous case law (such as Cooper and Kazaa) suggested iiNet's failure to act should have amounted to "authorisation" of copyright infringement - a notion that was rejected by Justice Cowdroy in his ruling.
Gane said the decision had rendered Safe Harbour provisions irrelevant.
"This decision allows iiNet to pay lip service to provisions that were designed to encourage ISPs to prevent copyright infringements in return for the safety the law provided," he said. "If this decision stands, the ISPs have all the protection without any of the responsibility.
"By allowing internet companies like iiNet to turn a blind eye to copyright theft, the decision harms not just the studios that produce and distribute movies, but also Australia's creative community and all those whose livelihoods depend on a vibrant entertainment industry."
iiNet chief executive Michael Malone released a statement today saying that it was "more than disappointing and frustrating that the studios have chosen this unproductive path."
Malone again pressed the film industry to drop its litigation and talk to ISPs about finding better commercial models for its content.
"This legal case has not stopped one illegal download and further legal appeals will not stop piracy," he said.
Malone said services like Hulu or iiNet's content freezone represent the only effective means of combating online piracy.
"We stand ready to work with the film and television industry to develop, implement and promote these new approaches and models," he said. "We are ready to champion them in partnership with the studios, but court proceedings and more legal challenges only serve to delay this and in the meantime more copyright material will be stolen."
Leaked ACTA Draft Reveals Plans for Internet Clampdown
ISPs must snoop on subscribers or face being sued by content owners
The US, Europe and other countries including New Zealand are secretly drawing up rules designed to crack down on copyright abuse on the internet, in part by making ISPs liable for illegal content, according to a copy of part of the confidential draft agreement that was seen by the IDG News Service.
It is the latest in a series of leaks from the Anti-Counterfeiting Trade Agreement (ACTA) talks that have been going on for the past two years. Other leaks over the past three months have consisted of confidential internal memos about the negotiations between European lawmakers.
The chapter on the internet from the draft treaty was shown to the IDG News Service by a source close to people directly involved in the talks, who asked to remain anonymous. Although it was drawn up last October, it is the most recent negotiating text available, according to the source.
It proposes making ISPs (internet service providers) liable under civil law for the content their subscribers upload or download using their networks.
To avoid being sued by a record company or Hollywood studio for illegally distributing copyright-protected content, an ISP would have to prove that it took action to prevent the copyright abuse, according to the text. In a footnote, the text gives an example of the sort of policy ISPs would need to adopt to avoid being sued by content owners:
"An example of such a policy is providing for the termination in appropriate circumstances of subscriptions and accounts in the service provider's system or network of repeat offenders," the text states.
Terminating someone's subscription is the graduated response enacted in France last year that sparked widespread controversy. The French law is dubbed the "Three Strikes" law because French ISPs must give repeat file sharers two warnings before cutting off their connection.
Other countries in Europe are considering similar legal measures to crack down on illegal file-sharing. However, EU-wide laws waive ISPs' liability for the content of messages and files distributed over their networks.
European Commission officials involved in negotiating ACTA on behalf of the EU insist that the text being discussed doesn't contradict existing EU laws.
"There is flexibility in the European system. Some countries apply judicial solutions (to the problem of illegal file-sharing), others find technical solutions," said an official on condition he wasn't named.
He said the EU doesn't want to make a "three strikes" rule obligatory through the ACTA treaty. "Graduated response is one of many methods of dealing with the problem of illegal file-sharing," he said.
He also admitted that some in the Commission are uncomfortable about the lack of transparency in the ACTA negotiations.
"The fact that the text is not public creates suspicion. We are discussing internally whether the negotiating documents should be released," he said, but added that even if it was agreed in Brussels that the documents should be made public, such a move would require the approval of the EU's 10 ACTA negotiating partners.
The participating countries are the US, the EU, Canada, Mexico, Australia, New Zealand, South Korea, Singapore, Jordan, Morocco and the United Arab Emirates.
In a separate leak that first appeared on blogs last week, the European Commission updated members of the European Parliament on the most recent face-to-face meeting between the signatory countries, which took place in Mexico at the end of last month.
According to that leak, the internet chapter of the treaty was discussed, but no changes to the position suggested by the US last fall were agreed.
"The internet chapter was discussed for the first time on the basis of comments provided by most parties to US proposal. The second half of the text (technological protection measures) was not discussed due to lack of time," the memo said, adding:
"Discussions still focus on clarification of different technical concepts, therefore, there was not much progress in terms of common text. The US and the EU agreed to make presentations of their own systems at the next round, to clarify issues."
The Commission official refused to comment on the content of the leaked documents.
The next meeting of ACTA negotiators will take place in New Zealand in April.
EU Data Protection Supervisor Warns Against ACTA, Calls 3 Strikes Disproportionate
Peter Hustinx, the European Data Protection Supervisor, has issued a 20-page opinion expressing concern about ACTA. The opinion is a must-read and points to the prospect of other privacy commissioners speaking out. Moreover, with the French HADOPI three strikes law currently held up by its data protection commissioner, it raises questions about whether that law will pass muster under French privacy rules.
Given the secrecy associated with the process, the opinion addresses possible outcomes based on the information currently available. The opinion focuses on three key issues: three strikes legislation, cross-border data sharing as part of enforcement initiatives, and transparency.
On three strikes, the opinion begins by noting the privacy implications:
Such practices are highly invasive in the individuals' private sphere. They entail the generalised monitoring of Internet users' activities, including perfectly lawful ones. They affect millions of law-abiding Internet users, including many children and adolescents. They are carried out by private parties, not by law enforcement authorities. Moreover, nowadays, Internet plays a central role in almost all aspects of modern life, thus, the effects of disconnecting Internet access may be enormous, cutting individuals off from work, culture, eGovernment applications, etc.
The opinion then assesses three strikes within the context of European data protection law, concluding that it is a disproportionate measure:
Although the EDPS acknowledges the importance of enforcing intellectual property rights, he takes the view that a three strikes Internet disconnection policy as currently known - involving certain elements of general application - constitutes a disproportionate measure and can therefore not be considered as a necessary measure. The EDPS is furthermore convinced that alternative, less intrusive solutions exist or that the envisaged policies can be performed in a less intrusive manner or with a more limited scope. Also on a more detailed legal level the three strikes approach poses problems.
Among the specific problems, Hustinx concludes that the benefits simply don't outweigh the costs:
The EDPS is not convinced that the benefits of the measures outweigh the impact on the fundamental rights of individuals. The protection of copyright is an interest of right holders and of society. However, the limitations on the fundamental rights do not seem justified, if one balances the gravity of the interference, i.e. the scale of the privacy intrusion as highlighted by the above elements, with the expected benefits, deterring the infringement of intellectual property rights involving - for a great part - small scale intellectual property infringements.
The opinion also considers the privacy implications of data sharing arrangements facilitated by ACTA for enforcement purposes:
It can be questioned first whether data transfers to third countries in the context of ACTA are legitimate. The relevance of adopting measures at international level in that field can be questioned as long as there is no agreement within the EU member states over the harmonisation of enforcement measures in the digital environment and the types of criminal sanctions to be applied. In view of the above, it appears that the principles of necessity and proportionality of the data transfers under ACTA would be more easily met if the agreement was expressly limited to fighting the most serious IPR infringement offences, instead of allowing for bulk data transfers relating to any suspicions of IPR infringements. This will require defining precisely the scope of what constitutes the 'most serious IPR infringement offences' for which data transfers may occur.
The opinion follows this with detailed recommendations on how ACTA can facilitate sharing of information and ensure appropriate privacy safeguards.
Hustinx is direct and to the point on the issue of transparency:
The EDPS strongly encourages the European Commission to establish a public and transparent dialogue on ACTA, possibly by means of a public consultation, which would also help ensuring that the measures to be adopted are compliant with EU privacy and data protection law requirements.
Europe 'Will Not Accept' Three Strikes in Acta Treaty
The EC will not support disconnection of unlawful file-sharers in the Acta global copyright-enforcement treaty, the trade commissioner has said
The European Commission has pledged to make sure the Acta global treaty will not force countries to disconnect people for unlawfully downloading copyrighted music, movies and other material.
The assurance from the office of the trade commissioner, Karel De Gucht, is the strongest statement on Acta (the Anti-Counterfeiting Trade Agreement) to emerge from the new Commission since it took office earlier in February.
"We are not supporting and will not accept that an eventual Acta agreement creates an obligation to disconnect people from the internet because of illegal downloads," John Clancy, De Gucht's spokesman, told ZDNet UK on Thursday.
"The 'three-strike rule' or graduated response systems are not compulsory in Europe. Different EU countries have different approaches, and we want to keep this flexibility."
The Acta negotiations, which have been taking place since 2007, aim to create a new global intellectual-property enforcement regime that builds on the 1994 Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs). Much of Acta is taken up with trademark protection and counterfeit goods, but the draft text also has a section on online copyright protection, according to published summaries.
All the countries involved have agreed to keep draft texts confidential until the final treaty is agreed. In these secretive circumstances, many people have expressed fears that Acta will force signatory countries to cut the broadband connections of individuals who download copyrighted music, films and software.
These fears have been stoked by leaked negotiation documents, notably an October 2009 EU commentary on a US proposal. The commentary indicated that the US wanted an account-termination system to be put in place, with "civil remedies, as well as criminal penalties" for copyright infringement.
"The EU Commission maintains that any criminal action should be for infringements on a large, commercial scale only," Clancy said on Thursday.
"[Acta] has never been about pursuing infringements by an individual who has a couple of pirated songs on their music player. For several years, the debate has been about what is 'commercial scale'. EU legislation has left it to each country to define what a commercial scale is and this flexibility should be kept in Acta."
Clancy also addressed concerns raised on Monday by the EU privacy chief, Peter Hustinx. Hustinx complained in an official opinion that he had not been consulted on the contents of Acta. He also expressed fears that data protection and privacy safeguards were not being built into the treaty from an early stage, and called for a public debate on the treaty.
"When we say that any future Acta agreement must respect existing European and national legislation, we mean exactly that," Clancy said on Thursday. "There will be no watering-down of the existing rights and protection afforded to our citizens.
"Rumours pretend that Acta would ignore civil liberties and data protection — we are neither willing nor able to do that. The EU already has very stringent laws that defend individuals' civil liberties and personal data protection — they have to be respected; they cannot be overruled or ignored by this international treaty."
De Gucht and the other new commissioners took office early in February. He has not publicly weighed in on the issue of Acta since his confirmation hearing on 12 January, where he maintained that the treaty was concerned with "organised counterfeiting, in most cases by organised criminals" and stressed that "the idea is certainly not to limit the freedom of expression through the internet".
"I will abide by the Telecoms Package in relation to Acta; Acta should not be designed to be something of a key to close the internet," De Gucht said at the time.
The new commissioner for Europe's digital agenda, Neelie Kroes, also talked about Acta at her confirmation hearing. She said she had discussed the issue with De Gucht, adding that the Commission required other negotiating countries to "guarantee, number one, the same level of protection for intellectual-property rights that the EU currently applies with all the due guarantees provided by [European law]".
"The Commission is in line with the current level of harmonisation of [intellectual property rights] enforcement and there will be no harmonisation via the back door," Kroes said at the time. "We stick to the line; they have to move to our side, and that is it."
Countries involved in the Acta negotiations include the US, the EU, Switzerland, Japan, Australia, Canada, Jordan, Mexico, Morocco, New Zealand, South Korea, Singapore and the United Arab Emirates. The treaty is expected to be finalised by the end of 2010.
Mandelson Could Decide Length of Internet Suspensions for Filesharers
Minister – rather than Parliament – to determine timeframe for 'temporary suspension', leading to fears of indefinite bans
A government minister, not parliament, will decide on the maximum period for which people found guilty of illicit filesharing can have their accounts suspended if the Digital Economy bill becomes law.
Although the government insists that it would only implement "temporary suspension" of internet accounts of people deemed to have broken copyright law, it has not defined how long "temporary" is – and the definition does not appear in the bill now before Parliament.
Instead, the secretary of state at the Department of Business, Innovation and Skills (DBIS) will decide how long it should be, based on recommendations from Ofcom, although the regulator's suggestions are not binding.
Presently, the person responsible would be Lord Mandelson, who has been particularly vociferous about the need to take action against persistent illicit use of the net.
The only brake on the "temporary" suspension being of unlimited length would be the Human Rights Act – whose applicability to internet access is untested – and the definition offered by DBIS was that "temporary suspension can't effectively mean termination of an internet connection". But there is no definition in the bill of what marks the legal difference between "suspension" and "termination".
On Monday the Guardian noted that Downing Street had responded to a petition calling on it to reject plans to disconnect people found guilty of illicit file sharing by saying: "We will not terminate the accounts of infringers ... [but] ... We added account suspension to the list of possible technical measures which might be considered."
The Department of Business, Innovation and Skills (DBIS) on Tuesday said that "suspension" meant "temporary suspension".
But the Open Rights Group said that this was "semantics" and that the government had simply chosen a different form of words to mean the same thing.
Asked for clarification, a DBIS spokesperson said: "Any move to using technical measures on internet connections would only be made as a last resort and only if our initial measures to deal with unlawful filesharing did not have the desired effect.
"If government decides to use technical measures the Secretary of State would be required to consider an independent report from Ofcom on whether they should be imposed, and on the most effective and proportionate measures."
The secretary of state would then decide the upper limit for a "temporary" suspension – which the DBIS indicated would be at least a few days.
The implementation of the upper limit would then be laid before parliament in the form of an order constituting secondary legislation amending what would be the Digital Economy act.
However, an order cannot be amended by parliament; it can only be accepted or rejected. Any government with a working majority will be able to get an order passed – and so would be able to implement a "temporary" suspension of indeterminate length without any legislative review.
Ministers have repeatedly referred to "temporary suspension" rather than cutting off internet abusers, for example in a speech by Treasury secretary Stephen Timms on 21 January at the Oxford Media Convention.
TalkTalk, the ISP which has been most vocal in its opposition to the government plans over filesharing penalties, said on Tuesday: "The government's latest announcement on its copyright protection proposals is nothing more than semantics.
"It is still the case that on the say-so of record labels and film studios people will have their internet connections suspended (ie disconnected). All that the Government seems to be saying is that permanent disconnection will be reserved for the very worst offenders. But they have been saying that since day one. There is no change.
"This is simply spin which masks the real issue. The detection system will implicate innocent people whose connections have been hacked into. They will still be deemed 'guilty' and then have to prove their innocence.
"The Digital Economy Bill will give rights holders the power to act as a judge and jury, allowing them to demand that ISPs disconnect their customers without having to prove their case in a court of law. TalkTalk is the only major ISP that has said it will simply refuse to do this and will fight its case in every court in the land and in Europe if it has to.
"The proposed copyright protection measures are utterly futile. Determined filesharers will find other, undetectable ways to access material, leaving innocent people to bear the brunt of this oppressive legislation."
Consumers 'Confused by Copyright'
Consumers are confused by copyright laws that mean it is still illegal to copy a CD onto their computer, a watchdog says.
Consumer Focus said that copyright law was outdated and millions of people were unaware they were breaking laws.
But a legal expert said there is no danger of individual consumers being prosecuted for copying music and films for their own use.
Instead commercial operations are the focus of law enforcement.
The current state of the law means that it is illegal for somebody to copy a CD or DVD onto a computer or an iPod for their own use. This copying to a different device is known as format shifting.
In a poll of 2,026 people, some 73% said that they did not know what they could copy or record.
Jill Johnstone, of Consumer Focus, called for the law to be updated to take the advance of technology into account.
"The world has moved on and reform of copyright law is inevitable, but it is not going to update itself," she said.
However, IT lawyer Nick Lockett, of DL Legal, said that nobody was being prosecuted for the technical breach in the law. Those who set up commercial operations were more at risk of prosecution, he added.
He said a similar issue arose when video recorders allowed people to record a television show and watch it later in the day - which at the time was illegal.
An amendment to copyright laws only came after video recorders had been on the market for some time.
One argument against allowing people to shift their music or films onto a different format was that artists could claim these works have only a limited lifespan, and so people should pay again for the work in a different format.
'No quick fix'
Separate proposals to disconnect so-called peer-to-peer file-sharers have caused concern among internet campaigners.
Peer-to-peer file sharing is when people share music or films even though only one of them has bought the original.
A spokesman for the Intellectual Property Office said that a short-term fix on copyright issues was not appropriate.
"We would welcome EU wide action to develop a copyright system that would bring real benefit to consumers. However, there would need to be fair compensation to creators and rights holders for any new exceptions to copyright," he said.
"While many European countries do this through imposing a levy on the price of electronic goods, we do not wish to push up the price of computers and MP3 players for cash-strapped consumers.
"The government has already consulted on a very narrow exception to copyright for format-shifting. The response to that consultation suggests that a format shifting exception is insufficient to meet either consumer or business needs in the digital age."
Lawyer: Joel Tenenbaum Only Caused $21 in Damages by Sharing Music
James "Dela" Delahunty
Charles Nesson, William F. Weld Professor of Law at Harvard Law School, who defends Joel Tenenbaum in his dispute with record labels, said that Joel has caused only $21 worth of damages through his activities. Tenenbaum was ordered to pay $675,000 in damages to record companies for downloading and sharing 30 songs using the Kazaa software. Nesson has described the damages as "monstrous and shocking."
"Had he purchased the 30 songs on iTunes, he would have paid 99 cents apiece, of which Apple would have passed on 70 cents to the record companies," Nesson argues. "Assuming, contrary to fact, that the record companies have zero costs so that every cent returned to them is profit, the total return would have been $21.00."
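Nesson's figure is straightforward arithmetic; a quick sketch of the calculation he describes, using the prices and revenue split quoted in his argument:

```python
# Nesson's actual-damages estimate, using the numbers quoted above.
songs = 30
itunes_price = 0.99   # retail price per track on iTunes
label_share = 0.70    # amount Apple passes on to the record companies per track

retail_total = songs * itunes_price  # what Tenenbaum would have paid
label_revenue = songs * label_share  # what the labels would have received

print(f"Retail total: ${retail_total:.2f}")    # $29.70
print(f"Label revenue: ${label_revenue:.2f}")  # $21.00
```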
Record companies say that statutory damages are a fair way to deal with P2P file sharing, since nobody really knows how many times a user downloaded any of the 30 tracks from Tenenbaum, or from most P2P users. Nesson believes that the actual loss of revenue caused by Tenenbaum's actions should instead be the amount of money he would have paid for the songs had he opted to purchase them legally.
"Not a single person who downloaded these songs using Kazaa would have been impeded from obtaining them had Tenenbaum blocked access to his share folder. Tenenbaum was not a seeder of any of these songs... Once the initial seeds had proliferated, the addition of one more copy to the unlimited, easily-accessible supply could have had no economic consequence whatsoever. Plaintiffs would not have realized a single additional sale had Tenenbaum blocked access to his share folder," Nesson wrote in his final arguments on the issue of damages.
According to Nesson, statutory damages ought to bear some relation to actual damages, and he cites the reduction of damages by a judge in the Jammie Thomas-Rasset case (which is set to go to its third trial). "In 2008, one study reported that the average British teenager had 800 illegal tracks on his iPod. If $22,500 per infringement were constitutional, this would mean the average teenager is exposed to an $18 million verdict against him, clearly an absurd, arbitrary, and unconstitutional result," Nesson argues.
"For additional absurdity, imagine further that the industry actually got judgments of $18 million in damages from roughly 30,000 teenagers, which is approximately the number of lawsuits they filed against consumers until the end of 2008. That would mean they had outstanding judgments for $540 billion, or more than the total revenue the recording industry can expect to earn in about 50 years at its current size of $11 billion per year."
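The aggregate numbers in Nesson's hypothetical follow from the same kind of multiplication; a short sketch checking them against the figures he cites:

```python
# Nesson's reductio: scale the per-infringement award up to a population.
per_track_award = 22_500            # statutory award per song in Tenenbaum's case
tracks_per_teen = 800               # illegal tracks per average British teenager (2008 study)
lawsuits = 30_000                   # approximate RIAA lawsuits filed through end of 2008
industry_revenue = 11_000_000_000   # recording industry revenue per year, in dollars

per_teen_verdict = per_track_award * tracks_per_teen   # $18,000,000
total_judgments = per_teen_verdict * lawsuits          # $540,000,000,000
years_of_revenue = total_judgments / industry_revenue  # roughly 49 years

print(per_teen_verdict, total_judgments, round(years_of_revenue))
```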
Pirates Buccaneer 10 Million Games
According to My Gaming, the ESA reports that in December 2009 alone software pirates illegally downloaded 9.78 million games.
The ESA teamed up with the International Intellectual Property Alliance to track the illegal downloading of 200 popular games across the most popular torrent-sharing platforms (platforms that allow peer-to-peer file sharing) in use.
The USA was the biggest culprit, but countries like Italy, Mexico and Brazil came under fire for not providing an adequate response to the problem.
The other major culprits include Italy at 20.3% of total pirate traffic, Spain with 12.5%, France with 7.5% and China with 5.6%.
Former Teen Cheerleader Dinged $27,750 for File Sharing 37 Songs
Whitney Harper must pay the RIAA $27,750 for file sharing that began when she was 14
A federal appeals court is ordering a university student to pay the Recording Industry Association of America $27,750 — $750 a track — for file sharing 37 songs when she was a high school cheerleader.
The decision Thursday by the 5th U.S. Circuit Court of Appeals reverses a Texas federal judge who had ordered defendant Whitney Harper to pay $7,400, or $200 per song. The lower court had granted her an “innocent infringer’s” exemption to the Copyright Act’s minimum of $750 per track because she said she didn’t know she was violating copyrights and thought file sharing was akin to internet radio streaming.
The appeals court, however, said the woman was not eligible for such a defense — even if it was true she was between 14 and 16 years old when the infringing activity occurred on Limewire. The reason, the court concluded, is that the Copyright Act precludes such a defense if the legitimate CDs of the music in question provide copyright notices.
“Harper cannot rely on her purported legal naivety [sic] to defeat the … bar to her innocent infringer defense,” the New Orleans-based appeals court ruled unanimously, 3-0.
Harper, now 22 and a Texas Tech senior, said in a 2008 interview that she didn’t know what she did was wrong when she file shared Eminem, the Police, Mariah Carey and others as a teen.
“I knew I was listening to music. I didn’t have an understanding of file sharing,” she said.
Scott Mackenzie, the woman’s attorney, said Friday that “She’s going to graduate with a federal judgment against her.” The RIAA, which has sued thousands of people for infringement, labeled Harper as “vexatious” when she refused to settle the case.
Harper’s case moved up the judicial ladder without a trial. Mackenzie said he was mulling whether to appeal to the U.S. Supreme Court.
Only two RIAA cases against individuals have gone to trial, both of which earned the RIAA whopping verdicts.
Most of the thousands of RIAA file sharing cases have settled out of court for a few thousand dollars. The RIAA is winding down its 6-year-old litigation campaign targeting individual file sharers and instead is working with internet service providers to adopt rules that could cut off or hinder internet access to copyright scofflaws.
The first RIAA case to go to trial against an individual concerned Jammie Thomas. A Minnesota jury ordered the woman to pay $1.92 million for file sharing 24 songs. The judge in the case reduced the award to $54,000 — $2,250 a track.
The second case concerns Joel Tenenbaum, a Boston University grad student who a jury ordered to pay $675,000 for file sharing 30 tracks last year. Tenenbaum has asked the judge in the case to lower the award. A decision is pending.
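The awards in these cases reduce to very different per-track figures; a small sketch comparing them, using the totals reported above:

```python
# Per-track damages implied by the three awards reported in this article.
cases = {
    "Harper (5th Circuit)": (27_750, 37),
    "Thomas (reduced by judge)": (54_000, 24),
    "Tenenbaum (jury verdict)": (675_000, 30),
}
for name, (award, tracks) in cases.items():
    print(f"{name}: ${award / tracks:,.0f} per track")
```

The spread, from $750 to $22,500 per track, is the range at issue in the constitutional arguments over statutory damages.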
Senators Get Involved in Combating Online File-Sharing Dangers
Sens. Amy Klobuchar (D-Minn) and John Thune (R-SD) introduced legislation Wednesday to inform Internet users of the privacy and security risks associated with file-sharing software programs.
Their bill would require software developers to clearly inform users when their files are made available to other users over the Internet. Such programs, known as peer-to-peer software, are most commonly used to download music and movies and make up the largest portion of Internet traffic. Popular examples include BitTorrent and LimeWire.
But they can also lead to the inadvertent sharing of sensitive documents, the Federal Trade Commission found this week. The agency discovered widespread data breaches at 100 companies, where personal information of employees and customers, from driver's license numbers to Social Security numbers, was accidentally exposed while other files were being shared.
Klobuchar says families run the “risk of unintentionally sharing all of their private files like tax returns, legal documents, medical records, and home movies when they are connected to peer-to-peer networks.”
“This bill will let people know—in a way that they can understand—that their personal files are being shared with complete strangers,” she added.
The bill would require file-sharing software to display a pop-up box alerting Internet users when they encounter such programs. The bill would also let consumers and employers block or disable file-sharing programs. Similar legislation passed the House in December.
FTC Warns 100 Companies About P2P Data Leaks
The U.S. Federal Trade Commission announced this week that it has notified nearly 100 organizations that sensitive data about their customers and employees has been exposed on peer-to-peer file-sharing networks.
The agency was able to find sensitive information such as health data, drivers’ license numbers, Social Security numbers and financial records -- all of which could lead to identity theft -- available online to any user of certain peer-to-peer file-sharing networks. It told recipients of the notifications (a sample of which can be read here) to scan their corporate networks for unauthorized use of peer-to-peer software that could be causing these leaks of data about their customers or employees.
“Peer-to-peer technology can be used in many ways, such as to play games, make online telephone calls, and, through P2P file-sharing software, share music, video, and documents,” said the FTC. “But when P2P file-sharing software is not configured properly, files not intended for sharing may be accessible to anyone on the P2P network.” Examples of peer-to-peer networks are BearShare, LimeWire, KaZaa, eMule, Vuze, uTorrent and BitTorrent.
Among the notice recipients were public and private organizations including schools, governments, small businesses and large public corporations. The FTC also said that it has opened non-public investigations into companies in addition to these notice recipients that have customer or employee data openly available on peer-to-peer networks. It has also launched an educational campaign to help businesses manage the security risks brought about by the use of these file-sharing networks.
“Companies should take a hard look at their systems to ensure that there are no unauthorized P2P file-sharing programs and that authorized programs are properly configured and secure,” said FTC chairman Jon Leibowitz in a statement. “Just as important, companies that distribute P2P programs, for their part, should ensure that their software design does not contribute to inadvertent file sharing.” The agency also advised that organizations review the practices of their business partners and service providers that could have access to sensitive customer or employee data.
The FTC also recommended that these companies identify the customers and employees whose data has been available on file-sharing networks and consider notifying them of the breach. While there is no federal law requiring organizations to notify individuals when a breach involving unauthorized access to their information has occurred, the majority of U.S. states have already enacted their own data-breach notification laws, some of which carry criminal penalties for violators.
Failure to protect sensitive or personally identifiable information could violate the Gramm-Leach-Bliley Act as well as Section 5 of the FTC Act, officials said. However, the FTC stressed that just because it sent a notice to a company regarding data found on peer-to-peer networks doesn’t mean that organization has violated any FTC laws.
US Government Consults Public On Illegal File-Sharing
The PRO-IP Act is a United States law that aims to combat copyright infringement by increasing civil and criminal penalties for offenders. Copyright czar Victoria Espinel is now seeking comments from the public on piracy’s purportedly disastrous effect on the economy and on health and safety, as well as on proposed punishments and enforcement.
The Prioritizing Resources and Organization for Intellectual Property (PRO-IP) Act was one of the last pieces of legislation signed into law by President Bush back in 2008. The purpose of the act is to toughen existing anti-piracy measures.
Among other things the act calls for harsher punishments, the creation of a dedicated FBI anti-piracy unit and a copyright czar who reports directly to the White House. Last year President Obama appointed Victoria Espinel as the new copyright czar and she is now going full steam ahead with the new anti-piracy plans.
For these new plans Espinel is now looking for comments and input from the United States public. Although this might come across as an open and transparent process, the czar already seems to have made up her mind, as indicated by the leading nature of the questions.
Yesterday a request for written submissions from the public went out and the copyright czar wants answers to two basic questions, answers that may or may not be used for the development of the new anti-piracy plans. Let’s take a look at what the Government is asking.
In the request we read that the first question the public should respond to is “regarding the costs to the U.S. economy resulting from intellectual property violations, and the threats to public health and safety created by infringement.”
The second part deals with “detailed recommendations from the public regarding the objectives and content of the Joint Strategic Plan and other specific recommendations for improving the Government’s intellectual property enforcement efforts.”
To summarize, the copyright czar wants the public to come up with examples and ideas detailing how piracy affects society and how it should be combated. Unfortunately the request seems to indicate that it is already concluded that piracy has a negative impact and that tougher measures are needed.
It is not too late of course to prove the opposite and voice our concerns. Let’s elaborate a little on the two questions.
The first question is an easy one. Although piracy might hurt some parts of the entertainment industry, there is no objective and conclusive report that proves how it negatively affects the entire industry, let alone the United States economy as a whole.
One of the most authoritative reports on the economic and cultural consequences of file-sharing on the music, movie and games industries was published last year. The report, which was commissioned by the Dutch government, estimated that file-sharing has a positive effect on the Dutch economy. While it was recognized that the entertainment industry suffers some losses, these do not outweigh the positive effects of file-sharing.
Other academic publications mainly show that music piracy has no effect, or a positive one, on actual sales. The more people download through illegal channels, the more they tend to pay for music. This indicates that music fans do want to pay for music but that they download in addition, which could be due to the lack of unlimited download services.
The second question posed by the czar deals with the enforcement side of copyright infringement. One of the main questions here is how to deter people from downloading files illegally.
Again we’d like to start by pointing to the Dutch report mentioned earlier. The report concluded that measures to combat piracy should not be implemented before the entertainment industries have come up with sufficient legal online alternatives. This suggests that the entertainment industries are in part causing piracy by failing to offer decent, competitive, DRM-free products.
Furthermore, it is very doubtful that harsher punishments and stricter enforcement will have any effect. Last year the RIAA won two major lawsuits against individual file-sharers and this hasn’t changed the attitude or behavior of the average file-sharer at all. If anything, tougher enforcement will drive piracy underground, motivating the public to hide their identities online.
The bottom line is that the enforcement question is irrelevant. Technology will always stay ahead of any new type of legislation. The new three-strikes law in France for example can be easily circumvented and the same will be true for other measures. Much more can be done by focusing on the core of the problem, that is, taking away the incentive to download illegally.
The issues we have briefly touched on here are just the tip of the iceberg, and we assume that our readers can easily list many more. If so, please take this opportunity to have your voices heard. The US Pirate Party, which alerted us to this public consultation, has a mailing form you can use, but regular email works fine too. For those who plan to comment, we advise including as many credible references as possible.
F.C.C. Takes a Close Look at the Unwired
Brian Stelter and Jenna Wortham
For many Americans, having high-speed access to the Internet at home is as vital as electricity, heat and water. And yet about one-third of the population, 93 million people, have elected not to connect.
A comprehensive survey by the Federal Communications Commission found several barriers to entry, with broadband prices looming largest. The commission will release the findings on Tuesday and employ them as it submits a national broadband plan to Congress next month.
Of the 93 million persons without broadband identified by the study, about 80 million are adults. Small numbers of them access the Internet by dial-up connections, or outside the home at places like offices or libraries, but most never log on anywhere. In a world of digital information, these people are “at a distinct disadvantage,” said John Horrigan, who oversaw the survey for the F.C.C.
Julius Genachowski, the chairman of the F.C.C., is promoting faster and more pervasive broadband infrastructure as a tenet of economic growth and democracy.
The study, conducted last fall, interviewed 5,005 residents by telephone. It indicates that the gap in access is no longer between slower dial-up and faster broadband; the overwhelming majority of people who have Internet access have broadband.
“Overall Internet penetration has been steady in the mid-70 to upper 70 percent range over the last five years,” Mr. Horrigan said in an interview on Monday. “Now we’re at a point where, if you want broadband adoption to go up by any significant measure, you really have to start to eat into the segment of non-Internet-users.”
Those nonusers are disproportionately older and more likely to live in rural areas. Those with household incomes of less than $50,000 are “much less likely” to have broadband access, according to the F.C.C. report.
Asked about the reasons for not having broadband at home, almost half of respondents cited a prohibitive cost, and almost as many said they were uncomfortable using a computer. Forty-five percent answered “yes” to the statement, “I am worried about all the bad things that can happen if I use the Internet.” Others said they viewed the Internet as a waste of time.
Respondents were able to give multiple answers, and most did. Consequently, “policy solutions that provide comprehensive aid to people are most likely to have the most payoff,” Mr. Horrigan said.
Twelve percent of those surveyed who had not adopted broadband said that they could not connect to broadband where they lived. Because this figure is self-reported by the residents, it may not be entirely accurate.
The F.C.C. was mandated by Congress to produce a detailed plan with specific recommendations to hasten the national adoption of broadband in the United States. The plan is expected to be unveiled by the F.C.C. on March 17. It will recommend, among other elements, an expansion of broadband adoption from the current 65 percent to more than 90 percent, Mr. Genachowski said in a blog post on an F.C.C. Web site last week.
NGO Networks in Haiti Cause Problems for Local ISPs
Temporary networks are interfering with local networks and taking away potential business
While the communications networks that aid groups set up quickly following the earthquake in Haiti were surely critical to rescue efforts, the new networks have had some negative effects on the local ISP community.
Now, more than a month after the earthquake devastated the island nation, local ISPs (Internet service providers) in Haiti are starting to grumble about being left out of business opportunities and about how some of the temporary equipment -- using spectrum without proper authorization -- is interfering with their own expensive networks, causing a degradation of their services.
The aid organizations could better help Haiti in the long term by hiring the local companies, one Haitian with close ties to the ISP community said. "In order to help rebuild the economy, it would be better if they purchased from the local providers," said Stéphane Bruno, a Haitian IT consultant who works closely with the ISPs.
The local ISPs are struggling because so many of their business and consumer customers are simply out of business or can no longer pay for services. The ISPs would welcome the business of the NGOs, Bruno said.
The local companies may be disappointed to learn that late last week the NGO community asked the provider of a temporary network that many of them use to keep it running for another 30 days.
Inveneo, the company that built a temporary network using satellites and Wi-Fi that is being used by a group of NGOs in Haiti, defends its work and says it has done its best to be sensitive to the local ISPs. NetHope, a consortium of NGOs, approached Inveneo in the early days after the quake and asked it to build a network for the relief groups, said Mark Summer, chief innovation officer at Inveneo. "We felt at that point it made sense from a relief perspective to respond really fast," he said. It was critical for the NGOs to have reliable Internet connections so they could coordinate among relief workers and access resources like Google Maps, he said.
Immediately after the earthquake the local ISPs were indeed overwhelmed, although because of the way their networks are built, they may have been able to respond relatively quickly to the NGOs' needs.
Despite the many early reports of a communications blackout in Haiti, the core Internet backbone in the country survived the quake.
Bruno had worked with a team of people, including Steve Huter, a project manager at the Network Startup Resource Center, to build Haiti's Internet Exchange Point. That project, which allows ISPs to route local traffic locally instead of sending it far away first, was just completed in May 2009. The NSRC is an organization that helps developing countries build international networking infrastructure and was initially funded by the National Science Foundation. The NSF as well as corporations continue to support the group, which works out of the University of Oregon's computing center.
From his base in Oregon, just hours after the quake, Huter remotely checked on the servers in the Internet Exchange Point in Haiti. "I was able to determine that none of them had lost connectivity or service in the earthquake," Huter said. "Those machines were operational."
What wasn't operational, however, was the network of wireless antennas that ISPs in Haiti typically use to distribute Internet access across Port-au-Prince. But repairing those wireless links to restore service is far easier than repairing wired lines to each individual user would be.
Still, the local ISPs were quite busy trying to assess damage to the network, and they were short-staffed because some workers feared aftershocks and were reluctant to enter telecom facilities, Bruno said. In addition, with electricity down in most of the city, they were left with the issue of powering the wireless base stations. That meant they didn't have the capacity to reach out to the NGOs, Bruno said.
Now, the ISPs may have to wait even longer to begin to serve these new customers. Late last week, NetHope and Inveneo began talking to the NGOs about transitioning off the temporary network. "The feedback given was that they'd like to keep this network operational for another 30 days. They did not feel they were quite ready to take on that logistical effort of coordinating [a transition to a new network] when they are still involved with day-to-day food distribution, shelter work and other things," Summer said.
That does not mean, however, that the NGOs will all be using the temporary network for another 30 days, he said. Inveneo and NetHope are encouraging the NGOs to begin talking to and negotiating with the local ISPs now so that they can transition as soon as possible. "The NGOs are saying 'give us time to do this properly,'" Summer said.
"So we said yes, if you want that, fine, but we need to now say we can't do this for free because we can't expect the local companies to keep donating their services," Summer said. Two local ISPs, AccessHaiti and MultiLink, donated the backhaul bandwidth for the temporary network. But Inveneo wants to begin to pay a local provider for that service. He planned to arrive in Haiti on Monday to begin talks with all of the local ISPs to come to an agreement so that Inveneo can pay one or more of them for backhaul services in March.
The local ISPs are also now trying to devise ways to ensure that the NGOs know they are ready for business. A Haitian ISP association is preparing a Web page that will outline the services that they are prepared to offer with information about how NGOs and other organizations can contact the companies, Bruno said. "So they can determine if the area is covered by a local provider before making a different decision," he said. The group hopes to issue a press release soon about the resource.
Many NGOs do know that the local ISPs would like their business.
"The local operators are indicating clearly that they would prefer these NGOs bought capacity from them or subcontracted," said Cosmas Zavazava, the chief of the emergency telecommunications group of the International Telecommunication Union's telecommunications development bureau. Speaking from Haiti, where he is working on communications issues, he said that some NGOs have already started employing local ISPs and that others might still.
The NGOs, however, have caused another set of problems as well. Many began using their wireless and satellite equipment without getting approval to use the required frequencies. That's in part because the Haitian regulatory authority's office had collapsed. "Their ability to license people in 48 hours or so [after the quake] was nonexistent," said Zavazava. "So people came in and started switching on their equipment and operating."
That caused interference with local ISPs who are licensed to use the spectrum, thus degrading the service that they are offering to customers, Zavazava and Bruno said. It continues to be a problem.
"This is causing discomfort on the part of local operators who have invested quite a lot of money in getting licenses and buying the equipment they are using," Zavazava said.
Haiti's regulatory authority has issued a statement asking all visitors to indicate which frequencies they are using in an attempt to harmonize operations, but many have not stepped forward, Zavazava said.
Some of those may be organizations that are beginning to wrap up their operations in Haiti. "They may not really have the motivation to approach the regulatory authority," Zavazava said.
He said the situation is not uncommon in areas where NGOs are working to help after a disaster hits but that it could be avoided with disaster preparedness exercises.
Inveneo said it has exclusively used equipment that operates in unlicensed bands so as not to interfere with local licensed operations. But Summer has read announcements about other groups that are using WiMax to deliver temporary services and those networks may be interfering with the locals, he said.
Bruno hopes that the NGOs will start using the local ISPs soon. "If you want everything to go back to normal, the best thing to do is use the services of the local providers," he said.
Broadband Carriers Speak Out Against FCC Regulation
The nation's largest Internet service providers on Monday warned the Federal Communications Commission against any possible move that would put them more clearly under the agency's jurisdiction, saying that doing so could deter their investments in broadband networks.
The comments from AT&T and Verizon Communications come as the FCC awaits a pivotal decision from a federal appeals court that could undercut the agency's authority over those companies' Internet businesses. A ruling against the agency would likely derail FCC Chairman Julius Genachowski's signature policy objectives, including open-Internet rules and the reform of an $8 billion rural telephone fund to provide broadband access in underserved parts of the country.
Public Knowledge, a group that advocates digital rights, has urged the FCC to classify those Internet service providers alongside telephone services, which are firmly under the agency's purview. Some analysts say the agency would have to reclassify those services in order to remain relevant as the Web becomes a primary vehicle for communication and entertainment.
In a 14-page letter to the agency, AT&T and Verizon were joined by trade groups CTIA and the National Cable & Telecommunications Association. They argue that such a move would be "extremist," entailing too many onerous rules for the fast-moving broadband industry.
"The proposed regulatory about-face would be untenable as a legal matter, and, at a minimum, would plunge the industry into years of litigation and regulatory chaos," the companies wrote.
An FCC spokeswoman declined to comment. Genachowski's senior adviser, Colin Crowell, has said the FCC can still win its court challenge. The case stems from Comcast's appeal of a 2007 ruling, which found that it violated open-access guidelines that prohibited network providers from slowing or blocking Web sites.
The agency has argued that its authority over ISPs derives from its supervision of other communications services, such as cable television. In an oral hearing early last month, three judges grilled an FCC attorney over whether that "ancillary authority" over broadband Internet services was enough to rule against Comcast.
Crowell told The Washington Post last month that if it didn't win its case, the agency would have to consider other options to clarify its authority, including a reclassification of Internet services. Advocates of such a change urge the agency to put ISPs under what is known as Title II common carrier services, which transport people or goods under regulatory supervision. Comcast, which is seeking the FCC's approval for its merger with NBC Universal, didn't sign the letter.
"We'll defer comment on reclassification until the D.C. Circuit decides our challenge to the actions of the previous FCC on due-process grounds," Comcast spokeswoman Sena Fitzmaurice said.
Broadband carriers said placing their services under Title II would be too restrictive.
"The commission should keep this Pandora's Box of Title II classification nailed shut," the companies wrote.
Consumer advocates argue the opposite. They say that previous FCC moves to ease regulation of broadband providers are now undermining the agency's attempts to address problems in the Internet age.
"The same lobbyists who purport to want 'Broadband for America' are now telling the FCC that the agency should not engage in rulemaking that would achieve it," said Ben Scott, policy director for Free Press, a public-interest group. "The commission must have the authority to promote universal access to affordable broadband."
Viewers Get a Channel of Their Own on Cablevision
New Yorkers who want to watch online videos or family photos on their TVs will soon be able to do so on their own personal TV channel if they are Cablevision Systems Corp subscribers.
It is the latest attempt by the U.S. cable TV industry to remain relevant as a rising number of subscribers spend more time watching videos and viewing photos and chatting online.
Cablevision, which serves 3 million homes in the New York area, will start a trial service in June for customers who buy both video and Internet access from the company.
Cablevision said the technology, named PC to TV Media Relay, will allow customers to transfer anything available for display on their PCs for viewing on a dedicated TV channel.
Customers will need to download software to their computers to enable the service. It will not work with Apple computers.
Cablevision will pitch the service to customers as enabling online viewing with only the touch of a button.
Pricing has yet to be decided, the company said. It is still working on software for Apple users and plans to extend the service to handheld devices connected to in-home wireless networks.
Cable companies are making various moves to avoid being replaced by video sites such as Hulu or Google Inc's YouTube.
Comcast Corp launched a service called On Demand Online last year that offers cable programing on its own website to subscribers to both its digital cable and Internet service.
Bernstein Research analyst Craig Moffett argued recently that cable investors should place more value on the companies' broadband networks than on the video services cable provides.
The cable sector will change over the next five to 10 years, he predicted.
"Linear video will, no doubt, continue to exist, and even to thrive, but broadband will by then almost inarguably be the core business for the cable companies," Moffett said.
(Reporting by Yinka Adegoke, editing by Leslie Gevirtz)
AT&T Roars Back in 3G Wireless Performance Test
After generating disappointing results in tests last spring, AT&T's 3G network is now the top performer in 13-city tests, with download speeds 67 percent faster than its competitors'.
AT&T says it has worked hard to improve its much-maligned 3G network over the last eight months -- erecting hundreds of new cell towers, using better-performing wireless spectrum, and souping up its cell sites across the country -- and the results of our latest 13-city 3G network performance tests suggest that the network has indeed undergone a drastic makeover.
After registering the lowest average download speeds in our 3G performance tests last spring, AT&T's network turned in download speeds that were 84% better than the numbers from eight months ago; in our latest tests, AT&T's download speeds were 67% faster on average than those of the other three largest U.S. wireless providers -- Sprint, T-Mobile, and Verizon.
In tests last spring, AT&T posted an average download speed of 818 kbps (kilobits per second) across 13 cities. In our tests conducted in December 2009 and January 2010, AT&T's average download speed increased to 1410 kbps.
AT&T's download speeds in New York City were three times faster in our latest tests than in our tests last spring; in San Francisco, AT&T's download speeds were 40% faster.
The AT&T network's reliability improved dramatically, too: Last spring, PC World testers obtained a usable broadband connection with AT&T only 68% of the time. In our latest tests, testers connected to AT&T successfully in 94% of their attempts.
Verizon Wireless, which turned in the best all-around performance in last spring's 3G network testing, and Sprint, which finished a close second, both continue to perform well, according to our latest test results. Our tests found that Sprint's network delivered download speeds nearly identical to those we measured eight months ago in the 13 test cities; Verizon's download speeds decreased by 8% overall.
In the past year, Sprint and Verizon -- like AT&T -- have seen a marked increase in the number of 3G smartphones that rely on their networks. Our speed results suggest that Sprint is upgrading its network capacity fast enough to meet the demand, while Verizon may be having trouble keeping up. Nevertheless, both networks' reliability (the likelihood that a user can connect to the Internet at a reasonable speed) improved in the most recent tests over how they fared last spring.
We tested the T-Mobile 3G network for the first time in December and January, and found that it supported download and upload speeds that were competitive with Sprint's and Verizon's in most of our test cities. In one city -- New York -- T-Mobile's network even delivered download speeds that are usually associated with 4G networks.
Before getting into the details of our test results, a few words about the testing and the data. During December and January, PC World and testing partner Novarum Inc. tested the download speeds, upload speeds, and network dependability of the AT&T, Sprint, T-Mobile, and Verizon 3G networks from 20 locations in each of 13 U.S. cities. Altogether we ran more than 51,000 separate tests covering 850 square miles of wireless cell coverage servicing 7 million wireless subscribers (see "How We Do the Testing").
At each testing location, we connected to the 3G network via both laptops and smartphones. The laptop tests accurately measured the capacity and performance potential of a given network, while the smartphone tests approximated the real-world connection speeds users of these popular devices might experience, given the less-powerful processors and 3G radios that the devices contain.
Reading the charts
The charts list the cities in the leftmost column; moving rightward across the chart, you can see the speed averages and reliability scores in that city for each of four 3G wireless networks. Speeds are expressed in kilobits per second (kbps); the figure for reliability represents the percentage of the total number of 1-minute tests we conducted of a given carrier's service during which the service maintained an uninterrupted connection at a reasonable speed.
Because we couldn't test every city in the country, we chose 13 that are broadly representative of the rest: Baltimore, Boston, Chicago, Denver, New Orleans, New York City, Orlando, Phoenix, Portland, San Diego, San Francisco, San Jose, and Seattle. Because wireless signal quality depends to a large extent on variables such as network load, distance from the nearest cell tower, weather, and time of day, our results can't be used to predict specific future performance in a specific area. Rather, they illustrate the relative performance of 3G service in a given city on a given day. Each speed number possesses a margin of error of plus or minus 5%.
The empire strikes back: AT&T's dramatic 3G makeover
Our most recent tests showed that the connection speeds delivered by AT&T's network -- both downloads and uploads -- increased considerably in every one of our test cities, compared with the speeds it registered in identical tests we conducted last spring. In Baltimore, New York City, New Orleans, Portland, and Seattle, AT&T's average download speeds in our tests more than doubled. The network's 13-city average download speed was 1.4 mbps; that's as fast as many home broadband connections. In our tests, none of AT&T's three biggest competitors registered average download speeds of better than 1 mbps.
In our Baltimore, Boston, and New York tests, AT&T's HSPA network delivered burst speeds exceeding 4000 kbps -- a top speed that Sprint and Verizon can't match with their current 3G technology, CDMA EvDO Rev. A.
AT&T's upload speeds were one of the few bright spots in its test results last spring, and the network continued to deliver the fastest upload speeds of the Big Four networks in our latest tests. AT&T upload speeds increased by 58% and now average 773 kbps -- that's 330 kbps faster than the average upload speed we clocked for Verizon, the second-fastest network.
Testing the iPhone on AT&T
Our smartphone-based tests of the AT&T network told the same story as our laptop-based tests, though they also revealed the speed limitations of smartphones in general, especially when the devices are uploading data to the network. The AT&T and iPhone combo turned in the fastest average speeds -- downstream and upstream -- of the four carrier/smartphone combinations we tested, outperforming its rivals in more than three-fourths of the cities we sampled. AT&T connected the iPhone at an average download speed of 1259 kbps, and an average upload speed of 215 kbps, over the 13 testing cities. The iPhone clocked download speeds of at least 1000 kbps in more than 60% of our testing locations, with burst rates often exceeding 3000 kbps, and we managed to obtain a reliable connection in 91% of our AT&T/iPhone tests.
AT&T appears to have added considerable data service capacity during a year when its wireless subscriber base grew considerably, as did the amount of data service those subscribers use. During 2009, AT&T's total subscriber count swelled from 77 million to more than 85 million, with a growing proportion of those subscribers -- 40%, AT&T says -- now using smartphones. And of AT&T's 85 million subscribers, 10.3 million now connect to the network using an iPhone, which seems to invite users to perform bandwidth-intensive activities such as Web browsing and video streaming. "On the AT&T network, we're seeing advanced smartphones like the iPhone driving up to 10 times the amount of usage of other devices on average," says AT&T spokesperson Jenny Bridges.
At the time of our first 3G network test last spring, AT&T's premier device -- the iPhone -- had become both a blessing and a curse: The company's coup of becoming the exclusive service provider for the iPhone undoubtedly helped swell the customer base for its wireless services. But the iPhone also seriously challenged AT&T's data network resources, especially in iPhone-happy places like San Francisco and New York City.
Shortly after we revealed the results of our Spring 2009 tests, AT&T announced plans to increase the speed of its 3G service. To achieve this goal, AT&T said, it would upgrade its networks to the faster High Speed Packet Access (HSPA) 7.2 technology (thereby doubling the maximum speeds of upgraded cell sites), utilize better-performing portions of the wireless spectrum, increase backhaul capacity, and add new cell towers.
In a recent conference call with investors, AT&T head of operations John Stankey said that the company had finished upgrading its network to HSPA 7.2 technology far ahead of schedule. "We have already turned up the 7.2 software on our 3G cell sites nationwide," Stankey said on the call. He also pointed out that AT&T had added 1900 new cell sites and converted its network to the 850MHz spectrum band during 2009.
The combination of these improvements probably accounts for the large speed increases we saw in our recent tests. "It is clear that at this time AT&T has the highest-performing network with the highest user capacity, based on our sample," says Novarum CTO Ken Biba, who conducted the tests. "With the additional investment in HSPA 7.2 base stations [last year] and high-speed backhaul infrastructure, AT&T has room for growth in demand," Biba says. "However, demand will only accelerate with the iPad, e-readers, streaming video, and new mobile applications. Will AT&T have enough capacity with HSPA 7.2? Will the transition to LTE happen fast enough? These are all key questions for 2010."
Verizon: Signal fading slightly
We measured Verizon's 13-city average download speed at 877 kbps, down 8% from its average of 951 kbps in our tests last spring. Verizon's average download speed decreased in seven of our 13 testing cities compared to the figures we recorded last spring; and in five of those cities -- Chicago, New Orleans, Phoenix, San Jose, and Seattle -- Verizon's average dropped by 15% or more.
Verizon had the best-performing network in our tests last spring, with the fastest overall speeds and strong network reliability. But our recent test results suggest that Verizon may not be keeping up with demand in some markets.
Verizon is a bit later than its competitors to the game of supporting bandwidth-hungry smartphones on its network. Verizon says that only 15% of its postpaid customers (customers with service contracts) owned smartphones at the end of 2009, compared to AT&T's 40%.
But increasingly, Verizon subscribers are using smarter phones that demand more broadband data service. Verizon says that its data service revenues grew from $12 billion in 2008 to $16 billion in 2009. The company's data service business will continue to grow as phones like the Motorola Droid (which reached market in November 2009) proliferate and begin taking a toll on network capacity.
In our recent tests, Verizon's average upload speeds showed little change from our results last spring, averaging 434 kbps. Nevertheless, we saw significant changes for Verizon in some cities: Average upload speeds decreased by 21% in San Diego, but increased by 27% in Denver and by 16% in New York City.
Verizon's network reliability scores in our recent tests were a mixed bag: Verizon's scores fell by 12% in both Baltimore and San Diego, compared with its scores in those two cities in our tests last spring. Yet in Chicago, Orlando, Phoenix, Portland, and San Francisco, the company's reliability scores increased by more than 10%.
Verizon promises its wireless customers typical download speeds of between 600 kbps and 1.4 mbps--and in the vast majority of our tests, it delivered. Upload speeds were a different story, however. Verizon promises upload speeds of 500 to 800 kbps, yet in only one of our 13 cities (New Orleans) did we record an average upload speed of more than 500 kbps during our laptop-based tests.
Testing the Droid on Verizon
Our smartphone-based tests revealed some significant performance limitations of the Verizon network when we connected to it with a Motorola Droid.
In our winter tests involving 280 testing locations in 13 cities, the Droid rarely approached Verizon's promised upload speed of 500 kbps. Overall, the Droid delivered an average upload speed of just 116 kbps, the lowest average of any carrier/phone combo in our smartphone tests. And in numerous tests using the Droid, we recorded upload speeds of less than 75 kbps -- painfully slow if you're trying to send data of any size up through the network.
We also had trouble establishing a reliable connection between the Verizon network and the Droid during our tests. Verizon delivered an uninterrupted signal at reasonable speed in only 76% of our tests--far below the success rates of the 90+ percent that the other three carriers achieved.
Download speeds to the Droid, on the other hand, were quite good, at an average of 1075 kbps; that's not far from the upper end of the speed range that Verizon promised its customers, and ranks as the second-highest average download speed in our smartphone-based tests--behind only AT&T. The Droid connected at near- or above-1000 kbps speeds in every testing city but Phoenix, where it averaged just 696 kbps on the downlink.
Verizon says that PC World's assessment of its 3G performance doesn't tally with the results it sees in its own tests. Moreover, Verizon points out, speed isn't everything. As in its controversial "there's a map for that" commercials, Verizon likes to emphasize the breadth of its 3G coverage. "Consistency, coverage, and reliability -- the ability to make and keep connections, and perform the tasks they want to over our wireless network at 3G speeds in more places -- is what sets Verizon Wireless apart," says Verizon corporate communications director Thomas Pica in an e-mail message to PC World.
Verizon's 3G network does indeed have easily the greatest coverage area of any network (the company says that it covers more than 90% of the United States). So Verizon can still brag about that.
Sprint Continues Dependable Ways
During 2009, Sprint began to support millions of data-hungry Palm Pre and Android phones on its 3G network. Sprint says that 49% of the handsets it sold during the fourth quarter of 2009 were smartphones or other touchscreen devices, up from 41% from the quarter before. To accommodate the increased demand for wireless broadband that those devices bring, Sprint says that it spent $1.2 billion on its wireless network during 2009.
Overall, our research suggests that, in the areas we tested, those investments were just enough to enable the network to keep up with the increased demand.
Sprint's network reliability scores suggest that customers aren't having many problems getting on the network. Sprint ranked first in our reliability tests eight months ago, and it improved on that measure in our latest tests. Last spring we obtained a solid connection to the Sprint network in 90.5% of our tests; that figure increased to 94% in our most recent tests. The network scored perfect reliability marks in Baltimore, Portland, and San Diego, meaning that we enjoyed solid, uninterrupted connections at all 20 testing locations in each of those cities.
The Sprint network isn't as speedy as it is dependable, however. The network registered download speeds of 795 kbps on average across our 13 testing cities -- virtually unchanged from the 808 kbps it averaged in our tests last spring. Upload speeds also remained steady: Sprint uploads averaged 396 kbps in our winter tests, up slightly from the 371 kbps average we recorded last spring. These speeds are well within the ranges that Sprint promises its customers -- upload speeds of 350 to 500 kbps and download speeds of 600 to 1400 kbps.
Sprint's speed results suggest a tale of two kinds of cities -- ones where the company upgraded its network in the past eight months, and ones where mobile broadband demand is outpacing any increase in capacity. We saw speed increases of 20% or more in Baltimore, New Orleans, and San Diego; but in Denver, Orlando, and Seattle, average download speeds decreased by more than 20% in our tests. The net result: Sprint had the lowest average download speed across all our tests among the Big Four carriers.
One possible explanation, according to Novarum CTO Ken Biba, is that Sprint is expanding its service city by city, upgrading networks where mobile broadband demand is greatest. The cities where it hasn't yet upgraded its network are dragging down its 13-city average speed in our test results.
Sprint says that it has added about 11,000 cell sites to its 3G network since 2006, but it won't disclose how many of those sites debuted in the past year. "As customer demand grows, we have to continue to upgrade our network on a cell site by cell site basis," says Sprint networks vice president Bob Azzi. "I think we've been doing a good job of staying ahead of that growth."
Testing the HTC Hero on Sprint
The speeds we saw in our smartphone-based tests of the Sprint network (using an HTC Hero) seem to corroborate Azzi's claim: Though the capacity of Sprint's network may vary from city to city, the performance that smartphone users see on the ground is fairly consistent across all 13 of our testing cities. Download speeds for the Sprint/Hero combo consistently fell within the range of 700 kbps to 1000 kbps in most testing cities, yielding an overall download speed average of 851 kbps--significantly slower than the average connection speeds of 1000 kbps or better achieved by the AT&T/iPhone and Verizon/Droid combos. Sprint delivered download speeds in excess of 1000 kbps to the Hero in just 30% of our testing locations.
Upload speeds, while not impressive, were again consistent. We recorded upload speeds in the range of 100 kbps to 200 kbps--for an average of 145 kbps--across the Sprint network; those figures are in line with the uplink performance of the other phones in our study. As for reliability, Sprint established a solid connection with the Hero in 92% of our tests, the second-best reliability score in our smartphone-based tests.
Azzi says his company has seen "double-digit" growth in mobile broadband usage via devices like the Hero over the past year, but he welcomes the increasing demand. "We want people to use as much data as they want, to use it in any way they want to use it; and we want to make sure they can use any apps that they want to use," Azzi says. "My job is to stay just ahead of that."
T-Mobile: Playing with the big boys
In our tests, T-Mobile's 3G network delivered download speeds that matched Verizon's and Sprint's, and it clocked surprisingly fast speeds in New York City. Averaged across our 13 testing cities, the T-Mobile 3G network showed an average download speed of 868 kbps--very close to Verizon's average speed of 877 kbps--and delivered an average upload speed of 311 kbps. T-Mobile tells its subscribers that they can expect upload speeds in the "hundreds of kbps" and download speeds of up to 1 mbps.
T-Mobile clocked its fastest average download speeds in Chicago (1047 kbps), Phoenix (1201 kbps), Portland (1090 kbps), and New York City (1220 kbps). During one of our 1-minute speed tests in Manhattan, the T-Mobile network turned in an average download speed of 3 mbps, and registered burst speeds of up to 3.5 mbps. Speeds in the vicinity of 3 mbps are typically seen only in 4G networks.
T-Mobile's network didn't perform as well in other cities where we tested. The network reached its performance nadir in New Orleans, where we measured an average download speed of 570 kbps and an average upload speed of 181 kbps. Transfer speeds were so slow in tests conducted in the northern part of the city that the network was virtually unusable, according to Novarum's Biba, who performed the tests.
Slow upload speeds were a recurring theme of our test results for the T-Mobile 3G network. T-Mobile registered average upload speeds of less than 300 kbps in 8 of our 13 testing cities. In upload speeds T-Mobile ranked lowest among the Big Four in 11 of the 13 cities.
Testing the HTC G1 on T-Mobile
In our smartphone-based tests, T-Mobile's network connected with the HTC G1 reliably, but it didn't support especially fast connection speeds to the device. We successfully established a solid connection with the T-Mobile/G1 combo in 93% of our attempts--the best reliability score of the four carrier/smartphone combos in our tests.
But the T-Mobile network delivered an average download speed to the G1 of only 719 kbps in our 13 testing cities -- the slowest average in our smartphone test -- and it connected at speeds exceeding 1000 kbps at only 13% of our testing locations. Upload speeds were lackluster, too: The G1 posted an average upload speed of 134 kbps in our tests, the second lowest average in our smartphone study.
T-Mobile was the first carrier in the United States to support new Android phones like the G1, and it is the only 3G network available for the high-profile HTC/Google Nexus One. As Android phones continue to gain mainstream popularity, T-Mobile's network will have to support more and more wireless data use. T-Mobile says that monthly demand for its mobile broadband increased by 275% during 2009. At last report (October 2009), T-Mobile had 2.8 million 3G smartphones connected to its network.
Compared with AT&T and Verizon, T-Mobile's wireless business is small, but the reach of its 3G network grew rapidly in 2009. At the beginning of that year, about 100 million people lived in areas where T-Mobile provides 3G service; today, the network is available to more than 200 million people in 271 U.S. cities, the company says. In January, T-Mobile announced that, like AT&T, it had finished converting its 3G network to the faster HSPA 7.2 technology, as promised.
The 3G network may be growing fast, but T-Mobile's subscriber count seems to be increasing at a more leisurely pace. The company reported that it had 33.4 million wireless subscribers at the end of the third quarter of 2009, up marginally from the 32.8 million it reported at the end of 2008.
Upgrading the cell sites
When wireless carriers upgrade their networks in a particular market, what do they actually do? Usually the improvement isn't a matter of erecting more cell towers, a process that can be expensive and fraught with bureaucratic hassles with local government.
Often, wireless carriers focus on upgrading the software at their cell sites to increase speed and capacity. This is what AT&T did when it recently upgraded its cell sites from HSPA 3.6 technology to HSPA 7.2 technology.
The wireless carrier may also add capacity to an existing cell site by adding a new frequency band to the wireless spectrum already available for its subscribers to use in that cell zone.
When Sprint and Verizon add a new frequency band to their cell sites, the sites theoretically gain 9.3 mbps in download speed and 5.4 mbps in upload speed. (After an upgrade, the actual speed that a wireless device can achieve depends on its distance from the cell tower, the number of other wireless subscribers using the cell, and the complex wireless protocols baked into the device itself--which determine its maximum connection speed.)
The same procedure is used by AT&T and T-Mobile engineers when they add a new frequency to cell sites in their GSM networks. These networks add three GSM radios to transmit and receive on the newly allocated slice of wireless spectrum. For the newest variation of the GSM data standard -- HSPA 7.2 -- this upgrade adds 21.6 mbps of maximum download capacity and about 17 mbps of maximum upload capacity to the cell.
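The aggregate numbers above follow from simple per-radio arithmetic. As an illustrative sketch (the per-carrier uplink rate of 5.76 mbps is an assumption based on standard HSUPA figures, not stated in the article):

```python
# Back-of-the-envelope capacity math for the HSPA 7.2 upgrade described
# above: three new GSM radios per cell, each adding one HSPA carrier.
# The 5.76 mbps per-carrier uplink figure is an assumed standard HSUPA
# rate; the article gives only the aggregate "about 17 mbps".

RADIOS_PER_UPGRADE = 3
HSPA_DOWNLINK_MBPS = 7.2    # peak downlink per HSPA 7.2 carrier
HSPA_UPLINK_MBPS = 5.76     # assumed peak uplink per HSUPA carrier

added_download = RADIOS_PER_UPGRADE * HSPA_DOWNLINK_MBPS
added_upload = RADIOS_PER_UPGRADE * HSPA_UPLINK_MBPS

print(round(added_download, 1))  # 21.6 mbps, matching the figure in the text
print(round(added_upload, 2))    # 17.28 mbps -- "about 17 mbps" in the text
```

These are theoretical per-cell maxima; as noted above, real per-device speeds depend on distance to the tower, cell load, and the handset's own radio category.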
There is, however, one big difference in the way new bandwidth is added to CDMA and GSM towers, respectively. On Sprint and Verizon CDMA radios, voice and data services occupy separate frequency bands. Because the radio can communicate on only one band at a time, CDMA subscribers can't carry on a voice call while surfing the Web.
In AT&T's and T-Mobile's GSM networks, voice and data service run on the same frequency band, so the caller can talk on the phone and surf the Web simultaneously.
4G is coming fast
Consumers are getting used to the idea of a mobile, connected, on-demand world. As consumer expectations of wireless networks rise, carriers are in a position to make a lot of money at a high profit margin. All four of the Big Four carriers are feeling the pressure to increase the speed and capacity of their wireless networks to accommodate the bold new world that consumers want.
One key component of the emerging mix is a new generation of wireless network called 4G, and mobile operators are scrambling to start building such networks.
For Sprint, 4G means providing WiMax service via the Clearwire network, which now reaches 27 U.S. cities and is spreading to new cities rapidly. This development puts Sprint well ahead of the other U.S. mobile operators in the move toward 4G. Sprint plans to release a number of dual-mode 3G/4G devices, including phones (the first of them this year), that will connect at superfast 4G speed when possible, and will fall back to a 3G connection when not.
For Verizon Wireless, 4G means adding an overlay network using LTE (Long Term Evolution) technology. New hybrid modems and phones will use the 4G LTE network for high-performance applications, but will continue using the existing 3G CDMA network for voice and in situations where 4G is unavailable. Verizon says that it will launch its new LTE network in 25 to 30 markets in 2010, and it hopes to have 4G coverage for almost all of its current nationwide 3G footprint by the end of 2013.
T-Mobile is moving toward 4G by converting its network from HSPA (High Speed Packet Access) to HSPA+, which T-Mobile says can pump out maximum download speeds of 21 mbps. T-Mobile has deployed HSPA+ in Philadelphia, where some users have reported obtaining download speeds of up to 19 mbps. The carrier says that it will continue to build out its HSPA+ network this year, achieving a "broad national deployment" during 2010.
AT&T says that it will use its new HSPA 7.2 technology to bridge its current GSM/UMTS mobile platform to its LTE future. AT&T plans to commence field trials of its LTE technology later this year, and then begin switching on commercially available LTE networks in 2011. If the speed of AT&T's recent upgrades is any guide, however, the company's LTE launch may happen sooner than expected.
How we do the testing
For our tests, we chose cities that broadly represent the population density, socioeconomic statuses, physical terrain, foliage, and building construction found in medium to large U.S. cities. Our testing cities include Baltimore, Boston, Chicago, Denver, New Orleans, New York City, Orlando, Phoenix, Portland, San Diego, San Francisco, San Jose, and Seattle.
In each city we tested from 20 locations situated in a grid over the center of the city. These locations are roughly 2 miles apart, allowing us to measure service levels among and between numerous cell towers. Overall, we performed more than 51,000 tests in December 2009 and January 2010.
At each testing location, we subjected the networks to industry-standard network stress testing using laptops and to Internet-based testing using smartphones.
Our laptop-based tests use a direct TCP connection to the network to test the network's capacity--that is, the speed and performance that the network is capable of delivering to subscribers. Using the Ixia Chariot 4.2 testing tool running on a laptop PC, we tested both the speed and the reliability of the network.
To measure download speed, Chariot requests a number of large, uncompressible files from a dedicated server in the network; it then measures the speed of each transfer during a 1-minute period. To measure upload speed, Chariot sends a number of files from the Chariot client on the laptop to the server on the network, again timing each transfer during a 1-minute period. We report the average of all of these transfers at each location as the location average. Then we average all tested locations to obtain an average city performance.
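The two-level averaging described above can be sketched in a few lines of Python (the function names and sample speeds here are illustrative, not taken from the actual Chariot test harness):

```python
# Two-level averaging as described above: average all transfers at each
# location, then average the location averages to get a city figure.
# All names and numbers here are hypothetical illustrations.

def location_average(transfer_speeds_kbps):
    """Average of all file-transfer speeds measured at one location."""
    return sum(transfer_speeds_kbps) / len(transfer_speeds_kbps)

def city_average(locations):
    """Average of the per-location averages across a city's test grid."""
    per_location = [location_average(speeds) for speeds in locations]
    return sum(per_location) / len(per_location)

# Three (of 20) hypothetical testing locations in one city:
city = [
    [1200, 1350, 1100],   # location 1: three 1-minute transfers, in kbps
    [950, 1020, 1010],    # location 2
    [1400, 1380, 1500],   # location 3
]
print(round(city_average(city)))  # per-city average download speed, in kbps
```

Averaging per location first keeps one unusually well-covered (or poorly covered) spot from dominating the city figure by sheer number of transfers.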
We also assign a reliability score to each test. If during a test our client device cannot connect to the network, or if the network drops the connection, or if the throughput speed is unacceptably slow (less than 75 kbps), we label that testing location "low quality." We then report the percentage of testing locations in a given city that are of good quality. Thus, if we successfully establish an uninterrupted connection of reasonable speed at 19 of our 20 testing locations for a given network, we award that network a reliability score of 95% for that city.
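Using the same scoring rule (75 kbps threshold, 20 locations per city), a minimal sketch of the reliability calculation might look like this; the field names and sample results are hypothetical:

```python
# Reliability scoring as described above: a location counts as "low
# quality" if no connection was made, the connection dropped, or
# throughput fell below 75 kbps; the city score is the percentage of
# good-quality locations. The sample results below are hypothetical.

LOW_QUALITY_THRESHOLD_KBPS = 75

def is_good(result):
    """A location is good only if it connected, stayed connected,
    and sustained at least 75 kbps."""
    return (result["connected"]
            and not result["dropped"]
            and result["throughput_kbps"] >= LOW_QUALITY_THRESHOLD_KBPS)

def reliability_score(results):
    """Percentage of a city's testing locations that were good quality."""
    good = sum(1 for r in results if is_good(r))
    return 100.0 * good / len(results)

# 20 hypothetical locations: 19 good, 1 that fell below the threshold.
results = [{"connected": True, "dropped": False, "throughput_kbps": 900}] * 19
results.append({"connected": True, "dropped": False, "throughput_kbps": 40})
print(reliability_score(results))  # 95.0, matching the worked example above
```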
Our smartphone-based tests approximate the real-world connection between specific smartphones and specific networks. We perform the smartphone tests from the same locations that we use for the laptop tests, applying an Internet-based performance test designed by Xtreme Labs. The test sends a large test file back and forth between the smartphone and a network server, and then measures the speeds at which the data is transferred. We perform three upload tests and three download tests at each testing location.
We tested all 13 cities during December 2009 and January 2010, using the same locations, methodology, and personnel we used to test those cities in our April 2009 tests. Maintaining a consistent methodology allowed us to compare the performance of the networks across an interval of eight months and look for possible evolutionary changes.
We did not exhaustively survey every city. We tested from stationary locations only (no drive tests); we did not survey indoor performance; and we did not measure voice service.
Consider the network
U.S. consumers pay a lot for the convenience of mobile communications and computing. In 2009, Americans spent about $4.8 billion on wireless devices and service, and analysts project that they will spend an even bigger chunk of their paychecks on wireless during 2010.
Regardless of the type of connected device you use, you'll eventually pay more for the wireless service that connects the device than you will for the device itself. So deciding on a wireless provider is a big decision--and an unwise choice can be a costly mistake.
We hope that our latest study arms you with some real-world information to help you pick the wireless carrier that's best for you.
Virgin Bringing 100-Mbps to UK Homes
Virgin said it will roll out 100 megabit-per-second broadband connections to homes in the UK.
The company said users will experience speeds "very close" to what's advertised because it plans to deploy cable rather than the ADSL technology used by competitors.
"There is nothing we can't do with our fibre optic cable network, and the upcoming launch of our flagship 100mbps service will give our customers the ultimate broadband experience," Virgin Media's chief executive officer, Neil Berkett, said.
Virgin will have increased its top broadband tier from 20Mbps to 100Mbps in less than two years. The company currently offers a 50Mbps connection and has announced plans to extend a 200Mbps pilot to Coventry.
Users can download a music album in as little as five seconds, as opposed to the 75 seconds the same download would take on a 24Mbps ADSL connection, it said.
Virgin Media has 4.1 million broadband customers and added 46,000 over the last quarter.
Competitor BT has just over five million subscribers - some 25% more - and is currently growing at around twice the rate of Virgin Media, adding 102,000 customers in the last quarter.
"Virgin Media's announcement today of 100Mb broadband comes hot on the heels of BT's recent 'Infinity' launch," noted Michael Phillips of broadband comparison site Broadbandchoices.co.uk.
"It's encouraging to see Virgin Media and BT - the two largest broadband providers in the UK - both pushing forwards with the rollout of ever faster services."
Virgin declined to say when the 100Mbps service would be available to customers, though it is expected to be deployed across the company's entire cable network by the end of 2011.
The Internet Will Make You Smarter, Say Experts
An online survey of 895 Web users and experts found more than three-quarters believe the Internet will make people smarter in the next 10 years, according to results released on Friday.
Most of the respondents also said the Internet would improve reading and writing by 2020, according to the study, conducted by the Imagining the Internet Center at Elon University in North Carolina and the Pew Internet and American Life project.
"Three out of four experts said our use of the Internet enhances and augments human intelligence, and two-thirds said use of the Internet has improved reading, writing and the rendering of knowledge," said study co-author Janna Anderson, director of the Imagining the Internet Center.
But 21 percent said the Internet would have the opposite effect and could even lower the IQs of some who use it a lot.
"There are still many people ... who are critics of the impact of Google, Wikipedia and other online tools," she said.
The Web-based survey gathered opinions from scientists, business leaders, consultants, writers and technology developers, along with Internet users screened by the authors. Of the 895 people surveyed, 371 were considered "experts."
It was prompted in part by an August 2008 cover story in the Atlantic Monthly by technology writer Nicholas Carr headlined: "Is Google Making Us Stupid?"
Carr suggested in the article that heavy use of the Web was chipping away at users' capacity for concentration and deep thinking. Carr, who participated in the survey, told the authors he still agreed with the piece.
"What the 'Net does is shift the emphasis of our intelligence away from what might be called a meditative or contemplative intelligence and more toward what might be called a utilitarian intelligence," Carr said in a release accompanying the study. "The price of zipping among lots of bits of information is a loss of depth in our thinking."
But Craigslist founder Craig Newmark said, "People are already using Google as an adjunct to their own memory.
"For example, I have a hunch about something, need facts to support and Google comes through for me," he said in the release.
The survey also found that 42 percent of experts believed that anonymous online activity would be "sharply curtailed" by 2020, thanks to tighter security and identification systems, while 55 percent thought it would still be relatively easy to browse the Internet anonymously in 10 years.
(Editing by Bob Tourtellotte and Peter Cooney)
For Chip Makers, the Next Battle Is in Smartphones
The semiconductor industry has long been a game for titans.
The going rate for a state-of-the-art chip factory is about $3 billion. The plants typically take years to build. And the microscopic size of chip circuitry requires engineering that practically defies the laws of physics.
Over the decades, legions of companies have found themselves reeling, even wiped out financially, from trying to produce some of the most complex objects made by humans for the lowest possible price.
Now, the chip wars are about to become even more bloody. In this next phase, the manufacturers will be fighting to supply the silicon for one of the fastest-growing segments of computing: smartphones, tiny laptops and tablet-style devices.
The fight pits several big chip companies — each trying to put its own stamp on the same basic design for mobile chips — against Intel, the dominant maker of PC chips, which is using an entirely different design to enter a market segment in which it has a minuscule presence.
Consumers are likely to benefit from the battle, which should increase competition and innovation, according to industry players. But it will be costly to the chip manufacturers involved.
“I worry about that,” said Ian Drew, an executive vice president at ARM Holdings, which owns the rights to the core chip design used in most smartphones and licenses that technology to manufacturers. “But ultimately, these chip makers are all pushing each other, and if one falls over, there are still two or three left.”
Intel, based in Santa Clara, Calif., has long been held up as the gold standard when it comes to ultra-efficient, advanced chip manufacturing plants. The company is the last mainstream chip maker to both design and build its own products, which go into the vast majority of the PCs and servers sold each year.
Most other chips, for items as diverse as cars and printers, are built by a group of contract manufacturers, based primarily in Asia, to meet the specifications of other companies that design and market them. Traditionally, these companies, known as foundries, have trailed Intel in terms of manufacturing technology and have handled chips with simpler designs.
But with mobile technology, an expensive race is on to build smaller chips that consume less power, run faster and cost less than products made at older factories.
For example, GlobalFoundries plans to start making chips this year in Dresden, Germany, at what is arguably the most advanced chip factory ever built. The initial chips coming out of the plant will make their way into smartphones and tabletlike devices rather than mainstream computers.
“The first one out there with these types of products is really the one that wins in the marketplace,” said Jim Ballingall, vice president for marketing at GlobalFoundries. “This is a game changer.”
The company, a new player in the contract chip-making business, was formed last year when Advanced Micro Devices, Intel’s main rival in the PC chip market, spun off its manufacturing operations. GlobalFoundries, based in Sunnyvale, Calif., has been helped by close to $10 billion in current and promised investments from the government of Abu Dhabi.
The vast resources at GlobalFoundries’ disposal have put pressure on companies like Taiwan Semiconductor Manufacturing, United Microelectronics and Samsung Electronics, which also make smartphone chips. The message from GlobalFoundries is clear: as the newcomer in the market, it will spend what it takes to pull business away from these rivals.
At the same time, Apple, Nvidia and Qualcomm are designing their own takes on ARM-based mobile chips that will be made by the contract foundries. Even without the direct investment of a factory, it can cost these companies about $1 billion to create a smartphone chip from scratch.
Recently, these types of chips have made their way from smartphones like the iPhone to other types of devices because of their low power consumption and cost.
For example, Apple’s coming iPad tablet computer will run on an ARM chip. So, too, will new tiny laptops from Hewlett-Packard and Lenovo. A couple of start-ups have even started to explore the idea of using ARM chips in computer servers.
“Apple was the first company to make a really aspirational device that wasn’t based on Intel chips and Microsoft’s Windows,” said Fred Weber, a chip industry veteran. “The iPhone broke some psychological barriers people had about trying new products and helped drive this consumer electronics push.”
Companies like Nvidia and Qualcomm want to get their chips into as many types of consumer electronics as possible, including entertainment systems in cars, and home phones with screens and Web access.
At the Mobile World Congress in Barcelona, Spain, last week, manufacturers displayed a wide range of slick devices based on ARM chips, including a host of tablets and laptops. In addition, HTC released its Desire smartphone, built on a Qualcomm ARM chip called Snapdragon, which impressed show-goers with its big touch-screen display.
Meanwhile, Intel is about to enter the phone fray, both to expand its market and defend itself against the ARM chip makers. Its Atom line of chips, used in most netbooks and now coming to smartphones, can cost two to three times as much as the ARM chips, according to analysts. In addition, the Atom chips consume too much power for many smaller gadgets.
Intel executives argue that consumers will demand more robust mobile computing experiences, requiring chips with more oomph and PC-friendly software, both traditional Intel strengths.
“As these things look more like computers, they will value some of the capabilities we have and want increasing levels of performance,” said Robert B. Crooke, the Intel vice president in charge of the Atom chip. “We’re seeing that from our customers in a number of spaces, including digital TVs and hand-held devices.”
Intel also has deep pockets. As of December, the company had more than $9 billion in cash and short-term investments.
Mr. Crooke said that Intel’s manufacturing expertise would allow it to produce a new crop of chips every 18 months or so that would be cheaper and use less power. As rivals shift to more cutting-edge chip-making techniques, he said, they are likely to run into problems that Intel solved years ago.
At the same time, competition from other chip makers will pressure them to lower their prices.
“I don’t know whether it will make it harder for these guys to invest in the future, but you certainly would think so,” Mr. Crooke said.
iTunes Sells 10 Billion Songs
M. Tye Comer
Apple's iTunes store sold its 10 billionth song on Wednesday (Feb. 24), thanks to massive sales of tracks by Black Eyed Peas, Lady Gaga and Coldplay.
Apple revealed that Black Eyed Peas' "I Gotta Feeling" is iTunes' all-time most-downloaded song, with "Boom Boom Pow" coming in at No. 3. Lady Gaga is the solo artist with the most songs in the top 25, with "Poker Face" (No. 2), "Just Dance" (No. 6) and "Bad Romance" (No. 22) all making the list. Other acts with tracks in the top 10 include Jason Mraz, Coldplay, Flo Rida, Taylor Swift, Leona Lewis and Ke$ha.
All of the top 25 tracks were released in the past five years (Ke$ha's "Tik Tok" is the newest) except for "Don't Stop Believin'" by classic rockers Journey, whose inclusion in popular TV shows "The Sopranos" and "Glee" helped the song reach No. 21 on the list.
But the real winner might be the person who actually bought the 10 billionth track -- that buyer will receive a $10,000 iTunes gift card.
Here's the Top 25 list of iTunes' most downloaded songs:
1. Black Eyed Peas, "I Gotta Feeling"
2. Lady Gaga, "Poker Face"
3. Black Eyed Peas, "Boom Boom Pow"
4. Jason Mraz, "I'm Yours"
5. Coldplay, "Viva La Vida"
6. Lady Gaga, "Just Dance"
7. Flo Rida, "Low"
8. Taylor Swift, "Love Story"
9. Leona Lewis, "Bleeding Love"
10. Ke$ha, "Tik Tok"
11. Rihanna, "Disturbia"
12. P!nk, "So What"
13. Katy Perry "I Kissed a Girl"
14. Beyonce, "Single Ladies"
15. Katy Perry, "Hot N Cold"
16. Kanye West, "Stronger"
17. T.I. feat. Rihanna, "Live Your Life"
18. Plain White T's, "Hey There Delilah"
19. Flo Rida, "Right Round"
20. Miley Cyrus, "Party in the U.S.A."
21. Journey, "Don't Stop Believin'"
22. Lady Gaga, "Bad Romance"
23. Kings of Leon, "Use Somebody"
24. Owl City, "Fireflies"
25. The Fray, "How to Save a Life"
Microsoft Phone System Hits Reset on Digital Music
It's been more than six years since then-Microsoft CEO Bill Gates admitted that Apple caught the company "flat-footed" in the digital music market and directed his team to make up for lost ground, according to recently surfaced internal e-mails.
To date, Microsoft's effort to address the digital music market has largely focused on its Zune player and Zune Pass subscription service, which have won favorable reviews but few customers. But with the recent unveiling of its Windows Phone 7 Series operating system at the Mobile World Congress conference in Barcelona, Microsoft hopes to reboot its struggling digital music strategy.
Even the well-received Zune HD device, introduced last fall, hasn't been enough to convince music fans to convert to the Zune Pass. The company says it has sold only 3.8 million players since 2006, and NPD Group estimated in November that it has a 2 percent share of the U.S. portable media player market, compared with 70 percent for Apple's iPod.
So Microsoft has made it a priority to expand the Zune service to other platforms. In November, it added the Zune's video service to its Xbox Live network, consisting of more than 20 million worldwide users of the Xbox 360 gaming console. Since then, Zune communications director Jose Pinero says the number of daily HD video downloads and streams has doubled. Now, Microsoft plans to use its Windows Phone 7 platform to bring Zune to mobile customers.
"Anybody who gets a Windows Phone 7 Series phone is going to get a Zune within that device," Pinero says.
The most immediate impact is to expand the Zune service to countries outside the United States and Canada, currently the only markets where the Zune is sold. While Microsoft will continue to sell the original Zune player in the States, Pinero says it doesn't plan to expand it to other countries, instead relying on the mobile phone software to bring the Zune service to those markets.
For this strategy to work, Microsoft will have to turn around its equally struggling mobile phone business. According to technology research and consulting firm Gartner, Windows Mobile handsets rank fourth in worldwide smart-phone sales, at 7.9 percent, as of third-quarter 2009, down from 11 percent a year earlier and behind Nokia, BlackBerry parent Research in Motion and Apple.
But those rankings remain fluid, as analysts expect global smart-phone sales to double in the next three years.
"There's certainly opportunity for Microsoft and other players in this market to grab share in the smart-phone space," says Sue Kevorkian, an analyst at a technology market research firm.
Early reviews of Windows Phone 7 have been positive, with its simple interface and clean design winning high marks. But handsets featuring the new technology aren't expected to hit the market until the 2010 holiday season. By that time, Apple is expected to release an updated iPhone.
Zune will need to do more than piggyback on an innovative new mobile phone platform to generate the kind of momentum needed to elevate itself from the status of also-ran. It must compete with rival mobile music services sure to be created for handsets using Windows Phone 7, and the company hasn't yet detailed how developers will be able to integrate Zune functionality into their applications, if at all. Answers to those questions are expected in March at Microsoft's annual Web developer conference, Mix.
Zune will also need to increase its footprint to encompass more than mobile technology. That includes adding the music service to the Xbox Live network, as well as taking a larger stake of the subscription market and expanding that lackluster model beyond its current state.
Forrester Research analyst Sonal Gandhi estimates the entire U.S. music subscription market totals just 2.5 million users, and that includes not only Zune, Rhapsody, Napster and MOG, but also eMusic and those paying for the premium tiers of such streaming services as Pandora and Live365.
While Microsoft's recent moves may lend an important boost to Zune, the company will have to look beyond the subscription model if it is to have much of an impact on overall digital music revenue.
A Pickle Beats Nickelback in Facebook Contest
Does a pickle or the Canadian band Nickelback have more fans? It may sound like a joke but the answer might not amuse the musicians.
A group on Facebook called “Can this pickle get more fans than Nickleback?” hit its goal over the weekend, notching up 1.4 million fans. The post-grunge band Nickelback had 1.38 million fans on Facebook but this number has since topped 1.4 million.
The group was started earlier this month by a Facebook user called Coral Anne as a joke, with the band’s name deliberately spelt wrongly on the page to get around copyright infringement. She wrote that the page was inspired by another Facebook page called “Can this onion ring get more fans than Justin Bieber?” which she had found amusing. Canadian singer Bieber, 15, who was discovered on YouTube, is enjoying enormous success with singles from his debut album “My World.”
“This is all strictly intended for humor and nothing more or less,” she wrote. “I am not using this page to endorse any hate towards the band Nickelback .. I do not wish Nickelback or any other bands any ill will and hope they would see the same humor in making this page as I have,” Coral Anne said.
Nickelback could not be immediately reached for comment but some media outlets were reporting that singer Chad Kroeger — or at least someone posting Facebook messages using that name – was not amused.
Doing it His Way
No record label, no radio play, plenty of fans
Corey Smith doesn't mind his fans sharing his music. In fact, he encourages file sharing and embraces some of the practices that more popular artists in the industry are quick to shy away from.
So far, things have worked for Smith, who has sold 600,000 singles and 100,000 albums without any radio play or help from a record label.
"It's rewarding, especially since it allows me the freedom to make my own choices," Smith said of his independent success. "I get to make the music I want to make and don't have to worry about pleasing some corporation."
The flip side of that is a lot of people only think you succeeded in music if you're on the radio and on TV, he said.
"Sometimes it's a bit challenging," he said. "In our world, we're really happy and I've been blessed and feel accomplished."
Smith sells thousands of tickets at venues throughout the Southeast and has begun touring nationwide, thanks to word-of-mouth popularity, high-energy shows and relatable lyrics.
"It's cool when there's a sold-out show. I just always want to be positive and grateful," he said. "Anytime there's an enthusiastic crowd, I want to give those people the best show I can."
As for file sharing, he feels people will ultimately buy albums but the main thing is to get people to hear his music.
"I think records have been overpriced for a long time," he said.
"When CDs were rare, they cost $15 to $20 and we all thought the cost would go down. Now for the same price, you can buy a DVD and think of the millions of dollars that go into making a movie?"
He said the people who have shunned file sharing are the ones who lose from it. He said many bands and artists figure they can retire after selling so many records. For the little guy, this helps find that niche.
"I'm happy and blessed I can make a good living doing something that makes me happy. I try to keep it in perspective," he said. "I'm not out to exploit them (fans)."
Smith hopes his lyrics deliver the meaningful music he believes people are looking for, as opposed to over-hyped songs that get radio play solely on the strength of their popularity.
"I believe people want to hear meaningful music in their lives, whether they hear it from TV (or wherever)," he said. "I think radio has largely failed to deliver what people need and they've looked in other places."
Smith has written and produced all of his five albums. His sixth and current release "Keeping Up with the Joneses" debuted at No. 1 on iTunes singer/songwriter charts ahead of artists like James Taylor, Amos Lee and Simon & Garfunkel.
He said he appreciates hearing the numbers, but that doesn't get him overly excited. His goal is to reach a wider audience with each song and with future records.
"I try not to get too much credence into it - it's cool," he said. "What concerns me more is how it will take on a life of its own now and a year or two from now. I hope people will be moved by it."
UK Makes Abbey Road Studios a Historic Building
The Beatles' Abbey Road Studios were officially declared a historic building Tuesday, a move that will help preserve the cultural landmark that is a magnet for fans worldwide.
The crosswalk outside the iconic north London studios draws tourists with cameras daily, and the facilities have also hosted Pink Floyd, Jeff Beck and Radiohead and are still popular with orchestras.
But their cash-strapped owner EMI Group Ltd. says the studios have been losing money for years and has only recently shelved plans to sell them. While EMI now says it's looking for money to help revitalize the studios, news that it was seeking to offload Abbey Road sparked dismay among music fans.
Former Beatle Paul McCartney said he hoped it could be preserved, while English Heritage — the body that oversees buildings of historic interest — appealed to the government to name it a historic building.
English Heritage spokeswoman Helen Bowman said government's move "has probably been sped up" by recent speculation over the studios' future.
In a statement, English Heritage Chief Executive Simon Thurley said the Georgian building housing the studios "acts as a modern day monument to the history of recorded sound and music."
"Some of the most defining sounds of the 20th century were created within the walls of the Abbey Road Studios," he said. "It contains, quite simply, the most famous recording studios in the world."
EMI Files "Down Under" Royalties Appeal
Record company EMI will appeal against a court ruling that Australian Grammy-award winning band Men at Work stole a section of the famous 1980s hit, "Down Under," from a popular folk song.
Australia's Federal Court this month ruled that part of the song's melody came from the children's ditty "Kookaburra Sits in the Old Gum Tree," written 70 years ago by Australian teacher Marion Sinclair for a Girl Guides competition.
But EMI filed papers on Thursday seeking orders that songwriters Colin Hay and Ron Strykert did not breach copyright with their work, arguing the inclusion of two bars from the popular tune was at most a form of tribute.
EMI said while the similarities "might be amusing or of interest to the highly sensitized or educated musical ear," they were unlikely to be noticed by an ordinary listener.
"Down Under" has become a de facto anthem for Australians and was a hit in the U.S. charts, with quirky lyrics about Vegemite spread and drugged travelers in a "fried-out Kombi, on a hippie trail, head full of zombie."
The court's ruling meant the band and EMI could have to pay millions of dollars in royalties to Kookaburra copyright owners Larrikin Music, who launched the legal case.
Men at Work are the only Australian band to have a No.1 album and single simultaneously in U.S. charts with "Down Under" and the album "Business as Usual."
The song, about a land Down Under "where beer does flow and men chunder," was used as a motivator for Australia's 1983 America's Cup yachting victory in the United States.
The judge this month ordered both sides to enter mediation on royalty payments and reappear in court on February 25 to discuss whether Larrikin should receive compensation from Hay and Strykert.
(Editing by Miral Fahmy)
Digital Sales Down In '09
A study from NPD Group found that a million fewer digital downloads were bought in 2009, compared with the year before. NPD analyst Russ Crupnick discussed the study at the Digital Music East conference yesterday, saying that the industry should not be worried by this data.
According to CNet, Crupnick claimed the customers who didn't buy music digitally were mostly older music fans, who tried buying digitally in '07 and '08 but lost interest. He added that the average amount of money spent on digital downloads has risen from $33/year to $50/year.
"You got some maturity in the marketplace," Crupnick said. "If I ran a record label, the first thing I would do is go out and hire a consumer promotion person from Kraft or Colgate. The consumer is saying they wanted to be promoted to and persuaded to come try this."
In related news, Apple has announced that the ten billionth track has been sold via the iTunes Music Store. Louie Sulcer of Woodstock, GA purchased Johnny Cash's "Guess Things Happen That Way" yesterday. He wins a $10,000 iTunes gift card for being the lucky customer.
Sony, LG, Samsung, Hitachi, Toshiba Accused of Price Fixing
The U.S. Department of Justice has issued subpoenas to Samsung
A home electronics retail store has filed a class-action lawsuit against Sony Corp., Samsung Electronics Co. Ltd., Toshiba Corp., LG Electronics Inc., Hitachi Ltd. and several subsidiaries, accusing the electronics manufacturers of colluding to fix prices in the U.S. optical disc drive (ODD) market.
The lawsuit, filed Wednesday, also claims the disc drive manufacturers used trade organization forums to meet and discuss agreements to keep prices of CD, DVD and Blu-ray drives in products like the Sony PlayStation 3 and PCs artificially high.
"When the price of ODD began to dip, the Defendants entered into an illegal agreement to prevent competitors from entering into the market and to keep prices at a supracompetitive level," the lawsuit states.
Prisco Electric Co. Inc. filed its 31-page complaint in the U.S. District Court for the Northern District of California. In it, Prisco calls the companies co-conspirators in attempts to "fix, raise, maintain and stabilize the price of Optical Disk Drive Products sold in the United States."
"The Defendants and their co-conspirators in this case control over 90% of this multibillion dollar a year market," the lawsuit states. "These Defendants have a long history of engaging in anticompetitive conduct, such as Dynamic Random Access Memory (DRAM), Thin Film Transistor Liquid Crystal Display (TFT-LCD) and Cathode Ray Tube (CRT)."
Prisco did not specify an amount it is seeking, but the store did say it wants triple damages and an injunction against the companies to stop future price fixing activities.
Samsung, which has received subpoenas from the DOJ, said it had "no comment regarding price fixing on optical drives." Officials at Hitachi and Toshiba could not be reached for comment. According to a report in the Wall Street Journal last fall, Hitachi and Toshiba also received subpoenas regarding the probe into ODD price fixing.
An investigation was launched last October by the U.S. Department of Justice (DOJ) into the market for optical disk drives for anticompetitive conduct. The DOJ subpoenaed Sony Optiarc America, which at the time said it intended to "cooperate fully with the DOJ and other agencies in this inquiry."
According to one published report, the investigation goes well beyond Sony and involves other electronics manufacturers.
In its complaint, Prisco, a retailer located in East Haven, Conn., said that the conspiracy to fix prices began at least as early as Oct. 1, 2005 and is continuing.
Rick Saveri, a partner in the San Francisco law firm of Saveri & Saveri, which is representing Prisco, said price fixing "cartels" are common in Asia and that it has become a cyclical practice to keep new technology prices high until older technologies can be phased out.
Saveri & Saveri is also lead counsel in several other price fixing lawsuits against Samsung and other electronics manufacturers involving DRAM, SRAM, flash memory and LCD displays.
"These are big Asian smoke-stack industries where they're investing in big fabrication plants. You can't have a technology destroy the business," Saveri said. "If you fire up a big fab plant with CRT tubes, and the next generation technology destroys it, then you have a big fab plant manufacturing buggy whips.
"So they have to make sure the price points for these [newer] technologies ... don't destroy existing markets," he added. "Price fixing always occurs when there's pressure on markets and prices are falling. You've got to prevent the falling market."
Saveri said one civil litigation investigation led to another as witnesses came forward. So the DRAM litigation led to the SRAM litigation, which opened up into the flash memory litigation, "which we're the lead counsel on," he said.
Saveri said five or six other smaller retail stores have filed similar lawsuits over optical drive technology; he expects the number of plaintiffs in the case to grow.
"It's so predictable how these new technologies always roll out on a six-month window. You can almost set your clock to it. It's the way they operate," Saveri said.
The suit includes Sony Optiarc America Inc., Sony NEC Optiarc Inc., Hitachi-LG Data Storage and Toshiba Samsung Storage Technology Corp.
Conroy's Website Removes References to Filter
Communications Minister Stephen Conroy during Question Time in the Senate / Ray Strange Source: The Australian
The minister in charge of the Government's web censorship plan has been caught out censoring his own website.
The front page of Communications Minister Stephen Conroy's official website displays a list of topics connected to his portfolio, along with links to more information about each one.
All the usual topics are there – cyber safety, the national broadband network, broadcasters ABC and SBS, digital television and so on.
All except one.
It was revealed today that a script within the minister's homepage deliberately removes references to internet filtering from the list.
In the function that creates the list, or "tag cloud", there is a condition that if the words "ISP filtering" appear they should be skipped and not displayed.
The discovery is unlikely to do any favours for Senator Conroy's web filtering policy, which has been criticised for its secrecy.
According to Google's cache records, the exception has been included on the minister's homepage since at least February 14.
A message on the page says it was last updated in October last year.
Melbourne web developer David Johnson told news.com.au the code was intended to remove references to internet filtering.
"The code is a quick fix," said Mr Johnson of creative agency Lemonade.
"If the developers of the minister’s site had wanted to do it properly they would have placed the 'ISP filtering' keyword exclusion on the server side where it is inaccessible to the public, instead of the front-end code which can be seen by anyone and understood by people with even a basic knowledge of scripting."
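As a rough illustration of the "quick fix" Mr Johnson describes, the exclusion amounts to a one-line condition in the function that builds the tag cloud. The minister's actual code was front-end JavaScript and is not reproduced here; the Python sketch below only illustrates the logic, and every name in it is invented:

```python
# Hypothetical sketch of the client-side keyword exclusion described
# above. The real site used front-end JavaScript; all names here are
# invented for illustration.

EXCLUDED = {"ISP filtering"}  # keywords to silently skip

def build_tag_cloud(topics):
    """Return the portfolio topics for display, dropping any
    keyword on the exclusion list."""
    return [t for t in topics if t not in EXCLUDED]

topics = ["cyber safety", "national broadband network",
          "ISP filtering", "digital television"]
print(build_tag_cloud(topics))
```

Because a condition like this runs in the visitor's browser rather than on the server, anyone viewing the page source can see exactly which keyword is being suppressed, which is Mr Johnson's point.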
Senator Conroy's office has been contacted for comment.
The Awful Anti-Pirate System That Will Probably Work
So, when I read that Assassin's Creed 2 for the PC would fight piracy by requiring a live internet connection all the time when you were playing, I thought it was a joke. Sort of a dry, post-modern satire of the whole idea of DRM. Then I learned that, if your internet connection broke while playing it, the game would freeze. What's more, if the connection didn't return soon enough, the game would quit and your progress would be unsaved. This convinced me that the whole thing was a joke.
Then I learned, as explicitly confirmed by Ubisoft representatives, no. Not a joke. Not at all.
Of course, this is pretty harsh medicine, and the many reasons this set-up is hostile have been ably discussed. What if you have an inconsistent internet connection? What if servers ever go down? (Due to malfunction, bankruptcy, or no longer wanting to pay to maintain them.)
Also, you don't hold onto your saved games anymore. They do. This part is really significant. That's why the game needs the net connection all the time. It's not just for their amusement. The constant contact is necessary because your game is saved on their machine. Not yours. They are claiming that this is for your convenience, because then you can get at your saved games from any computer anywhere, but nobody is fooled.
But, in all the writing and bitching on the topic, everyone seems to be missing the most significant detail of this new system. Everyone always assumes that all DRM will be broken immediately and pirated versions will appear instantly and anti-hacker measures never work. But this system (and I know saying this will immediately get me written off as an idiot, but bear with me) is the one that will finally do a good enough job of holding off pirates. It won't hold them off forever (I think) but it will hold them long enough for the game to get its sales.
Here's why ... This is how hacking usually works. A game (or word processor, or operating system) is programmed to, say, check in at launch with the home server to make sure it's a legal copy. The hacker goes through the code and looks for that line of software and disables it. Snip. And the program is cracked and ready to be sent to the Torrents. This is a bit of a simplification, but it gets at the heart of the thing. Most hacks require disabling a small chunk of the program, and that is not hard.
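That launch-time check, and the classic "snip" that defeats it, can be modeled in a few lines. This is a toy sketch; every name is invented, and no real game's code is shown:

```python
# Toy model of a launch-time license check and the "snip" that
# defeats it. Everything here is invented for illustration.

def check_license_with_server():
    """Stand-in for phoning home; pretend the server rejects us."""
    return False

def launch_game():
    if not check_license_with_server():
        return "refused to start"
    return "game running"

# The cracker's patch amounts to replacing the one check with a
# constant "yes"; the rest of the program never knows the difference.
check_license_with_server = lambda: True
print(launch_game())
```

The crack touches a single, isolated decision point, which is why this style of protection falls so quickly.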
But Assassin's Creed 2 is different. Remember, all of the code for saving and loading games (a significant feature, I'm sure you would agree) is tied into logging into a distant server and sending data back and forth. This vital and complex bit of code has been written from the ground up to require having the saved games live on a machine far away, with said machine being programmed to accept, save, and return the game data. This is a far more difficult problem for a hacker to circumvent. What are the options?
1. Make your own, free saved game server and alter the application code to use it.
This means a lot of work and expense, both to duplicate Ubisoft's game saving code and to set up and maintain the servers. Won't happen.
2. Trick the Ubisoft servers into believing you have a legit copy, so that they will let you save your game.
OK, the hackers will probably eventually come up with a keygen program. This is tricky, because the software that generates the keys will be in Ubisoft's hands, far from prying eyes. But they could possibly do it, given a bit of time. But if they ever figure out you have a fake or duplicate key (and I bet they have their ways), poof. Your account and saved games disappear. I don't think this will work.
3. Hack the game to not need to save games on a remote server.
This means a hacker has to figure out the saved game format, somehow jam into the application new code to write the saved data and new code to read it, TEST IT, and get it to work. Doable. But it will take time, and I bet you'd get some bugs in the process.
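A minimal sketch of what option 3 demands: the cracker must reimplement both directions of an undocumented save format and get them exactly right. The real format is unknown, which is why this route is slow; the code below hypothetically assumes a simple serialized format, and all names are invented:

```python
import json

# In-memory stand-in for the remote save store the cracker must
# replace. The real save format is undocumented; json here is purely
# illustrative, and every name is invented.

_local_store = {}

def save_game(slot, state):
    # Must serialize exactly the way the load path expects, or the
    # player's progress is silently corrupted.
    _local_store[slot] = json.dumps(state)

def load_game(slot):
    return json.loads(_local_store[slot])

save_game("slot1", {"mission": 7, "florins": 12000})
print(load_game("slot1"))
```

Even this trivial round-trip has to be tested against every way the game writes and reads state; one mismatched field and saves break, which is the "bugs in the process" the author predicts.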
So this will be a tough nut to crack.
Remember what it takes to get DRM to work. It doesn't have to be uncrackable. Nothing is. All it has to do is delay the hackers long enough for the game to get a couple months worth of sales. And by turning a key part of their game into a MMO ("We, like WoW, control the saved game, not you."), they have come up with a clever and brutal way to do just that.
But this will make everyone hate them.
Perhaps. Make no mistake. Ubisoft will lose customers and earn much nerdrage over this. But they are engaged in a grand experiment. They are seeing if an adequately pirate-proof game can make money. Will keeping cracked copies off the Torrents for a month make extra sales? And enough extra sales to make writing PC games worthwhile? Because the current system, where 90% of the copies out there are pirated and only megahits can turn a profit on the remaining 10%, doesn't seem to be working.
But it's an amazingly harsh system. As much as I hope for someone to come up with an anti-hacker measure that can reliably hold off the thieves for a few months without ticking the entire planet off (so that I can start using it), well, I wouldn't buy a game with the system Ubisoft is using. I really sympathize with what they're trying to do, and I can't join in with the (almost) unanimous chorus of rage. But this doesn't feel like the answer.
People might buy more copies of Assassin's Creed 2, but this is the sort of measure that can sour people on PC gaming as a whole. And that hurts everyone. Including me.
Edit: Thanks everyone for the comments! A couple responses.
Yes, of course there are solutions for making your own authentication server. But for the DRM to work, all it has to do is 1. delay the cracking, and 2. make it difficult/unreliable for the bulk of non-super-technically-apt gamers. Making people set up their own servers (on their own machines or not) is enough of a barrier to entry to get the job done.
Remember, I didn't say it was uncrackable, only that it was difficult/slow enough to give a profitable first few months.
As for the game making local copies of the saved games. IF this turns out to be the case, and IF the game also has easily accessible features in place for loading those saves (as opposed to only caching them there and only being able to load from the distant server), then yes, it's a dumb and easily crackable system. But even if this is the case, that doesn't change the fact that the next game to use this system will be slow to crack for the reasons given above, and all of the factors and consequences given above still apply.
Edit 2: One quick question for the "anything can be cracked right away" crowd. Where do I get my cracked copy of World of Warcraft that can play the real game (not some cobbled-together emulation server) without paying? Answer: You can't.
Once you accept the need for a constant internet connection, the developer can just load more and more of the game logic onto the servers. Right now, they're just trying it with saved games. (And who knows what else? Do any of us really know what the game is using that constant internet connection for?) But they can put more and more of the game onto their end until cracking the game will involve rewriting the damn thing.
Oh, and by the way, people accept their game needing a constant internet connection all the time. WoW. CounterStrike. Team Fortress. So saying people won't accept it for single player games is a bit of a stretch. They'll get used to it soon enough.
Carol Kaye Misunderstands TorrentFreak
Carol Kaye, the 74-year-old bass guitar legend, recently discovered that some of her sheet music and courses are available on numerous torrent sites. She is determined to stop this blatant piracy, but unfortunately she’s targeting the wrong person.
Kaye decided to contact the owners of various torrent sites and for some reason she also contacted (our parent blog) TorrentFreak with the following message.
You are illegally offering my COPYRIGHTED EDUCATIONAL ITEMS as downloads on your website: (linking to fenopy.com)
CEASE AND DESIST! This is totally UNAUTHORIZED AND ILLEGAL! Remove this download of MY Internationally Copyrighted items on your website. You are in Violation of the Copyright Law – I am the ONLY ONE to sell my own items!
We don’t host any torrent files on TorrentFreak, of course, and I kindly replied to her explaining that we are a weblog covering BitTorrent news. For some reason she wasn’t quite convinced, as we received the following reply after having exchanged a few more emails.
You’re a liar and a thief, CEASE AND DESIST!
Again, trying to be polite I explained that she was targeting the wrong person. Because I sympathized with her I even offered to help her out and get her in contact with the people who could remove the torrents. This wasn’t helping much though.
You’re a liar, a pirate, and a thief and a no-body – you’re coming down buddy, don’t give me that run-around BS! Whoever you are, you’re THIEF AND A DELIBERATE PIRATE!
I’m known to 100s of 1,000s of musician world-wide and they’re going to know about you too, posting you on my website which gets 100s of 1,000s of hits all the time! CEASE AND DESIST!
After one more attempt to explain that TorrentFreak has nothing to do with the site where she found her content I eventually gave up. But Carol didn’t.
A few hours later my inbox was starting to fill up with friends/fans or colleagues of Carol who were spitting out more false accusations. Below is one I received from Deb Hastings.
Ernesto – you are offering free downloads of intellectual property? I am referrng to Carol Kaye’s jazz improv books and accompanying cd’s. This is STEALING my man. What is wrong with you? You don’t steal from other people. This is illegal and in violation of the copyright law. What the hell kind of a preson are you? That you steal from other people? I am going to do everything I can – with as many other people as I can – to bust up your ugly little business. I am tenacious and you have now become my focus. You are a thief. Take her stuff off your site.
Sigh.. I give up.
Apple iPad to Get File Sharing At Launch
After playing with the iPhone OS 3.2 SDK simulator, 9to5 Mac claims to have run into a file sharing mode on the iPad that will allow you to transfer application files to and from your iPad and computer. On top of that, there’s an empty header that reads “Applications” that could be saved for just this task.
Piracy Isn’t Killing The Movie Industry, Greed Is
At the box-office the major movie studios are raking in record profits, but their continuing refusal to widely adopt online business opportunities is hindering progress. According to the head of the Blockbuster video chain, the movie industry’s greed is to blame for holding back innovation.
First off, we have to make it clear that the major movie studios are doing great at the box-office, despite movie piracy riding at an all-time high. Other parts of the movie industry, such as video rental outlets, do seem to struggle and they have the studios to thank for this, not piracy.
In January of this year Warner Bros. announced that new DVDs will not be available at online rental outlet Netflix for the first month after they are released in stores. Warner Bros. hoped that this would increase DVD sales. However, the most likely side effect is an increase in piracy and a loss of income to Netflix.
It is a step back at a time when consumers are screaming for on-demand access and the flexibility to choose the option they want for their video consumption. The studios are clearly skeptical of all these ‘new’ technologies and are frantically adding restrictions to maximize their revenues, ignoring all market signals.
The greed of the movie studios hasn’t gone unnoticed by Paul Uniacke, head of the Video Ezy and Blockbuster video rental chains. “Studio greed is what’s holding back video-on-demand,” he said in response to the studios’ demands that rental chains pay huge sums of money upfront if they want to offer on-demand streams.
“Movie studios are still as arrogant as the music moguls were before digital downloads and piracy destroyed them. The only thing that’s protecting the movie studios (from more widespread illegal downloading) now is file size,” Uniacke added.
Much like the big music labels, the studios are trying to control how people consume media to an extent where it becomes impossible for innovative retailers to offer a product that can compete with piracy. By this process they are killing their own business and that of many retailers, while blaming piracy for the damages.
Consumers demand convenience, availability and a high quality product for a fair price. Still, the decisions of the music labels and movie studios are mostly heading in the opposite direction as they cling to their old business of trying to safeguard their monopolies.
UK Movie Chain Boycotts 'Alice' in DVD Dispute
British cinema chain Odeon will not show Tim Burton's fantasy adventure ''Alice in Wonderland'' in Britain, Ireland and Italy because of a dispute over the timing of the film's DVD release, the company said Tuesday.
Odeon objects to Walt Disney Pictures' decision to leave only 12 weeks between the film's theatrical and DVD releases in those countries, rather than the usual 17 weeks.
Odeon said it had invested ''considerable sums of money'' in digital projection equipment to show 3D films, and a shorter window to screen films would undermine its investment.
The company said it feared Disney's proposals would ''inevitably set a new benchmark'' and a 12-week gap would become common.
Disney said it wants the shorter window in part to fight piracy, but does not plan to introduce it for every film.
Odeon is one of Europe's largest cinema chains, with 110 Odeon and UCI-branded theaters in Britain. Its screens in Germany, Austria, Spain and Portugal will show ''Alice in Wonderland'' because there is a longer gap between theatrical and DVD release.
Other cinema chains in Britain have expressed disquiet about Disney's move, but so far none has said it will not show the movie.
The Cineworld chain said last week it had reached a ''satisfactory compromise'' with Disney and would show ''Alice in Wonderland'' on more than 150 screens in Britain.
Burton's 3D movie stars Johnny Depp, Anne Hathaway and Helena Bonham Carter. It has its world premiere in London on Thursday and opens in Britain and the United States on March 5.
AMC May Boycott Tim Burton’s Alice in Wonderland
Looks like the inhabitants of Wonderland aren't the only ones revolting against an evil tyrant. Theatre owners across the world are reportedly none too happy about Disney's recently announced plans to push the DVD release date of Tim Burton's Alice in Wonderland up to within 12 weeks of the theatrical run, and they're doing something about it. Disney hopes to increase sales of the DVD by having it out in time for the summer, a full 5 weeks earlier than the usual theatrical-to-DVD release window. In response, several cinema chains have threatened to boycott the film, voicing concerns about a potential loss of big screen revenue.
In the U.K., both Vue Entertainment and Odeon Cinemas have taken up arms, along with four of the major exhibitors in The Netherlands who have already decided not to carry the film. Now AMC in the U.S. is also staring down Disney; with less than two weeks before the movie's scheduled release, they still have not agreed to screen it. If AMC were to boycott the movie it would be a huge blow, considering that they account for over 4500 screens worldwide. I don't know if extra DVD sales could make up for a loss like that. It is expected that an agreement will be reached in the coming days, but it is unclear who will blink first. Who do you side with on this one, and does a shortened DVD release window make you less likely to see a movie in theatres?
How Avatar is Creating a 3-D Hell for Movie Theaters
You may be thrilled that so many 3-D films—including Alice in Wonderland, Clash of the Titans and How to Train Your Dragon—will soon be hitting theaters. But as for the studios and theater owners, well ... the glut of 3-D product has them more than a little nervous.
According to msnbc, by the end of March there'll only be about 3,900 to 4,000 3-D-ready screens available in the U.S. and Canada. But since a movie in wide release in North America will typically be shown on 3,000 to 10,000 screens, that leaves those three new 3-D movies going to war for your eyeballs. Each will likely end up with less of a chance to catch your attention because some theaters with only one or two 3-D screens will have to choose which movies to show in 3-D.
"One or all three are going to suffer in some way," said Patrick Corcoran, director of media and research for the National Association of Theatre Owners. "It makes it a much harder decision on exhibitors on what to keep or what to drop or what to add and probably should have been avoided."
There are 19 3-D movies scheduled for release this year, including Toy Story 3, Shrek Forever After and Megamind. But don't worry—by the time Tron Legacy hits theaters Dec. 17, the number of 3-D screens in North America should reach around 5,000.
So—which movie do you most want to see in 3-D?
"Avatar" Top Film Overseas for 10th Weekend
Showing minimal box office fatigue at the foreign box office, "Avatar" logged a 10th consecutive weekend at No. 1 after a $51 million round.
Director James Cameron's record-setting blockbuster has earned $1.78 billion internationally, with its worldwide tally weighing in at $2.47 billion. In addition to its worldwide record in current dollars, "Avatar" has now beaten 1997's "Titanic's" global box office milestone on an inflation-adjusted basis as well.
The top market remains France where "Avatar" claimed $6.3 million on the weekend, raising its local total to $165.1 million.
Martin Scorsese's "Shutter Island," which opened No. 1 in the U.S. and Canada, also earned $9.1 million in nine overseas markets. Spain ($3.4 million) led the way, followed by Australia ($2.5 million) and Russia ($1.3 million).
"Percy Jackson & the Olympians: The Lightning Thief" was No. 2 overall with a $23.2 million weekend; its foreign total rose to $67.9 million. "Valentine's Day" was a close No. 3 after seducing $23 million, taking its total to $71.3 million.
"The Wolfman" was No. 4 after scaring up $16 million. Its overseas total rose to $46.7 million. "The Princess and the Frog" grabbed the No. 5 slot after a $12.1 million weekend; its overseas total rose to $131.5 million.
Opening in 15 new markets was "The Lovely Bones," which generated $7.2 million on the weekend from a total of 21 territories. Director Peter Jackson's fantasy thriller opened at No. 4 in the U.K. ($2.7 million). Its international total stands at $25.4 million.
Other foreign totals: "Sherlock Holmes," $266 million; "Alvin and the Chipmunks: The Squeakquel," $215.5 million; "Cloudy With A Chance of Meatballs," $107.7 million; "It's Complicated," $84.4 million; "Invictus," $63 million. "The Tooth Fairy," $36.4 million; and "Fantastic Mr. Fox," $21.6 million.
Cat-and-Mouse for a Trashy Trailer
They’re not the kind of things people say in polite society, or even impolite society. Saying them, even in jest, can get a drink tossed in your face and the glass with it.
Yet there they are, roaring out of the mouth of a cute little 11-year-old girl.
A trailer for the forthcoming film “Kick-Ass” that depicts the girl wielding a gun and using highly, highly profane language is igniting debate about how Hollywood advertises its R-rated films on the Web.
Movie marketers in recent years have increasingly relied on raunchy ads known as “red-band” trailers to stir interest in their films. While most trailers are approved for broad audiences by the Motion Picture Association of America, the red-labeled variety usually include nudity, profanity and other material deemed inappropriate for children. Many theaters refuse to run these trailers, but they are widely distributed online — and that is at the root of the current dust-up.
One R-rated trailer for the movie, about a teenage boy who tries to become a superhero, was released by Lionsgate in late December and has become a Web phenomenon. The trailer primarily focuses on Hit Girl, an 11-year-old sword- and gun-wielding vigilante played by Chloë Moretz (who just turned 13 in real life). Nicolas Cage plays her father, an equally menacing oddball named Big Daddy.
In the trailer Hit Girl salts her conversation with language so graphic that it would make a biker blanch; it’s well beyond the kind of garden-variety profanity that has seeped into mainstream culture. She then shoots a man in the face and uses a whip to kill another.
Lionsgate, which acquired the North American distribution rights to this independently produced film, released another red-band trailer on Friday. This one adds references to masturbation in the boy’s voice and has another cascade of under-age cursing.
In both instances Lionsgate complied with industry rules for red-band trailers. The Motion Picture Association of America, the trade organization that bestows ratings and regulates movie advertising, restricts release of these ads to sites that require viewers to pass an age-verification test, in which viewers 17 and older have to match their names, birthdays and ZIP codes against public records on file.
The problem is that the raunchy trailers pop up on sites without age restrictions almost instantaneously. Fans copy them to their own blogs and Facebook profiles and post them outside of YouTube’s so-called age gates. All movie trailers go viral, but the red-band ones speed across the Internet with an added velocity because of their “can you believe what they just said” nature.
“Studios hide behind the notion of an age requirement for these trailers, but it’s pure fiction,” said Nell Minow, a lawyer who reviews films for radio stations and Beliefnet.com under the name Movie Mom. “It’s easy for kids to access, and that’s exactly how the industry wants it.”
Moreover, the severity of age policing varies, with some sites — including the Trailer Park section of MySpace, which had the red-band version as of Tuesday — seemingly leaving it to the honor system and asking for only an easily lied-about birth date. (A MySpace spokeswoman, Tracy Akelrud, said the site used other controls to detect under-age users. “If you are under 17, you will be blocked,” she said.)
The global nature of the Internet poses another challenge: foreign Web sites, which do not fall under control of the motion picture association, are easily reached through Google.
Red-band trailers had such a bad reputation in some studio circles that as recently as 2007, Warner Brothers wouldn’t even do them, saying it cost too much to make trailers for such a niche audience. But at the moment, one of the hottest trailers on the Web is a red-band variety for Warner’s “Cop-Out,” featuring a cursing 10-year-old. The Hollywood Reporter wrote about its “all new potty-mouthed flavor!”
Ms. Minow, who is also a shareholder activist and the daughter of Newton N. Minow, a former chairman of the Federal Communications Commission, has been stewing about red-band trailers for years, but the particularly graphic ones for “Kick-Ass” have brought her to a boil. She said she had lodged multiple complaints with the motion picture association in recent weeks. Other family advocacy groups — including one as far-flung as Australia — are rallying around her.
“These particular trailers are even worse than normal because they depict a child and so are more interesting to children,” Ms. Minow said. She is also upset that the movie showcases a child engaging in such behavior in the first place, adding, “Isn’t there a limit to what we can ask children to do on screen?” (Similar questions were raised in 1976 when a 13-year-old Jodie Foster played a teenage prostitute in “Taxi Driver.”)
The film at the center of the new controversy, directed by Matthew Vaughn, with a budget of around $35 million and set for release in the United States on April 16, is based on the popular — and equally violent — comic book series of the same title by Mark Millar. Mr. Vaughn’s company, Marv Films, and Plan B Entertainment, a company owned by Brad Pitt, financed the movie. Advance interest in the film is enormous, according to pre-release tracking surveys, and Hollywood widely expects it to be a hit.
The motion picture organization acknowledges the problem of “bleed” — the term the industry uses for marketing materials that spread beyond their specific target audience — but bristles at the notion that it could do more to protect children from inappropriate movie advertising.
“We devote enormous resources to making certain that kids don’t encounter these trailers,” said Marilyn Gordon, the organization’s senior vice president for advertising. “That said, we can’t scrub the entire Internet.”
She said the association proactively searched for sites that provide unrestricted access to red-band trailers and, working with studios, demanded their removal. Since the Hit Girl trailer was released in December, Ms. Gordon said the organization had found 86 sites providing unrestricted access. As of Monday, all but a few had removed the video. One of the remaining was out of the organization’s jurisdiction: a fan site in Eastern Europe.
Lionsgate, which gained notoriety as the studio behind the violent “Saw” franchise, in many ways prides itself on button-pushing marketing. But with this film, studio executives say they are simply using red-band trailers to educate moviegoers about exactly what awaits. Because of Motion Picture Association of America restrictions, the “green band” trailer approved for broad audiences features little swearing or sex references and depicts comparatively little violence.
In a statement the studio said, “It’s really important for people to know what kind of movie this is so they can make an appropriate decision about whether or not they want to see it.”
Wal-Mart Buying Vudu Movie Service
Sure, you took the plunge and bought that expensive high-definition television. But does it connect to the Internet?
Analysts estimate that fewer than 5 percent of the HDTVs sold in the United States last year can go online to pull in movies and television shows, bypassing traditional cable and satellite TV service. Now, however, the idea of an Internet-ready home entertainment setup has a powerful new backer: Wal-Mart.
The retail giant said on Monday that it had agreed to buy Vudu, a Silicon Valley start-up whose three-year-old online movie service is being built into an increasing number of televisions and Blu-ray players.
Terms of the acquisition were not disclosed, but a person briefed on the deal said the price for the company, which raised $60 million in capital, was over $100 million. Other companies, including Best Buy, Amazon.com, Comcast and the satellite company EchoStar, had also expressed interest in acquiring Vudu, according to this person, who asked for anonymity because the terms of the deal were private.
The two companies began informing Hollywood studios and television manufacturers of the deal on Monday, and Wal-Mart said it was expected to close within a few weeks.
The acquisition adds a forceful player to what is already a crowded field of companies aiming to deliver streamed entertainment to the living room.
Microsoft, Sony, Amazon, Netflix, Blockbuster and Roxio CinemaNow, a division of Sonic Solutions, all offer online movie stores for Internet-connected devices like HDTVs, Blu-ray players or video game consoles.
Apple sells movies and TV shows alongside music in its iTunes store. But iTunes is accessible only from computers and Apple’s own mobile devices, as well as on televisions through the Apple TV set-top box, which has not sold well and which the company has referred to as a “hobby.”
“It’s getting increasingly cheap to put Internet connections into televisions, and there are definitely financial opportunities to doing it,” said Riddhi Patel, an analyst at the research firm iSuppli, which estimates that over 60 percent of high-definition televisions will connect to the Internet by 2013.
This shift could shake up the television business, analysts say. Consumers will have more reasons to watch entertainment from sources other than their cable or satellite company, potentially motivating a greater fraction of them to cancel those monthly subscriptions.
Movie stores like Vudu’s also compete directly with the video-on-demand services of the cable companies, and generally have better selection, more high-definition content, friendlier menus and fuller descriptions of the programs.
More immediately, if Wal-Mart puts its marketing power behind the Vudu service, it could give a lift to sales of Internet-ready televisions and disc players, which generally cost a few hundred dollars more than devices without such capabilities.
Wal-Mart stocks fewer such televisions than its rivals Best Buy and Amazon, according to James McQuivey, an analyst at Forrester Research. “At the very least this shows Wal-Mart understands that has to change, because the DVD is eventually going away,” Mr. McQuivey said. “They are making a bet on connected devices.”
Wal-Mart has so far lacked a way to deliver movies digitally to people’s homes — but it hasn’t been for lack of trying. In 2007, Wal-Mart started a movie and TV show download service with the help of Hewlett-Packard. But customers never embraced it, and Wal-Mart shuttered the site the following year after H.P. closed the division that was providing the technology.
Wal-Mart introduced a digital music download store in 2004, but the effort has badly lagged behind iTunes and even Amazon’s MP3 store.
Vudu, based in Santa Clara, Calif., and backed by the Silicon Valley venture capital firms Benchmark Capital and Greylock Partners, has not turned a profit. It first emerged in 2007 pushing a sleek black set-top box, which people connected to their TVs to gain access to thousands of Hollywood films.
But like other Silicon Valley companies including TiVo and Roku, Vudu found it a challenge to persuade mainstream consumers to connect yet another box to their already cable-snaked televisions.
In 2008, Vudu’s chief executive left the company and was replaced by Alain Rossmann, a co-founder who was an early Apple executive and a pioneer in making the Web accessible from cellphones. Last year, Vudu stopped making hardware and instead began offering its movie store and simple interactive service as a feature that the largest consumer electronics manufacturers could build into their devices.
That effort has gained visible traction over the last few months. At the International Consumer Electronics Show in January, Vudu announced deals to put its service into devices made by Samsung, Sanyo, Sharp and Toshiba and said it was expanding its older relationships with LG Electronics, Vizio and Mitsubishi.
Panasonic and Sony are the only major manufacturers that have not yet added the Vudu service to their devices. With Wal-Mart, one of their biggest retailers, taking it over, manufacturers will now have another reason to include Vudu.
Vudu competitors like Netflix, of course, are cutting similar deals with manufacturers, who are happy to build multiple services into their devices and make them more versatile.
Vudu has sought to distinguish itself from its rivals by bragging about its large catalog of high-definition movies, its simple user interface and its integration of other Internet services like Facebook, Twitter, Flickr and Pandora.
The Vudu deal could allow Wal-Mart to one day sell a variety of other merchandise through people’s televisions via the Vudu service. One person who has been briefed on Wal-Mart’s thinking said that the retailer would keep the Vudu brand.
But the retailer will make one change. Vudu also has a plentiful selection of pornographic movies available to its customers. A person briefed on the Wal-Mart deal said the retailer would close down that category “immediately.”
The Sad History and (maybe) Bright Future of TiVo
For a company that hasn’t announced a new hardware platform in years, TiVo seems to be all abuzz in recent weeks.
It’s been a long time since TiVo released a major new hardware product – about 3 1/2 years since the last major DVR release, the high definition Series3. Sure, they also released the TiVo HD and HD XL, but those were just variations on the Series3 with no significant new features.
Investors need something to cheer about. For pretty much the last three years, TiVo has been losing subscribers every quarter.
Fans like me were waiting for something new. I speculated on (or rather dreamed about) what might be coming prior to the start of CES. But TiVo disappointed us and announced nothing new at the big show.
But now TiVo looks like it's waking up from its hibernation and is ready to do something. Oh, but what? …
The Big Announcement
For starters, TiVo has scheduled an announcement – a big announcement to be held at nothing less than the top of the Empire State Building. (I wonder if they’ll have a statue of King Kong holding a TiVo doll in his hand.)
And they do need to announce something big. TiVo's subscriber numbers are down – way down. In the last reported quarter (Q3 2009) alone, TiVo lost about 10% of its subscribers, and it is down nearly 40% from its January 2007 peak. That puts TiVo at about 2.7 million subscribers at the end of last October, versus roughly 4.4 million at the peak.
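The subscriber decline can be sanity-checked with a couple of lines of arithmetic. The figures below are the approximate numbers reported in this article (peak of about 4.4 million in January 2007, about 2.7 million at the end of October 2009), not official TiVo financials:

```python
# Rough check of the subscriber figures cited above.
peak_subs = 4.4e6      # approximate peak, January 2007 (per the article)
current_subs = 2.7e6   # approximate total, end of October 2009 (per the article)

decline_from_peak = (peak_subs - current_subs) / peak_subs
print(f"Decline from peak: {decline_from_peak:.0%}")  # roughly 39%, i.e. "nearly 40%"
```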
TiVo the icon
This is the story of an iconic company that's losing its icon status. Very few companies have had their company or product name turned into a verb:
I Xeroxed the memo. (Even though I used a Canon)
I Googled a review of that movie and it doesn’t look interesting.
I TiVo'd Heroes (even though one might have used the cable company's crappy DVR)
But “to TiVo” is seemingly losing favor. And people are actually starting to use the generic “to DVR”. If you Google for these terms and look at the number of pages matched, you’ll see. “I DVR’ed Heroes,” just doesn’t sound right – it’s that extra syllable.
False starts and missteps
When you look at that graph of subscription gains/declines, it makes you wonder just what TiVo has been doing for the last three years. Even prior to that, how could a brand be so well recognized, yet so unpopular? Even at 4.4 million, that's less than 4% of US TV households. And it's been almost 11 years since the first TiVo went on sale.
A lot of fighting against the Man
When I think about it myself, I wanted a TiVo ever since I first heard about it. In its early days, cable TV was virtually all analog, but anyone who subscribed to premium channels or upper-tier packages was forced to use a descrambler box available only from the cable company. So for TiVo to record a program on a scrambled channel, it had to somehow control the cable box – customers had to deal with two boxes to watch TV. To gain that control, TiVo users had to install a MacGyver-type contraption to allow the TiVo box to send infrared remote control signals through a series of mirrors to the cable set-top box. That intimidated a lot of potential customers.
The next problem was that using a cable box might nullify one of TiVo's best features – the ability to watch a live program while recording another. If the channels you wanted to watch and record were both scrambled, you couldn't do it, since the typical cable box could only descramble one channel at a time. And my cable company at the time scrambled everything except the over-the-air channels. So I, and many others, waited.
Then, a little into the new century, digital cable started to become popular with the promise of a myriad of new channels to choose from. Initially, this only made matters worse for TiVo. Many of those who might have not subscribed to premium channels now found themselves enticed by some of the new digital-only channels: DIY Network, Fine Living, Home and Garden Television, BBC America. Those not willing to give up dual tuner functionality went elsewhere.
Even more trouble for TiVo: high definition over-the-air digital broadcasting was also charging forward. Missing the boat on HD was entirely TiVo’s fault (at least when talking about over-the-air HD). HD broadcasts were set to begin on a wide scale in 2002. Cable companies weren’t quite ready for the HD launch, but neither was TiVo. The availability of HD (which is inherently digital) helped push cable companies to move further into the digital space.
The inability to tune over-the-air high definition and scrambled cable (digital or analog) became major un-selling points for TiVo – especially among the most profitable videophile market, as well as much of non-technical America, who don't want to mess around with remote-control hacks and the like.
For both crowds, it would not be until the fall of 2006 that an answer came from TiVo.
In order to address digital cable, TiVo would have to wait for the cable industry to comply with a Congressional mandate to open up to third-party set-top-box makers. The idea, as mandated by Congress, was well intentioned, but the follow-through by the FCC was disastrous. The mandate came in 1996, but it wasn't until 2005 that it became a reality. The outcome was a clunky, feature-lacking system called CableCard. At first it was welcomed with open arms by consumer electronics manufacturers; many TVs from Sony, Panasonic, Samsung, and other companies were rolled out with slots for CableCards.
In theory, CableCard allowed any company to make a device (televisions, recorders, simple tuners, etc.) which would absorb the functionality of a traditional cable TV set-top box – the key feature being the ability to descramble scrambled content. The cable company would provide a CableCard to customers for low or no cost, which would make the CableCard device compatible with the cable company's decryption systems. So, without the need to rent a set-top box from the cable company, one could use a device bought online or at a local electronics store that could descramble channels itself.
In the case of TiVo, CableCard meant that the TiVo box could finally record one scrambled channel while letting you watch another – or you can just record two programs at once while you watch a totally different, pre-recorded program.
So while CableCard solved some problems, there were still a few other problems for TiVo and other CableCard implementers:
1. CableCards still had to be rented from the cable company, so the hoped-for savings from not renting a box never materialized. (In my case, Cox Communications charges me $2 per card.)
2. CableCards had to be professionally installed – not DIY. So customers would still have to pay an installation fee on top of the monthly fee – no different from the old-fashioned set-top-box environment. (Update 2010-2-25 7:30PM PST: Commenter Max Williams (see comments below) indicates that some cable operators in some regions do allow a self-install of the CableCard, at no rental cost.)
3. The initial CableCards only let devices decode one channel at a time. For TiVo, that meant that to record one show while watching another live show, the box had to have two CableCard slots – and customers would have to rent two CableCards. (In my case, that's $4 a month.)
4. CableCard was not compatible with the interactive features of digital cable systems – anything that required sending a signal to the cable company and awaiting a response: namely Video On Demand/Pay-per-view.
Add it up and consumers are largely not interested in CableCard – maybe with the exception of the videophile market. But the cost and complexity are too much for most people.
To this day, many of these problems remain in the cable TV market. A couple of things have changed. For one, a new iteration of the CableCard standard came out allowing for decoding of multiple channels with a single CableCard. This was called CableCard 2.0, aka the M-Card. But M-Cards didn't eliminate the need for a CableCard rental or professional installation. And they still didn't provide compatibility with Video On Demand and other two-way services.
The big rise in the chart was due to the existence of the so-called “DirecTiVo” boxes – TiVo devices which were compatible with the DirecTV satellite TV system. The vast majority of the rented subscriptions were DirecTV customers. As opposed to retail cable TV TiVo users, DirecTiVo users had very few problems with their TiVos.
But sales went down when DirecTV stopped working with TiVo in 2004/2005 and delivered its own DVR made by NDS (At the time Rupert Murdoch’s News Corp had gained controlling ownership over both DirecTV and NDS). On top of that, DirecTV was in the middle of launching new satellites to expand their lineup of HD and local market channels. The new satellites began using a new compression system (MPEG4) for the broadcast of signals. That meant that happy TiVo users would not be able to access and record the new channels unless they switched to a new NDS DVR.
This was horrible news for TiVo. At the peak of subscriptions, roughly 2/3 of TiVo’s subscribers were DirecTiVo users.
You can also see that retail TiVo box subscriptions for cable TV users were still climbing after the loss of new DirecTV sales. But, with the problems we just mentioned regarding TiVo and cable, sales and subscriptions weren't climbing fast enough to make up for the sharp loss of DirecTV users. Now, even cable TV subscriptions are falling.
Yet more future solutions
Naturally, the remaining problems with cable have sparked complaints from both consumers and consumer electronics manufacturers, who claim that the cable industry intentionally designed CableCard to be a flawed system to protect its set-top-box rental revenue, as well as its control over the content consumers could access (like Internet-delivered content). The FCC called for some changes. CableLabs, the collaborative technology development arm of the cable television industry, attempted to allay complaints with two still-controversial proposals.
The first solution is called tru2way. tru2way would give CableCard-using devices access to two-way services. But, yes, it still requires the use of a CableCard – so consumers still face the two unpalatable problems of an installation appointment and the monthly rental fee. Cost and installation time aside, though, it claims to bring tru2way-compatible devices up to functional parity, for the first time ever, with cable-company-rented equipment.
But tru2way is not without its complaints. First, the biggest cable operators in the US pledged to have their networks tru2way compliant by July 1 of last year. As of today, there still aren't any tru2way-ready cable systems (aside from some small test markets). In TiVo's financial statement issued last November, TiVo said that it was working with Comcast on tru2way implementation but was still a year away (maybe 9 months now?).
Additionally, many electronics makers, TiVo included, have complained that tru2way cedes too much control over the user interface to the cable companies. Instead of being a set of standards defining a request/response protocol between a tru2way box and the cable system, tru2way specifies that two-way services will be activated by having the device download a Java-based software application, which takes control of the screen and interacts with the user. For die-hard TiVo fans, the risk of giving up a beloved user interface for one created by the not-so-beloved cable company is extremely nerve-wracking.
The biggest problem with tru2way seems to be that it’s still largely vaporware. Only Comcast seems to have a working tru2way cable system – and only in about three cities nationwide.
There’s yet one more future technology in the air. A second proposed solution from CableLabs is called Downloadable Conditional Access System (DCAS). DCAS promises to clear up all of the problems of CableCard. Like tru2way, it would provide a DCAS compliant device with two way capabilities. But, unlike tru2way, it will eliminate the need to rent a card (or anything) from the cable company. Sounds great, but the proposal was first made to the FCC in 2005. The cable industry’s lobbying group, the National Cable & Telecommunications Association (NCTA) promised nationwide support by July 2008. Today it’s nowhere in sight and is presumably dead.
Oh yeah. One more problem: SDV
The latest wrinkle giving heartburn to TiVo, Moxi, and anyone else that still has a stomach for the third-party set-top-box market is called Switched Digital Video (SDV). The concept here is that in order to cram even more channels into the cable lineup, a new technique is needed to avoid running out of frequency space. On traditional cable systems, all channels (scrambled or not) are sent down the cable to every home concurrently. But now, with all of those pay-per-view channels using up space, there is no more room for new channels. So, to add channels, digital cable set-top boxes can use Switched Digital Video. Most channels will still be sent down the pipe as usual, but additional channels may be moved into an SDV tier, which will only be sent if a digital set-top box requests it. As a result, cable companies can theoretically offer an unlimited number of channels. This is a competitive move against satellite providers DirecTV and Dish Network, which have typically offered many more channels than cable – and more coveted high-definition channels too.
But now the problem is that third party boxes like TiVo aren’t compatible with SDV. Some cable companies are offering to provide TiVo users with the installation and rental of what’s called an SDV tuning adapter which would share the incoming cable with TiVo and plug into TiVo’s USB port. To tune an SDV channel, TiVo will make that request via USB to the SDV adapter. The SDV adapter, in turn, would then signal the cable system to send down the selected channel over the cable input to the TiVo (or other STB). But, this adds the extra complexity of an extra box supplied by the cable company – which is something that many folks hoped to avoid by going with TiVo in the first place.
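The SDV mechanism described above can be modeled as a toy sketch. This is a hypothetical simplification, not any real cable system's protocol: a headend with a fixed pool of frequency slots that transmits an SDV channel only while at least one box in the service group has requested it, which is what frees up bandwidth compared with broadcasting everything at once.

```python
# Toy model of switched digital video (SDV). All names and numbers are
# illustrative assumptions, not taken from any real cable system.

class SdvHeadend:
    def __init__(self, slots):
        self.slots = slots   # frequency slots available for SDV channels
        self.active = {}     # channel name -> number of boxes tuned to it

    def tune(self, channel):
        """Called when a tuning adapter relays a box's channel request."""
        if channel in self.active:
            self.active[channel] += 1   # already streaming; just count the viewer
            return True
        if len(self.active) < self.slots:
            self.active[channel] = 1    # allocate a slot and start streaming
            return True
        return False                    # out of slots; request denied

    def release(self, channel):
        """Called when a box tunes away; free the slot if nobody is left watching."""
        if channel in self.active:
            self.active[channel] -= 1
            if self.active[channel] == 0:
                del self.active[channel]

headend = SdvHeadend(slots=2)
print(headend.tune("BBC America HD"))   # True: slot allocated
print(headend.tune("DIY Network HD"))   # True: second slot allocated
print(headend.tune("Sundance HD"))      # False: no slots left
headend.release("BBC America HD")
print(headend.tune("Sundance HD"))      # True: freed slot is reused
```

The key property the sketch shows is that only channels someone is actually watching consume spectrum, which is why SDV lets operators advertise far more channels than they have slots for.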
Lots of Lemons. Now, how does TiVo make some Lemonade?
Turning TiVo around is going to take some hard work. There have been some positive pseudo-announcements coming from TiVo over the past couple of years, but no new major products yet.
The most interesting news, for disappointed DirecTV users, is that DirecTV and TiVo are working together again to deliver a new TiVo box for the satellite service. But that was announced in 2008. The claim is that the new DirecTiVo will finally see the light of day in the first half of this year – perhaps as soon as March 2.
Way back in 2006, TiVo announced similar partnerships to develop proprietary DVRs for the Comcast and Cox cable systems. In the case of Comcast and Cox, TiVo is working to make their software (parts of it at least) available for download onto certain Motorola manufactured DVRs – not terribly exciting.
More recently, TiVo announced a similar joint development effort with RCN. This seems to be a lot more interesting as this is both a hardware and software project. TiVo will be promoted and sold by RCN as the premier DVR for the RCN system. Additionally, TiVo, RCN, and a third party technology company, SeaChange, worked to engineer both two-way capabilities (for Video on Demand and other interactive services) as well as SDV support without using tru2way or an SDV tuning adapter.
What about TiVo’s retail offerings?
This could be where things get very interesting. The wording on TiVo's invitation for their March 2 announcement ("Inventing the DVR was just a warmup.") makes you think they have something fun planned. That wording makes for an incredibly bold claim – and sets them up for high expectations. A simple downloadable version of TiVo for Cox and Comcast doesn't sound that exciting. Even the RCN endeavor (while great news for RCN users) doesn't sound inventive enough to make TiVo's founding invention look like a "warmup."
But two-way and switched digital video are probably way off the table – otherwise TiVo wouldn’t have just filed a complaint about the cable industry to the FCC last week (PDF of complaint here).
Of course, there’s still plenty of room for innovation in the retail market as I posted numerous potential new features back in early January. And I’m sure TiVo’s engineers have been dreaming up other ideas over the last few years.
Then there's always the accidentally leaked TiVo Premier product – which doesn't look like it would add anything new. But if introduced in conjunction with a radically new retail product, then we might be looking at something good. The TiVo Premier could play the role of an inexpensive entry-level product, with the new technology product at the high end – at a high-end price.
One of the past complaints about TiVo is its price. The Premier, while sporting largely no improvements over the existing HD product, could be enticing to many new TiVo customers if it came in at a lower price. Being a new design, the Premier could take advantage of new, lower-cost components that match the power of the components in the older HD, in order to come in at a new, lower price point. This would be especially useful if they finally pull the plug on the current low-end $149, standard-definition-only Series2 box. With HDTVs dominating the store shelves – even on the low end – TiVo is going to need a low-end HD DVR if it wants to play better in the low end of the market.
Advertising and Data Services
Another area for potential growth is their advertising and data services. TiVo displays links to ads on the TiVo main menu, when you hit the pause button in the middle of a program, and in other spots. Regular TV commercials can also get TiVo-specific services that work in conjunction with the actual airing of the commercial – even if you're fast-forwarding through it. For instance, while fast-forwarding through a commercial, TiVo will sound a chime and display a superimposed message asking you to press the "Thumbs Up" button on the remote control for more information. TV networks can also use the service to enable the Thumbs Up button to instantly schedule a recording of the show being advertised in the commercial.
The data service is quite fascinating. It’s like the old Nielsen TV ratings system on steroids. Being a pretty smart computer, a TiVo box can track viewing habits down to a resolution of only one second. The data can be sold to advertisers who want to know things like: what commercials do people watch versus skip or fast forward over, are certain parts of a program watched more than once, or in segment based shows (like Saturday Night Live) do people only watch certain parts of the show (maybe the “Weekend Update” segment of SNL is the most popular and might have a higher value for commercials than the opening monologue).
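As a rough illustration of the kind of analysis this second-by-second data enables, here is a hypothetical Python sketch. The events are fabricated sample data, not real TiVo logs; it finds the most-watched minute of a program, the way an advertiser might look for the "Weekend Update" effect described above:

```python
# Hypothetical sketch: per-second viewing events -> most-watched minute.
from collections import Counter

# (box_id, second_offset_into_program) events -- fabricated sample data.
events = (
    [("box1", s) for s in range(0, 120)]      # box1 watches the first 2 minutes
    + [("box2", s) for s in range(60, 180)]   # box2 watches minutes 2-3
    + [("box2", s) for s in range(60, 120)]   # box2 rewatches minute 2
)

per_second = Counter(sec for _, sec in events)
per_minute = Counter(sec // 60 for _, sec in events)

peak_sec, peak_passes = per_second.most_common(1)[0]   # some second in minute 1
top_minute, views = per_minute.most_common(1)[0]
print(f"Most-watched minute: {top_minute} ({views} viewing-seconds)")
```

Aggregates like `per_minute` are what a segment-level popularity report would be built from; the one-second `per_second` resolution is what distinguishes this from a traditional whole-program rating.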
But for advertising and data services to really make money, TiVo needs more subscribers. I frequently hear people complain about the monthly fee as being a reason not to get a TiVo. But if the price of TiVo monthly service can be brought down, TiVo might be able to attract more subscribers and boost the value of the data and advertising businesses.
There are a couple of independent ways to achieve this. It's key to note that TiVo probably doesn't make much money from the retail sale of the box – the service revenues are needed to bring TiVo over the top. If advertising and data sales can be used to better monetize the service, then that revenue can be used to bring down the monthly service fee – which will in turn bring in more customers. One recent monetization deal was inked with Google: Google, which sells traditional television commercial air time through its AdWords service, now has access to TiVo's usage data – which helps ad buyers make more informed decisions before bidding on advertising slots.
Secondly, if TiVo is better able to drive down the manufacturing cost of the presumed TiVo Premier, then it can be more profitable at the time of sale. With a more profitable box, there’s less of a need to rely on monthly service fees as a tool to recoup a loss on the box. So, that gives TiVo a second way to bring the monthly service down – and bring down the cost of the set-it-and-forget-it lifetime subscription fee. They should even be able to sell a moderately priced retail product that simply includes lifetime service (taking a page from the ARRIS Group’s Moxi DVR playbook).
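The subsidy logic above can be sketched with back-of-the-envelope numbers. Every dollar figure here is a hypothetical assumption for illustration, not TiVo's actual cost structure: the cheaper the box is to make, the smaller the loss to recoup, and the less the monthly fee has to carry.

```python
# Back-of-the-envelope subsidy math. All figures are hypothetical
# assumptions, not TiVo's actual costs or fees.
box_loss = 100.0        # assumed loss taken on each retail box sold
monthly_fee = 12.95     # assumed monthly service fee
monthly_cost = 2.00     # assumed cost to serve one subscriber per month

margin_per_month = monthly_fee - monthly_cost
months_to_recoup = box_loss / margin_per_month
print(f"Months of service needed to recoup the box loss: {months_to_recoup:.1f}")
```

Under these made-up numbers, it takes roughly nine months of service revenue just to get back to even on the hardware; halve the box loss and the fee has that much more room to come down.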
When TiVo was younger, most of their customers used the TiVo device's built-in modem to dial in (yuck) to the TiVo service nightly to get program guide updates and such. As broadband Internet connectivity has gotten more popular in the decade since TiVo's creation, dial-in service is less important. Dial-in modem banks are also quite laborious and expensive to maintain. I would hope that TiVo's newest products completely abandon support for dial-in data access. Since I would suspect that most of TiVo's recently acquired customers use their broadband Internet connection to access the TiVo service, this should be an easy feature to drop (otherwise, how did Netflix Watch Now, Roku, Boxee, and Popcorn Hour all get so popular?). Any infrastructure TiVo was maintaining to provide the dial-in data service can be eliminated to save money.
When the service can be purely driven over the Internet, the marginal cost of providing the service goes down to near zero. But the value for advertising and data goes up.
The Road Ahead
Going forward, TiVo definitely has some potential bright spots in the future. The biggest near-term revenue contributor will likely be the new DirecTiVo box. While presenting TiVo with a much smaller market of potential customers, TiVo's box for RCN looks like it'll be a winner. As for Cox and Comcast, while they are both huge operators, a software-only download for their existing DVR rentals doesn't sound all that promising – especially since, in Comcast's case, the operator seems to be giving TiVo service away for free to its Comcast DVR users. At a price of nothing, how would TiVo make money from that? Maybe TiVo is getting a cut of the $9.95 monthly fee for the Comcast DVR rental. Or TiVo is hoping that data and advertising make the endeavor profitable – which is not entirely a bad idea if it attracts millions of new service users.
As far as retail goes, that seems to be the most closely guarded part of TiVo’s plans. But it sounds like we can expect a new high end product and maybe also see the Premier unveiled as the new low end product at the March 2 event. And if TiVo can lower their monthly service fees they can achieve a critical mass to make their advertising and data services more popular.
TiVo’s future still has some bumps in the road to overcome, but with the right moves it should eventually be pretty smooth.
Cablevision to Roll Out Network-DVR in April
Cablevision Systems Corp said it will roll out its controversial remote storage digital video recorder in April, doing away with the need to buy and install DVR boxes in subscribers' homes.
The RS-DVR technology enables subscribers to store TV programs on the cable operator's computer servers and then play them back at will. When plans for the network-based DVR were first announced in 2006, several major program owners sued the cable operator claiming it was illegal.
Cablevision won the case on appeal and last June the U.S. Supreme Court rejected a counter appeal by the film studios and television networks, opening the way for the DVR to be launched.
"By year-end we intend to cease buying physical DVRs as we begin deploying our network-based DVR solution throughout our footprint," Cablevision Chief Operating Officer Tom Rutledge said Thursday on a conference call with Wall Street analysts.
Investors believe such systems could save cable companies significant amounts of money on buying DVR boxes, as well as the cost of sending employees out to install the boxes.
Other cable companies including Comcast Corp and Time Warner Cable Inc have said they would launch similar systems over time, once it became clear the RS-DVRs were legal.
Cablevision has made a big drive to provide innovative services to compete with the advanced digital TV and Internet features offered by Verizon Communications Inc in its area.
Earlier in the week, Cablevision announced its plans to trial a PC-to-TV relay technology which would enable a subscriber to watch online videos and family photos on their own personal TV channel.
The company has also launched Wi-Fi Internet access in the local Cablevision area to enable its Internet subscribers to use wireless devices outside of the home. It also plans to install Wi-Fi inside commuter rail cars this year once it gets approval from transit authorities.
Rutledge told analysts the company is testing phones that switch from Wi-Fi to cellular and back as the user moves in and out of a Wi-Fi zone.
"The test is so far proving to be good and consistent with our view of what is possible and gives us some hope that we will be able to launch additional products using the Wi-Fi network that will look like what some people think of as cellular telephone," said Rutledge.
(Reporting by Yinka Adegoke; Editing by Tim Dobbyn)
Networks Wary of Apple’s Push to Cut Show Prices
If Apple cut the price of each TV episode in half — to 99 cents, from $1.99 — would sales on iTunes increase enough to offset the price drop?
Experiments are under way to find out, and the head of the nation’s No. 1 television network, CBS, indicated last week that some shows, at least, would be priced under a dollar in the future.
Apple wants to ignite TV show sales, especially as it prepares to introduce the iPad tablet computer next month. But its proposals to lower prices across the board are being met by skepticism from the major networks.
Television production is expensive, and the networks are wary of selling shows for less. They are equally wary of harming their far more lucrative deals with affiliates and cable distributors, who may feel threatened by online storefronts like Apple’s and those operated by Amazon, Microsoft and Sony.
But the networks do not want to ignore the 125 million customers with credit cards who have iTunes accounts, either. “We’re willing to try anything, but the key word is ‘try,’ ” said a TV network executive who requested anonymity because his company had declined to comment publicly on talks with Apple.
With the iTunes pricing debate, the television industry is facing the same question that music labels and publishers are: just how much is our content worth in a digital world?
It is especially complicated for TV, given that most people already pay for TV through their cable or satellite service — and they can watch most network shows free on streaming sites like Hulu, albeit with advertisements.
The notion of selling individual TV episodes straight to the consumer is still a relatively new one. Apple added video to its music store in late 2005 and sold episodes of the ABC shows “Lost” and “Desperate Housewives” without ads for $1.99, twice the price of a single song at the time.
The store soon expanded to include virtually all of the major TV providers, with Apple insisting on a uniform $1.99 price for episodes. NBC Universal bristled and removed its shows at the end of 2007 but returned nine months later, after Apple slightly relaxed its pricing structure.
For the most part, though, standard-definition episodes from NBC and other networks remain $1.99, and high-definition episodes are $2.99, whether it is a brand-new hourlong drama or a five-year-old half-hour sitcom. Movie sales and rentals are more flexible.
But iTunes remains, predominantly, a music store. Consumers have downloaded nearly 10 billion songs and about 375 million TV episodes.
Analysts say the TV revenue from iTunes has been marginal for producers and distributors.
“It’s still a niche portion of the marketplace,” said Doug Mitchelson, a television analyst for Deutsche Bank Securities, who characterized iTunes and competing digital stores as an extension of DVD sales of TV shows.
Separately, Apple has proposed to some networks that the store sell a subscription package of popular TV shows. At a price some reports have set at $30 a month, the subscription service would be a direct threat to entrenched cable and satellite providers. Apple has encountered trepidation from some networks, but the proposal is not off the table, according to executives at two of the networks.
What would make the iTunes sales more significant? That is where pricing science comes in. In conversations with networks, Apple representatives have cited 99 cents as the magic price point that brought digital music sales into the mainstream. The company says the same price could propel TV sales, according to the network executives. But the networks have little data about what effect 99-cent sales would have, making them more apprehensive about a change.
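The arithmetic the networks are weighing here is simple, and it explains the skepticism: cutting the price roughly in half means unit sales must roughly double just to keep revenue flat (ignoring Apple's cut and any per-unit costs, which only raise the bar further).

```python
# Break-even arithmetic for the proposed iTunes TV price cut.
old_price, new_price = 1.99, 0.99

required_sales_multiple = old_price / new_price
print(f"Sales must grow {required_sales_multiple:.2f}x to match revenue")  # 2.01x
```

Since the networks have little data suggesting a 99-cent episode would actually sell twice as many copies, the price cut is a gamble on elasticity they can't yet measure.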
“If you took five things at Wal-Mart and sold them for a nickel, they’d sell really well, because they’d stand out. But if you took everything in the store and made it a nickel, nothing stands out anymore. Essentially all you’ve done is lowered the value of your content,” said a senior executive at a TV network owner.
Most other media companies declined to comment on TV pricing at iTunes last week, as did a spokesman for Apple.
Pricing is coming up now in part because Apple is keen — some TV executives privately say desperate — to line up content for the iPad, the tablet computer to be available in March.
Mr. Mitchelson said prices under a dollar were “very appealing to the consumer,” and he said the key for Apple was, “Can you draw in this huge swath of folks who aren’t using iTunes at all to purchase TV shows?”
The networks appear willing to try lower prices in a limited way. The Financial Times reported this month that some networks had agreed to a test of lower prices, but it did not name any.
Asked about iTunes on an earnings call with analysts on Thursday, Leslie Moonves, the chief executive of the CBS Corporation, said that “certain shows” would be sold for 99 cents, but “I don’t know yet which will be.” People in the industry doubt the discounts would apply to the newest episodes of marquee shows like “NCIS” or “Two and a Half Men.” A CBS spokesman said Friday that no new deal with Apple was imminent.
Among the few 99-cent TV shows on iTunes last week were NBC recaps of Olympic events and episodes of the PBS Kids shows “Arthur,” “Martha Speaks” and “WordGirl.”
The PBS pricing is temporary. Andrew Russell, a senior vice president of the service, called it a three-week experiment to “generate buzz.”
“At this point we still feel the $1.99 price point is right for us, our audiences and our producers to help support creation of more outstanding kids’ content,” Mr. Russell said by e-mail. “But we’ll remain flexible in this fast-changing environment.”
Water-Cooler Effect: Internet Can Be TV’s Friend
Remember when the Internet was supposed to kill off television?
That hasn’t been the case lately, judging by the record television ratings for big-ticket events. The Vancouver Olympics are shaping up to be the most-watched foreign Winter Games since 1994. This year’s Super Bowl was the most-watched program in United States history, beating out the final episode of “M*A*S*H” in 1983. Awards shows like the Grammys are attracting their biggest audiences in years.
Many television executives are crediting the Internet, in part, for the revival.
Blogs and social Web sites like Facebook and Twitter enable an online water-cooler conversation, encouraging people to split their time between the computer screen and the big-screen TV.
The Nielsen Company, which measures television viewership and Web traffic, noticed this month that one in seven people who were watching the Super Bowl and the Olympics opening ceremony were surfing the Web at the same time.
“The Internet is our friend, not our enemy,” said Leslie Moonves, chief executive of the CBS Corporation, which broadcast both the Super Bowl and the Grammy Awards this year. “People want to be attached to each other.”
Seeking to capitalize on the online water-cooler effect, NBC showed the Golden Globes live on both coasts for the first time this year, and the network reportedly wants to do the same for the Emmy Awards this fall, so the entire country can watch (and chat online) simultaneously.
But sometimes the effect works even when the program is not live. Rachel Velonza, a 23-year-old from Seattle, knew that Johnny Weir failed to win a medal in figure skating long before she ever turned on a television last Thursday, but she stayed up until almost midnight, enduring NBC’s much-ridiculed tape delay because she wanted to see for herself why he wound up in sixth place. She knew all her friends were watching because they were talking about it on Twitter (which says it counts 50 million posts every day) and Facebook (which says it surpassed 400 million members this month).
“Even though knowing ahead spoils the program, you just can’t help but see for yourself what all these people are talking about,” she said.
NBC says it thinks the habits of people like Ms. Velonza partly explain why the ratings for the Olympics are up noticeably.
“People want to have something to share,” Alan Wurtzel, the head of research for NBC Universal, said from Vancouver. He said the effects of online conversations were “important for all big event programming, and also, honestly, for all of television going forward.”
If viewers cannot be in the same room, the next best thing is a chat room or something like it.
That’s what MTV found last fall during the Video Music Awards: the Twitterati were in a tizzy when Kanye West snatched a microphone from Taylor Swift in the middle of her acceptance speech. The show had an average of nine million viewers, its best performance in six years.
The Recording Academy, which presents the Grammys, mounted a digital campaign to promote the awards show this year, signing up Facebook fans and monitoring Grammy-related Twitter messages.
Peter Anton, the academy’s vice president for digital media, said it was not a coincidence that the awards show notched a 35 percent gain over last year’s audience totals.
Watching the Olympics, Della Lee, a disabled mother of twins in Springfield, Ore., found herself joking on Twitter about curling with dozens of fellow viewers, and was much more deeply engaged in the broadcast as a result. “I really got into curling yesterday! It’s a fun sport,” she wrote to a friend later.
The effect is obviously not limited to television. Online conversations can also help or hinder opening weekends for movies and the ratings for politicians. Recent studies of online social networks have affirmed what researchers have long recognized: people seek to be around and be influenced by like-minded individuals.
There are other factors contributing to the ratings spikes: attention-grabbing shows (the Super Bowl featured the New Orleans Saints, a popular underdog), gradual population growth and an economic contraction that some analysts say is leading to more people spending more time at home in front of their TV and computer screens.
Along with those reasons, “increased usage of social media is definitely driving the ratings,” said Jon Gibs, a vice president at Nielsen. He said the Olympic data showing simultaneous TV-and-Web viewing signaled the growing importance of interactivity to the television experience.
Some of the marquee Olympic events are tape-delayed this month, even though Olympic results are instantly available on the Web. But people are still watching the Games in prime time.
Brad Peterson, a lighting designer in New York, heard about the skier Lindsey Vonn’s crash before Thursday’s replay of it on NBC, but watched regardless. After all, he said, “I didn’t know when, how and who won.”
For Mr. Wurtzel, the Olympics are a lab, and so far he said he has found that people who follow the Olympics both on TV and online wind up being heavier viewers of television.
Media companies are starting to consider how to incorporate that water-cooler effect — and how to harness it for day-to-day TV shows, too. For the Olympics, NBC is promoting something called “You Be the Judge,” which lets viewers submit their own scores for figure skaters through a Web application and compare them with other viewers' scores. The network’s Web site also features a gadget that tracks Twitter opinions about the Games.
Chloe Sladden, director of media partnerships for Twitter, said sites like Twitter let people feel plugged in to a real-time conversation.
“In the future, I can’t imagine a major event where the audience doesn’t become part of the story itself,” Ms. Sladden said.
Watching the Games? Switch on Your Cellphone
Cellphones and the internet are muscling in on more traditional media as ways to see the Olympic Games, and the trend will only deepen, organizers said on Tuesday.
Timo Lumme, head of TV and marketing for the International Olympic Committee, said non-traditional media had already matched the 20,000 hours of coverage from traditional broadcasters so far at these Games, contributing to a total audience he expects to reach 3.5 billion -- or half the world's population.
"We've had a continuing digital explosion," Lumme told a news conference. "We now have the same amount of hours covered globally on digital media -- internet, mobile -- as we have on the old media broadcasting, and a quarter of that is mobile."
"People are accessing this in different ways during different times. It does mean more is being consumed."
Lumme said organizers were pleased with national broadcasters that include Canada's CTV and U.S. network NBC.
NBC, which paid a record $2.2 billion for U.S. broadcast rights to the Beijing and Vancouver Olympics, has said it will lose money on the winter Games. But Lumme declined to speculate if that meant bids would come in lower next time.
"The IOC has never forced any broadcaster to pay, it's a bidding process," he said. "Whoever wishes to pay the most gets the deal. That's the way it's been and that's the way it will be in the future."
He added: "We are very confident that the Olympic games will retain premium status as a world event and I think it will command a premium price. What that price is will be decided by market forces that are out of our control."
NBC said on Tuesday that half of all Americans had watched at least some of its Olympic coverage.
But NBC's online coverage of the Vancouver Winter Olympics drew just 33 million viewers. Alan Wurtzel, president of research at NBC Universal, said TV was "still king."
"Multiplatform consumption is emerging and going to become extraordinarily important. But the mothership is -- and will remain for a very long time -- television," he said.
(Additional reporting by Paul Thomasch, editing by Ossian Shine)
Don't Let These Olympians Infect Your Computer
Be careful which Olympian you search for online during the 2010 Winter Games -- it might infect your computer with a virus, spyware or other malicious software, according to Internet security company McAfee. McAfee's list ranks the Top 10 most dangerous Olympians by the likelihood, in percentage, that a search for each will infect your computer.
Video of Twitter Phishing: The BZPharma 'LOL this is Funny' Attack
Twitter users are being warned about a widespread phishing attack spreading across the system, designed to steal the usernames and passwords of unsuspecting members. The phishing messages use wording such as:
Lol. this is me??
lol , this is funny.
Lol. this you??
followed by a link whose domain ('example.com' in the pattern) can vary. As we have seen many variations of the URL in its entirety, you would be wise to avoid clicking on any links which refer to bzpharma.net at the very least.
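For defenders, a small message scanner that flags links pointing at the known-bad domain is straightforward. A minimal sketch in Python; the helper name and blocklist structure are our own, with bzpharma.net taken from the warning above:

```python
import re

# Known-bad domains from the attack described above; extend as new
# phishing domains are reported.
BLOCKLIST = {"bzpharma.net"}

# Capture the host part of an http(s) URL, then swallow the rest of the token.
URL_RE = re.compile(r"https?://([^/\s?#]+)\S*", re.IGNORECASE)

def suspicious_links(message):
    """Return any URLs in a message whose host matches a blocklisted domain."""
    hits = []
    for match in URL_RE.finditer(message):
        host = match.group(1).lower()
        # Match the domain itself and any of its subdomains.
        if any(host == bad or host.endswith("." + bad) for bad in BLOCKLIST):
            hits.append(match.group(0))
    return hits
```

A client or bot could run every incoming DM and public tweet through this check before a user ever sees the link.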
Although Twitter has urged users to be vigilant about the threat being distributed via private direct messages, it's clear that dangerous links are also being posted in public feeds. This means that you can stumble across the links even if they aren't sent to you directly, or even if you are not a signed-up Twitter user.
What appears to be happening is that the messages are being spread more widely by third-party services like GroupTweet, which extend the standard Twitter direct message (DM) functionality to allow private messages to be sent to multiple users *and*, optionally, made public.
As a result, as you can see in the video above, we have found Twitter accounts that have warned their followers about the phishing attack, only to subsequently fall victim to it themselves!
Regardless of how you come to click on the dangerous link, if you do enter your username and password on the fake Twitter login page your details will be phished and placed in the hands of hackers.
The page then displays a "fail whale" screen, claiming that Twitter is over capacity, before taking you back to the real Twitter main page. As a result, compromised Twitter users may not realise that their login details have been stolen.
Interestingly, the bzpharma.net site doesn't just appear to have been set up for Twitter phishing. It appears to also have been created for stealing the online identities of the Bebo social networking site too:
If you have been tricked by the phishing attack and accidentally handed over your username and password, change your password immediately.
We're going to see many more attacks against social networks in the future I'm afraid. Last month, Sophos published its Security Threat Report revealing that there had been an astonishing 70% rise in the number of users reporting spam and malware attacks via social networks in the last year.
Sophos at RSA
PS. If you're attending the RSA Conference in San Francisco next month, please come and hear me talk about the growing problem of cybercrime on social networks.
I'll be showing some live demonstrations of attacks and discussing how the problem has grown in the last year.
I've also been roped into giving regular presentations on the Sophos booth on the subject of social networking security, and I'm giving a conference paper, "Web 2.0 Woe: Cybercrime on social networks" (Session ID: HT1-204, 1pm, 3 March 2010).
I look forward to seeing some of you there.
Chuck Norris Botnet Karate-Chops Routers Hard
If you haven't changed the default password on your home router, you may be in for an unwanted visit from Chuck Norris -- the Chuck Norris botnet, that is.
People who read this also read:
Discovered by Czech researchers, the botnet has been spreading by taking advantage of poorly configured routers and DSL modems, according to Jan Vykopal, the head of the network security department with Masaryk University's Institute of Computer Science in Brno, Czech Republic.
The malware got the Chuck Norris moniker from a programmer's Italian comment in its source code: "in nome di Chuck Norris," which means "in the name of Chuck Norris." Norris is a U.S. actor best known for his martial arts films such as "The Way of the Dragon" and "Missing in Action."
Security experts say that various types of botnets have infected millions of computers worldwide to date, but Chuck Norris is unusual in that it infects DSL modems and routers rather than PCs.
It installs itself on routers and modems by guessing default administrative passwords and taking advantage of the fact that many devices are configured to allow remote access. It also exploits a known vulnerability in D-Link Systems devices, Vykopal said in an e-mail interview.
A D-Link spokesman said he was not aware of the botnet, and the company did not immediately have any comment on the issue.
Like an earlier router-infecting botnet called Psyb0t, Chuck Norris can infect a MIPS-based device running the Linux operating system if its administration interface has a weak username and password, he said. This MIPS/Linux combination is widely used in routers and DSL modems, but the botnet also attacks satellite TV receivers.
Vykopal doesn't know how big the Chuck Norris botnet is, but says he has evidence that the hacked machines "are spread around the world: from South America through Europe to Asia. The botnet aims at many networks of ISP [Internet service provider] and telco operators," he said.
Right now Chuck Norris-infected machines can be used to attack other systems on the Internet, in what are known as distributed denial of service attacks. The botnet can launch a password-guessing dictionary attack on another computer, and it can also change the DNS (Domain Name System) settings in the router. With this attack, victims on the router's network who think they are connecting to Facebook or Google end up redirected to a malicious Web page that then tries to install a virus on their computers.
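A simple defender-side check follows from the DNS-rewrite attack described above: compare the resolvers a router is actually configured to use against the list you expect from your ISP. A toy sketch (the function name and IP addresses are illustrative, not from the researchers):

```python
def dns_looks_hijacked(configured_servers, trusted_servers):
    """Return any DNS server on the router that is not in the trusted set.

    A non-empty result is a red flag that something rewrote the
    router's DNS settings, as the Chuck Norris bot does.
    """
    trusted = set(trusted_servers)
    return [server for server in configured_servers if server not in trusted]
```

In practice the configured list would be read from the router's admin page or config dump; the comparison itself is the easy part.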
Once installed in the router's memory, the bot blocks remote communication ports and begins to scan the network for other vulnerable machines. It is controlled via IRC.
Because the Chuck Norris botnet lives in the router's RAM, it can be removed with a restart.
Users who don't want to be infected can mitigate the risk -- the simplest way of doing this is by using a strong password on the router or modem. Users can also address the problem by keeping their firmware up-to-date and by disabling remote-access services.
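The first mitigation above, a strong admin password, can be sanity-checked in a few lines. A sketch with illustrative policy thresholds (the default-password list, minimum length, and three-of-four character-class rule are our assumptions, not the researchers' recommendations):

```python
import string

# A few of the default credentials the bot is known to guess (illustrative).
COMMON_DEFAULTS = {"admin", "password", "1234", "root", ""}

def router_password_ok(password, min_length=10):
    """Reject default or short passwords; require 3 of 4 character classes."""
    if password.lower() in COMMON_DEFAULTS or len(password) < min_length:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3
```

Any password that fails this kind of check is exactly the sort the bot's dictionary attack will guess.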
In recent years, hackers have started looking at devices such as routers, which are often not properly secured, Vykopal said. "They are not regularly patched and updated, even though the patches are available." The devices "are also continuously connected to the Internet and they are up for days and months," he said.
In the future, he expects that even more malware will target these devices.
Despite their rarity, router-based botnets are not particularly hard to create, said Dancho Danchev, an independent cyber threats analyst, speaking via instant message. "Router-based botnets are not rocket science given a common flaw can be exploited, and every then and now [one] appears."
75 Percent of Enterprises have been Hit by Multi-Million Dollar Cyber Attacks
Wow. That's quite a statistic, but there it is in front of me jumping off the pages of the latest global State of Enterprise Security study from Symantec. The two lines shining so brightly and grabbing my attention read "75 percent of organizations experienced cyber attacks in the past 12 months" and "these attacks cost enterprise businesses an average of $2 million per year". I'll say it again, wow!
Maybe that is not so surprising when you consider that the report states that every enterprise, yes 100 percent, experienced cyber losses in 2009. The top three losses were intellectual property theft, customer credit card data theft and the theft of other personally identifiable customer data. These losses translated into a financial cost 92 percent of the time, mainly in lost productivity, lost revenue, and damaged customer trust.
Of course, as I have said before, the math is always hard on the brain when you read these reports. That 75 percent figure is revealed immediately after we are informed that 42 percent of organisations rate security as the number one consideration for their business, ahead of natural disaster, terrorism and traditional crime. In fact, it is a bigger concern than all three of those combined. The disparity could, of course, be partly down to another revelation in the report: enterprise security is becoming more difficult due to understaffing, new IT initiatives that intensify security issues, and IT compliance issues.
When it comes to understaffing, network security is the biggest problem for 44 percent of those responding, with endpoint security sharing the honours, also on 44 percent. Then there are the initiatives that IT rated as most problematic from a security standpoint: infrastructure-as-a-service, platform-as-a-service, server virtualisation, endpoint virtualisation, and software-as-a-service. And not forgetting compliance, with the typical enterprise having to explore no fewer than 19 separate IT standards or frameworks and employing around eight of them.
"Protecting information today is more challenging than ever" said Francis deSouza, senior vice president, Enterprise Security, Symantec Corp. "By putting in place a security blueprint that protects their infrastructure and information, enforces IT policies, and manages systems more efficiently, businesses can increase their competitive edge in today’s information-driven world.”
Microsoft Wins Court Approval to Topple "Botnet": Report
Software giant Microsoft Corp has won U.S. court approval to deactivate a global network of computers that the company accused of spreading spam and harmful computer code, the Wall Street Journal said.
A federal judge in Alexandria, Virginia, Leonie Brinkema, granted a request by Microsoft to deactivate 277 Internet domains, which the software maker said are linked to a "botnet," the paper said.
A botnet is an army of infected computers that hackers can control from a central machine.
The company aims to secretly sever communications channels to the botnet before its operators can re-establish links to the network, the paper said.
Microsoft on Monday filed a suit that targets a botnet identified as Waledac, the paper said.
Judge Brinkema's order required VeriSign Inc, an Internet security and naming services provider, to temporarily turn off the suspect Internet addresses, the paper said.
Microsoft could not be immediately reached for comment by Reuters outside regular U.S. business hours.
On February 18, Internet security firm NetWitness said in a report that a new type of computer virus is known to have breached almost 75,000 computers in 2,500 organizations around the world, including user accounts of popular social network websites.
(Reporting by Sakthi Prasad in Bangalore; Editing by Jon Loades-Carter)
Man in the Browser
Man in the Browser, a.k.a. MITB, is a new breed of attack whose primary objective is to spy on browser sessions (mostly banking) and, in the process, to intercept and modify web page contents transparently in the background. In a classic MITB attack, it is very likely that what the user sees in the browser window is not what the actual server sent. Similarly, what the server sees on the other end might not be what the user intended to send. Why MITB? How is it different from conventional browser hijacking? I'll explain that shortly.
As we have seen in the past, browser hijacking is an evasive technique used by modern password stealers and banking Trojans. Conventional hijacking steals the user's credentials as they are entered into web forms, a shift from conventional keyloggers, which captured every keystroke and in the process sent the attacker a lot of irrelevant information. Things were going quite well for conventional browser hijacking until modern banks introduced multi-factor authentication. With multi-factor authentication, a username and password alone are not enough to log in to a banking account; banks require more than that. Let me explain multi-factor authentication using 'XYZ bank' as an example:
XYZ bank has an added security check called 'Safety Pass'. Whenever a user tries to log in to an XYZ account from a machine that has not previously been used to access that account, XYZ, as a precaution, sends a random code (via SMS) to the user's mobile phone and requires it before prompting for the password. This pass code, which expires after two minutes, ensures that it is indeed the real user trying to log in. Once the user successfully passes this added check, XYZ bank by default will not insist on the 'Safety Pass' again, and the user goes through conventional username/password authentication from then on.
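The 'Safety Pass' flow, a random out-of-band code with a two-minute expiry that is accepted only once, can be sketched as follows. This is hypothetical code for the imaginary XYZ bank; the six-digit format and the in-memory store are assumptions:

```python
import secrets
import time

CODE_TTL = 120  # seconds: the two-minute expiry described above

_pending = {}  # user -> (code, issued_at)

def issue_safety_pass(user, now=None):
    """Generate a one-time code and record when it was issued."""
    code = "%06d" % secrets.randbelow(10**6)  # 6-digit code (an assumption)
    _pending[user] = (code, now if now is not None else time.time())
    return code

def check_safety_pass(user, code, now=None):
    """Accept the code only once, and only within CODE_TTL seconds."""
    entry = _pending.pop(user, None)  # pop makes the code single-use
    if entry is None:
        return False
    issued_code, issued_at = entry
    now = now if now is not None else time.time()
    return code == issued_code and (now - issued_at) <= CODE_TTL
```

The `now` parameter exists only so the expiry logic can be exercised without waiting; a real system would also deliver the code out of band and rate-limit attempts.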
What this means is that without 'Safety Pass', logging into a compromised banking account from a different location, even with stolen credentials like user name and password will be nearly impossible. But "where there is a will there is a way". Here come the MITB attacks.
Case 1: (an imaginary situation)
Bob, an accountant at a local firm, is a passionate guy who loves to work, even during off hours. One day, while browsing the internet from home in search of some freeware, his browser suddenly crashed unexpectedly. Hmm, weird; maybe it's some kind of software bug, he thought, and he opened another browser window to continue his search.
At this point Bob had no idea that his machine had been infected with a nasty banking Trojan, Zeus/Zbot. Zeus had just exploited a vulnerability in his browser and installed itself silently on his PC.
Later that evening he thought about finishing some more of his work assignments, so he decided to log into his company's XYZ bank account and make some wire transfers. Upon accessing the login page, he noticed something weird: the page was asking him for the 'Safety Pass' as an added security measure. He was a little confused (he had already gone through this check earlier), but he figured the bank was just being overcautious, so he requested a new 'Safety Pass' and logged into his online banking account as normal. He was completely unaware that the 'Safety Pass' had never actually been requested by the banking server. It was the Zeus Trojan inside his browser that altered the server's HTML response (just before the browser rendered it) and injected the additional field.
The real purpose of this 'Safety Pass' injection was to snatch the pass code and send it, along with Bob's other credentials, to the attacker via instant message. The attacker, with all the authentication factors in hand, then has roughly two minutes (before the pass code expires) to log into Bob's online bank account from a remote location. Once logged in, the attacker is in a position to transfer the available balance to any other account. Stolen funds are normally routed through "money mule" accounts. Money mules are most often innocent people recruited over the internet by the criminals behind the malicious software, usually without being told the real nature of the activities they will be engaging in. Their job is to forward any money deposited into their accounts to an account of the attacker's choice, mostly via Western Union to locations such as Russia or Ukraine. The mule keeps a small portion of the money as a commission.
How does Zeus do all this? The entire attack methodology is controlled through the Zeus configuration file, which is transferred over the wire in encrypted form and decrypted by the bot on arrival.
This configuration file is prepared by the attacker and distributed to all the Zeus zombies. Based on this data, Zeus decides which banks to target. The configuration file also contains the HTML to be injected into legitimate banking pages, fooling the user into answering secret questions, supplying the Safety Pass, and so on.
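The config-driven injection can be illustrated with a toy sketch: rules map a URL pattern to an HTML snippet and an anchor string, and the snippet is spliced in before the browser renders the page. This is a conceptual illustration only; the rule format and field names are invented and are not Zeus's actual configuration syntax:

```python
# Toy illustration of config-driven HTML injection (the MITB mechanism).
# Rule format invented for this sketch.
RULES = [
    {
        "url_contains": "xyzbank.example/login",
        "before": '<input type="password"',
        "inject": '<label>Safety Pass</label><input name="safety_pass">',
    },
]

def rewrite_page(url, html, rules=RULES):
    """Insert each matching rule's snippet just before its anchor string."""
    for rule in rules:
        if rule["url_contains"] in url and rule["before"] in html:
            html = html.replace(rule["before"], rule["inject"] + rule["before"], 1)
    return html
```

Because the rewrite happens inside the browser, after TLS decryption, the page the server sent and the page the user sees differ, which is why server-side checks never see the injected field.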
Case 2: (an imaginary situation)
It's Bob again, and this time his PC is infected with yet another MITB malware, URLZone/Bebloh. After finishing his freeware search, he decides to log in to his XYZ Bank account to conduct some wire transfers. He logs into his account as usual; the login page does not ask him for the Safety Pass this time. He successfully makes a wire transfer of $10,000 to his client (Mr. David) and logs out afterwards. He has no clue that something really bad happened during the wire transfer. This is what happened in the background:
As Bob entered Mr. David's account and wire information and submitted the transfer request to the bank, URLZone silently replaced Mr. David's account information with that of one of its money mules (Mr. Jerry). The bank, as instructed, transferred the $10,000 and sent back a confirmation page indicating the successful transfer to Jerry's account. Bob should have noticed the fraud on seeing the bank's response. Unfortunately, URLZone played another trick and changed the bank's response too, this time replacing Jerry's account information with David's before displaying it to Bob, leaving no tell-tale clues that the $10,000 never reached his client, Mr. David.
Bob                                  URLZone                         Bank
Transfer funds to Mr. David    --->  [request modification]    --->  Transfer funds to Jerry
Transferred funds to Mr. David <---  [response modification]   <---  Transferred funds to Jerry
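The two-way rewrite shown above amounts to a pair of substitutions, one on the outgoing transfer request and one on the incoming confirmation page. A toy illustration (the account identifiers and field names are invented):

```python
# Toy sketch of URLZone-style request/response rewriting.
VICTIM_DEST = "ACCT-DAVID-001"   # where Bob thinks the money is going
MULE_DEST   = "ACCT-JERRY-999"   # money-mule account (invented value)

def rewrite_request(form_data):
    """Outbound: silently redirect the transfer to the mule's account."""
    data = dict(form_data)
    if data.get("dest_account") == VICTIM_DEST:
        data["dest_account"] = MULE_DEST
    return data

def rewrite_response(html):
    """Inbound: hide the swap by showing Bob the account he expected."""
    return html.replace(MULE_DEST, VICTIM_DEST)
```

The bank sees a perfectly ordinary, fully authenticated transfer; only the two rewrites, invisible to both ends, differ from what Bob intended.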
Like Zeus, URLZone is controlled through its configuration file, which contains complete information about the money-mule accounts, the names of the banks to target, and so on.
Just like Zeus, URLZone is created using a toolkit (available in underground markets). What this means is that the buyer of the toolkit can create customized malware or botnets with different CnCs and configurations while keeping all the flexibility and power of the original toolkit. Having such a toolkit in the hands of multiple criminal groups paints a scary picture: it is simply not enough to eliminate a particular botnet or criminal group to solve this problem. Worst of all, Zeus and URLZone are not the only MITB malware currently active; others such as Bzub and Torpig fall into the same category.
Maybe it is time for all of us to take a close look at the security currently offered by our banks. Can they protect us against MITB?
For those who may want to read about a recent incident in which Zeus was involved in a $150,000 robbery:
Spies and Hackers Exploit World Cyber Rule Void
The best weapon against the online thieves, spies and vandals who threaten global business and security would be international regulation of cyberspace.
Luckily for them, such cooperation does not yet exist.
Better still, from a hacker's perspective, such a goal is not a top priority for the international community, despite an outcry over hacking and censorship and disputes over cyberspace pitting China and Iran against U.S. firm Google.
Nations are thinking too parochially about their online security to collaborate on crafting global cyber regulation, an EastWest Institute security conference heard last week.
Policy statements from governments around the world are dominated by the need to heighten national cyber defenses. As a result, too many cyber criminals are getting a free ride.
"Nations are in denial," Indian cyber law expert Pavan Duggal told Reuters, saying national legislation was of limited use in protecting users of a borderless communications tool.
"It may take a big shock of an event to wake people out of their complacency, something equal to a 9/11 in cyberspace," he said referring to the 2001 coordinated attacks on U.S. cities.
With a quarter of humanity connected to the Internet, cyber crime poses a growing danger to the global economy.
Target the Perpetrator
The FBI tallied $264 million in losses from Internet crime reported by individuals in the United States in 2008, compared with $18 million in 2001; these figures probably represent only a fraction of the losses inflicted on companies and government departments.
The menace extends to many sectors including control systems for manufacturing, utilities and oil refining, since many are now tied to the Internet for convenience and productivity.
A priority for regulators is to find ways of tracking down criminals across borders and ensuring they are punished, a tough task when criminals can use proxy servers to remain anonymous.
"We cannot postpone the debate until we are in the midst of a catastrophic cyber attack," former U.S. Homeland Security Secretary Michael Chertoff told the conference.
"We must formulate an international strategy and response to cyber attacks that parallels the traditional laws governing the land, sea, and air."
Security experts say the ability to conduct disastrous mass cyber attacks is the preserve of some governments, well beyond the capacity of militant guerrilla groups like al Qaeda.
But it cannot be assumed that international organized criminal networks, long practiced at mass online fraud and theft, are not developing an interest in gaining this ability.
"Cyber crime is a very sophisticated crime with very sophisticated players and it takes a multinational effort to make sure we can enforce the law," Dell Services President Peter Altabef told Reuters.
"Once you have identified who is at fault you really want to make sure, as a deterrent, that you can go to those jurisdictions and enforce the laws on the books."
James Stikeleather, Dell Services chief technology officer, told Reuters that tracking down criminals across borders could pose legal issues for drafters of multilateral regulation.
Giving an example, he said the more companies added the technology needed to give investigators the ability to attribute a crime, the more users' privacy and anonymity would be reduced.
"Playing with Fire"
"Probably the sticking point among the governments will be 'where is the appropriate level of attribution versus anonymity or privacy for what people are doing (online)'."
Datuk Mohammed Noor Amin, chairman of the U.N.-affiliated International Multilateral Partnership Against Cyber Threats, said failure to regulate could perpetuate cyber "failed states."
He cited impoverished countries where customers can purchase unregistered SIM cards with mobile Internet capability, giving them the ability to commit online crime such as identity theft against people in rich nations without fear of being traced.
He said it was in the interest of rich nations to help poorer countries develop the capacity to crack down on this kind of abuse, because their own citizens were being targeted.
"Governments tend to look at their self-interest. But it's actually in their own interest to collaborate," he said.
Altabef said the growing rate and scale of international cyber attacks threatened to undermine the trust between nations, businesses and individuals that was necessary for economies and societies to act on the basis of the common good.
Complacency was also a problem, delegates said. "Nations take for granted the Internet is going to be 'on' for the rest of our lives. It may not necessarily be so," said Duggal.
"Imagine the Internet being down for two to four weeks," he said. This would "rain disaster" on online businesses as well as transport, industry and governmental surveillance systems.
"People have realize the Internet is an integral part of every country, politically, socially and business-wise."
"Not to focus on cybersecurity is playing with fire."
(Editing by Charles Dick)
A Snitch in Your Pocket
Law enforcement is tracking Americans' cell phones in real time -- without the benefit of a warrant.
Amid all the furor over the Bush administration's warrantless wiretapping program a few years ago, a mini-revolt was brewing over another type of federal snooping that was getting no public attention at all. Federal prosecutors were seeking what seemed to be unusually sensitive records: internal data from telecommunications companies that showed the locations of their customers' cell phones -- sometimes in real time, sometimes after the fact. The prosecutors said they needed the records to trace the movements of suspected drug traffickers, human smugglers, even corrupt public officials. But many federal magistrates -- whose job is to sign off on search warrants and handle other routine court duties -- were spooked by the requests. Some in New York, Pennsylvania, and Texas balked. Prosecutors "were using the cell phone as a surreptitious tracking device," said Stephen W. Smith, a federal magistrate in Houston. "And I started asking the U.S. Attorney's Office, 'What is the legal authority for this? What is the legal standard for getting this information?' "
Those questions are now at the core of a constitutional clash between President Obama's Justice Department and civil libertarians alarmed by what they see as the government's relentless intrusion into the private lives of citizens. There are numerous other fronts in the privacy wars -- about the content of e-mails, for instance, and access to bank records and credit-card transactions. The Feds now can quietly get all that information. But cell-phone tracking is among the more unsettling forms of government surveillance, conjuring up Orwellian images of Big Brother secretly following your movements through the small device in your pocket.
How many of the owners of the country's 277 million cell phones even know that companies like AT&T, Verizon, and Sprint can track their devices in real time? Most "don't have a clue," says privacy advocate James X. Dempsey. The tracking is possible because either the phones have tiny GPS units inside or each phone call is routed through towers that can be used to pinpoint a phone's location to areas as small as a city block. This capability to trace ever more precise cell-phone locations has been spurred by a Federal Communications Commission rule designed to help police and other emergency officers during 911 calls. But the FBI and other law-enforcement outfits have been obtaining more and more records of cell-phone locations -- without notifying the targets or getting judicial warrants establishing "probable cause," according to law-enforcement officials, court records, and telecommunication executives. (The Justice Department draws a distinction between cell-tower data and GPS information, according to a spokeswoman, and will often get warrants for the latter.)
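The tower-based location the article describes comes down to simple geometry: with range estimates to three towers at known positions, a handset's position can be trilaterated. Carrier systems are far more elaborate and proprietary; this is only a toy sketch with made-up coordinates, linearizing the three circle equations into a 2x2 linear system.

```python
import math

def trilaterate(towers, dists):
    """Estimate a 2-D position from three tower locations and range
    estimates, by subtracting the first circle equation from the other
    two and solving the resulting 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    r1, r2, r3 = dists
    # 2(x2-x1)x + 2(y2-y1)y = r1^2 - r2^2 + x2^2 - x1^2 + y2^2 - y1^2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # nonzero when towers are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

In practice range estimates are noisy, so real systems use many towers and a least-squares fit; with exact ranges, though, three towers pin the handset to a single point.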
The Justice Department doesn't keep statistics on requests for cell-phone data, according to the spokeswoman. So it's hard to gauge just how often these records are retrieved. But Al Gidari, a telecommunications lawyer who represents several wireless providers, tells Newsweek that the companies are now getting "thousands of these requests per month," and the amount has grown "exponentially" over the past few years. Sprint Nextel has even set up a dedicated Web site so that law-enforcement agents can access the records from their desks -- a fact divulged by the company's "manager of electronic surveillance" at a private Washington security conference last October. "The tool has just really caught on fire with law enforcement," said the Sprint executive, according to a tape made by a privacy activist who sneaked into the event. (A Sprint spokesman acknowledged the company has created the Web "portal" but says that law-enforcement agents must be "authenticated" before they are given passwords to log on, and even then still must provide valid court orders for all nonemergency requests.)
There is little doubt that such records can be a powerful weapon for law enforcement. Jack Killorin, who directs a federal task force in Atlanta combating the drug trade, says cell-phone records have helped his agents crack many cases, such as the brutal slaying of a DeKalb County sheriff: agents got the cell-phone records of key suspects -- and then showed that they were all within a one-mile area of the murder at the time it occurred, he said. In the fall of 2008, Killorin says, his agents were able to follow a Mexican drug-cartel truck carrying 2,200 kilograms of cocaine by watching in real time as the driver's cell phone "shook hands" with each cell-phone tower it passed on the highway. "It's a tremendous investigative tool," says Killorin. And not that unusual: "This is pretty workaday stuff for us."
But there is also plenty of reason to worry. Some abuse has already occurred at the local level, according to telecom lawyer Gidari. One of his clients, he says, was aghast a few years ago when an agitated Alabama sheriff called the company's employees. After shouting that his daughter had been kidnapped, the sheriff demanded they ping her cell phone every few minutes to identify her location. In fact, there was no kidnapping: the daughter had been out on the town all night. A potentially more sinister request came from some Michigan cops who, purportedly concerned about a possible "riot," pressed another telecom for information on all the cell phones that were congregating in an area where a labor-union protest was expected. "We haven't even begun to scratch the surface of abuse on this," says Gidari.
That was precisely what Smith and his fellow magistrates were worried about when they started refusing requests for cell-phone tracking data. (Smith balked only at requests for real-time information, while other magistrates have also objected to requests for historical data on cell-phone locations.) The grounds for such requests, says Smith, were often flimsy: almost all were being submitted as "2703(d)" orders -- a reference to an obscure provision of a 1986 law called the Stored Communications Act, in which prosecutors only need to assert that records are "relevant" to an ongoing criminal investigation. That's the lowest possible standard in federal criminal law, and one that, as a practical matter, magistrates can't really verify. But when Smith started turning down government requests, prosecutors went around him (or "judge shopping," in the jargon of lawyers), finding other magistrates in Texas who signed off with no questions asked, he told Newsweek. Still, his stand -- and that of another magistrate on Long Island -- started getting noticed in the legal community. Facing a request for historical cell-phone tracking records in a drug-smuggling case, U.S. magistrate Lisa Pupo Lenihan in Pittsburgh wrote a 56-page opinion two years ago that turned prosecutors down, noting that the data they were seeking could easily be misused to collect information about sexual liaisons and other matters of an "extremely personal" nature. In an unusual show of solidarity -- and to prevent judge shopping -- Lenihan's opinion was signed by every other magistrate in western Pennsylvania.
The issue came to a head this month in a federal courtroom in Philadelphia. A Justice Department lawyer, Mark Eckenwiler, asked a panel of appeals-court judges to overturn Lenihan's ruling, arguing that the Feds were only asking for what amounted to "routine business records." But he faced stiff questioning from one of the judges, Dolores Sloviter, who noted that there are some governments, like Iran's, that would like to use such records to identify political protesters. "Now, can the government assure us," she pressed Eckenwiler, that Justice would never use the provisions in the communications law to collect cell-phone data for such a purpose in the United States? Eckenwiler tried to deflect the question, saying he couldn't speak to "future hypotheticals," but finally acknowledged, "Yes, your honor. It can be used constitutionally for that purpose." For those concerned about what the government might do with the data in your pocket, that was not a comforting answer.
Utah A.G. May Gain Broader Power to Demand Internet, Cell-Phone Records
A proposal that would broaden the power of the Attorney General's Office to demand Internet and cell phone companies turn over information about customers won broad approval Tuesday from a Utah House committee.
HB150, sponsored by Rep. Brad Daw, R-Orem, would grant prosecutors the ability to obtain an administrative subpoena, compelling Internet and phone companies to turn over the names, addresses, phone numbers, and bank information of customers using an Internet address or cell phone number at a given time.
No judge would have to review the information. Representatives from the Attorney General's Office say it is intended to develop an initial lead when investigating a crime.
Last year, the Legislature granted prosecutors subpoena power when they suspect a child-sex crime has been committed. Since the bill took effect, more than 200 such subpoenas have been issued, or slightly more than one a day.
Daw's bill initially had sought to expand the authority to any crime, but committee members balked at such broad power last Friday. His amended bill limits the power to suspected felonies and two misdemeanors -- cyber-stalking and cyber-harassment.
"If we charge our law-enforcement folks with trying to protect us and trying to catch these people," Daw said, "we need to always be trying to review the capabilities these criminals have and the tools technology gives to them and make sure we have adequate tools in place."
This time, the committee approved the bill 10-1, sending it to the full House.
"I was uncomfortable with how far we were going before," said Rep. Eric Hutchings, R-Kearns. "But we don't have any choice but to address the fact that we have new tools being used for new crimes."
Spy Cameras Won't Make Us Safer
On January 19, a team of at least 15 people assassinated Hamas leader Mahmoud al-Mabhouh. Dubai police released video footage of 11 of them. Although it was obviously a very professional operation, the 27 minutes of video is fascinating in its banality.
Team members walk through the airport, check into and out of hotels, get into and out of taxis. They make no effort to hide themselves from the cameras, sometimes seeming to stare directly into them. They obviously don't care that they're being recorded, and -- in fact -- the cameras didn't prevent the assassination, nor as far as we know have they helped as yet in identifying the killers.
Pervasive security cameras don't substantially reduce crime. This fact has been demonstrated repeatedly: in San Francisco, California, public housing; in a New York apartment complex; in Philadelphia, Pennsylvania; in Washington; in study after study in both the U.S. and the U.K. Nor are they instrumental in solving many crimes after the fact.
There are exceptions, of course, and proponents of cameras can always cherry-pick examples to bolster their argument. These success stories are what convince us; our brains are wired to respond more strongly to anecdotes than to data. But the data are clear: CCTV cameras have minimal value in the fight against crime.
Although it's comforting to imagine vigilant police monitoring every camera, the truth is very different, for a variety of reasons: technological limitations of cameras, organizational limitations of police and the adaptive abilities of criminals. No one looks at most CCTV footage until well after a crime is committed. And when the police do look at the recordings, it's very common for them to be unable to identify suspects. Criminals don't often stare helpfully at the lens and -- unlike the Dubai assassins -- tend to wear sunglasses and hats. Cameras break far too often.
Even when they afford quick identification -- think of the footage of the September 11 terrorists going through airport security or the July 7 London transport bombers just before the bombs exploded -- police are often able to identify those suspects even without the cameras. Cameras afford a false sense of security, encouraging laziness when we need police to be vigilant.
The solution isn't for police to watch the cameras more diligently. Unlike an officer walking the street, cameras look only in particular directions at particular locations.
Criminals know this and can easily adapt by moving their crimes to places not watched by a camera -- and there will always be such places.
And although a police officer on the street can respond to a crime in progress, someone watching a CCTV screen can only dispatch an officer to arrive much later. By their very nature, cameras result in underused and misallocated police resources.
Cameras aren't completely ineffective, of course. Used properly, they're effective in reducing crime in enclosed areas with minimal foot traffic. Combined with adequate lighting, they substantially reduce both personal attacks and auto-related crime in multistory parking garages. And sometimes it is cost-effective for a store to install cameras to catch shoplifters or a casino to install cameras to detect cheaters.
But these are instances where there is a specific risk at a specific location.
The important question isn't whether cameras solve past crime or deter future crime; it's whether they're a good use of resources. They're expensive, both in money and in their Orwellian effects on privacy and civil liberties. Their inevitable misuse is another cost; police have spied on naked women in their own homes, shared nude images, sold best-of videos and even spied on national politicians. Though we might be willing to accept these downsides for a real increase in security, cameras don't provide that.
Despite our predilection for technological solutions over human ones, the funds now spent on CCTV cameras would be far better spent on hiring and training police officers.
We live in a unique time in our society: Cameras are everywhere, but we can still see them. Ten years ago, cameras were much rarer than they are today. Ten years from now, they'll be so small, you won't even notice them.
Already, people can buy surveillance cameras in household objects to spy on their spouses and baby sitters -- I particularly like the one hidden in a shower mirror -- or cameras in pens to spy on their colleagues, and they can remotely turn on laptop cameras to spy on anyone. Companies are developing police state-type CCTV surveillance technologies for China, technology that will find its way into countries like the U.S.
If universal surveillance were the answer, lots of us would have moved to the former East Germany. If surveillance cameras were the answer, camera-happy London, with something like 500,000 of them at a cost of $700 million, would be the safest city on the planet.
We didn't, and it isn't, because surveillance and surveillance cameras don't make us safer. The money spent on cameras in London, and in cities across America, could be much better spent on actual policing.
Widespread Data Breaches Uncovered by FTC Probe
FTC Warns of Improper Release of Sensitive Consumer Data on P2P File-Sharing Networks
The Federal Trade Commission has notified almost 100 organizations that personal information, including sensitive data about customers and/or employees, has been shared from the organizations’ computer networks and is available on peer-to-peer (P2P) file-sharing networks to any users of those networks, who could use it to commit identity theft or fraud. The agency also has opened non-public investigations of other companies whose customer or employee information has been exposed on P2P networks. To help businesses manage the security risks presented by file-sharing software, the FTC is releasing new education materials that present the risks and recommend ways to manage them.
Peer-to-peer technology can be used in many ways, such as to play games, make online telephone calls, and, through P2P file-sharing software, share music, video, and documents. But when P2P file-sharing software is not configured properly, files not intended for sharing may be accessible to anyone on the P2P network.
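The misconfiguration risk described above is exactly what the FTC urges companies to audit for. The sketch below is one illustrative way to scan a directory that P2P software shares, flagging files whose names or contents look sensitive; the keyword list and patterns are made up for illustration, not an FTC specification.

```python
import re
from pathlib import Path

# Illustrative (not exhaustive) markers of sensitive material.
RISKY_NAMES = ("tax", "payroll", "ssn", "passport", "medical", "statement")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # SSN-shaped strings

def audit_shared_folder(root):
    """Return (path, reason) pairs for files under `root` that look
    like they should not sit in a P2P-shared directory."""
    findings = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        if any(key in path.name.lower() for key in RISKY_NAMES):
            findings.append((path, "risky filename"))
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the audit
        if SSN_RE.search(text):
            findings.append((path, "SSN-shaped content"))
    return findings
```

A real audit would also inventory which folders the P2P client is actually configured to share, since the breaches the FTC found typically came from sharing an entire drive or documents folder by mistake.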
“Unfortunately, companies and institutions of all sizes are vulnerable to serious P2P-related breaches, placing consumers’ sensitive information at risk. For example, we found health-related information, financial records, and drivers’ license and social security numbers--the kind of information that could lead to identity theft,” said FTC Chairman Jon Leibowitz. “Companies should take a hard look at their systems to ensure that there are no unauthorized P2P file-sharing programs and that authorized programs are properly configured and secure. Just as important, companies that distribute P2P programs, for their part, should ensure that their software design does not contribute to inadvertent file sharing.”
As the nation’s consumer protection agency, the FTC enforces laws that require companies in various industries to take reasonable and appropriate security measures to protect sensitive personal information, including the Gramm-Leach-Bliley Act and Section 5 of the FTC Act. Failure to prevent such information from being shared to a P2P network may violate such laws. Information about the FTC’s privacy and data security enforcement actions can be found at www.ftc.gov/privacy/privacyinitiatives/promises_enf.html.
The notices went to both private and public entities, including schools and local governments, and the entities contacted ranged in size from businesses with as few as eight employees to publicly held corporations employing tens of thousands. In the notification letters, the FTC urged the entities to review their security practices and, if appropriate, the practices of contractors and vendors, to ensure that they are reasonable, appropriate, and in compliance with the law. The letters state, “It is your responsibility to protect such information from unauthorized access, including taking steps to control the use of P2P software on your own networks and those of your service providers.”
The FTC also recommended that the entities identify affected customers and employees and consider whether to notify them that their information is available on P2P networks. Many states and federal regulatory agencies have laws or guidelines about businesses’ notification responsibilities in these circumstances.
Samples of the notification letters can be found at: http://www.ftc.gov/os/2010/02/100222sampleletter-a.pdf, http://www.ftc.gov/os/2010/02/100222sampleletter-b.pdf, http://www.ftc.gov/os/2010/02/100222sampleletter-c.pdf. The fact that a company received a letter does not mean that the company necessarily violated any law enforced by the Commission. Letters went to companies under FTC jurisdiction, as well as entities such as banks and public agencies over which the agency does not have jurisdiction.
The FTC appreciates the assistance of the Department of Health and Human Services, the Securities and Exchange Commission, the Federal Reserve Board, the Federal Deposit Insurance Corporation, the Office of Thrift Supervision, and the Office of the Comptroller of the Currency.
The new business education brochure – titled Peer-to-Peer File Sharing: A Guide for Business – is designed to help businesses and others decide whether to allow file-sharing technologies on their networks and explains how to safeguard sensitive information on their systems, along with other security recommendations. This information is available at http://www.ftc.gov/bcp/edu/pubs/busi...eft/bus46.shtm. Tips for consumers about computer security and P2P can be found at www.onguardonline.gov/topics/p2p-security.aspx.
The Federal Trade Commission works for the consumer to prevent fraudulent, deceptive, and unfair business practices and to provide information to help spot, stop, and avoid them. To file a complaint in English or Spanish, click http://www.ftccomplaintassistant.gov or call 1-877-382-4357. The FTC enters Internet, telemarketing, identity theft, and other fraud-related complaints into Consumer Sentinel, a secure, online database available to more than 1,700 civil and criminal law enforcement agencies in the U.S. and abroad. For free information on a variety of consumer topics, click http://www.ftc.gov/bcp/consumer.shtm.
Chinese Schools Deny Link to Google Attack
A prestigious Chinese university and a lesser-known vocational school have denied a report they were the source of recent cyber attacks on Internet giant Google and other U.S. corporations, Xinhua news agency said on Saturday.
A representative of Shanghai Jiaotong University, considered one of China's best, said the allegations in a New York Times report were baseless and even if the school's computers appeared to be involved, it did not mean the hackers were based there.
"We were shocked and indignant to hear these baseless allegations which may harm the university's reputation," Xinhua quoted the unnamed Jiaotong University spokesperson as saying.
"The report of the New York Times was based simply on an IP address. Given the highly developed network technology today, such a report is neither objective nor balanced."
The Communist party boss at Lanxiang Vocational School, the other institution fingered in the report, also denied any role.
"Investigation in the staff found no trace the attacks originated from our school," Li Zixiang, party chief at the school in coastal Shandong Province, was quoted as saying.
The New York Times said Lanxiang was established with support from the Chinese military and has trained computer scientists who later joined the military, but Li said there was no relationship with the military, Xinhua reported.
He also disputed the statement that investigators suspected a link to a computer science class taught by a Ukrainian professor.
"There is no Ukrainian teacher in the school and we have never employed any foreign staff," Li told Xinhua. "The report was unfounded. Please show the evidence."
Lanxiang, founded in 1984, has about 20,000 students learning vocational skills such as cooking, auto repair and hairdressing.
Google announced in January that it had faced a "highly sophisticated and targeted attack" in mid-December, allegedly from inside China, and declared that it was no longer willing to censor search results in the country as required by Beijing.
The attacks have been a source of friction in Sino-U.S. relations at an already tense time.
(Reporting by Edmund Klamann and Emma Graham-Harrison in BEIJING; Editing by Michael Roddy and Sanjeev Miglani)
Hacking Inquiry Puts China’s Elite in New Light
With its sterling reputation and its scientific bent, Shanghai Jiaotong University has the feel of an Ivy League institution.
The university has alliances with elite American ones like Duke and the University of Michigan. And it is so rich in science and engineering talent that Microsoft and Intel have moved into a research park directly adjacent to the school.
But Jiaotong, whose sprawling campus here has more than 33,000 students, is facing an unpleasant question: is it a base for sophisticated computer hackers?
Investigators looking into Web attacks on Google and dozens of other American companies last year have traced the intrusions to computers at Jiaotong as well as an obscure vocational school in eastern China, according to people briefed on the case.
Security experts caution that it is hard to trace online attacks and that the digital footprints may be a “false flag,” a kind of decoy intended to throw investigators off track.
But those with knowledge of the investigation say there are reliable clues that suggest the highly sophisticated attacks may have originated at Jiaotong and the more obscure campus, Lanxiang Vocational School in Shandong Province, an institution with ties to the Chinese military.
Last weekend, the two schools strongly denied any knowledge of the attacks, which singled out corporate files and the e-mail accounts of human rights activists.
A spokesman for Jiaotong told local news outlets that school officials were “shocked and indignant” to learn of the allegations. And a Lanxiang spokesman called the reports preposterous.
But analysts say Jiaotong and Lanxiang are certain to come under close scrutiny.
Jiaotong is one of China’s top universities, and one charged with helping transform this country into a science and technology powerhouse.
The school has exchange programs with some of the world’s leading universities. Early this year, Duke said that with the help of Jiaotong, it would build its own campus near Shanghai.
Michael J. Schoenfeld, a spokesman for Duke, said on Friday that the university was troubled by the allegations.
“We’re going to have to explore that with Shanghai Jiaotong and understand the situation,” he said. “It’s a very complex situation.”
One of Jiaotong’s strongest departments is computer science, which has garnered support from some of America’s biggest technology companies, including Cisco Systems. Microsoft has collaborated with Jiaotong on a laboratory for intelligent computing and intelligent systems at the university.
Two weeks ago, Jiaotong students won an international computer programming competition sponsored by I.B.M., known as the Battle of the Brains, beating out Stanford and other elite institutions. It was the third time in the last decade that Jiaotong students had taken the top prize.
Jiaotong is also home to the School of Information Security Engineering, which specializes in Internet security. The school’s dean and chief professor have both worked on technology matters for the People’s Liberation Army, according to the school’s Web site.
The school, which has received financing from a high-level government science and technology project, code-named 863, has also regularly invited world-famous hackers and Web security experts to lecture there.
The latest clues do not answer the question of who was behind the attacks. But it is likely to put added pressure on Beijing to investigate a case that has prompted Google to threaten to pull out of China.
Beijing has not announced an investigation, but Web security experts emphasize that the Chinese government would need to be involved to find the ultimate perpetrators of the attacks.
“The U.S. would not be able to trace this” back to the source, said O. Sami Saydjari, the founder of the Cyber Defense Agency, a private Web security firm based in Wisconsin. “We cannot trace it beyond borders. We’d need the cooperation of the Chinese.”
Xiao Qiang, an expert on Chinese Internet censorship and control, says Jiaotong is studying not just Web security but also how to filter content that the government may deem unhealthy.
“Computer security may sound neutral, but in China, it also includes content, including content the government doesn’t like and wants to get rid of,” he says.
Scott J. Henderson, the author of “The Dark Visitor: Inside the World of Chinese Hackers,” said that in 2007, a prominent Chinese hacker with ties to China’s Ministry of Security also lectured at Jiaotong.
“He gave a lecture called ‘Hacking in a Nutshell,’ ” said Mr. Henderson, whose research was partly financed by the American military.
In a statement on Sunday, Microsoft said it could not comment on reports that some hacking had been traced to Jiaotong.
But the statement also said: “We condemn cyberattacks and industrial espionage no matter who is ultimately responsible. We hope officials will conduct a full investigation and cooperate fully with international authorities to get to the bottom of this situation.”
Google and other companies that were victims of the attacks have declined to comment.
Investigators are also looking into whether some of the intrusions originated at Lanxiang Vocational School, in the city of Jinan.
Lanxiang, which has 30,000 students studying trades like cosmetology and welding, was founded in 1984 by a former military officer on land donated by the military, according to Jinan’s propaganda department.
On its Web site, the school records visits to the campus by military officers and boasts of sending “a large batch of graduates to the army” and says “those graduates become the backbone of the army.”
Graduates of the school’s computer science department are recruited by the local military garrison each year, according to the school’s dean, Mr. Shao, who would give only his last name.
School officials also insist that Lanxiang students are not capable of sophisticated hacking.
“It’s impossible for our students to hack Google and other U.S. companies,” Mr. Shao said in a telephone interview. “They are just high school graduates and not at an advanced level.”
Little information is publicly available about the school’s computer science department. But the school says its computer laboratory is so enormous that it was once listed in the Guinness World Records book.
Bao Beibei and Chen Xiaoduan contributed research.
China Seeks Identity of Web Site Operators
Web site operators will need to offer photographs of themselves and meet Internet service providers in person under new guidelines announced by the Chinese government this week, according to published reports.
The "trial regulations" were issued by China's Ministry of Industry and Information Technology under the auspices of an ongoing anti-porn campaign, but they will also help the government create records of all sites in the country and could be used to block other types of online content, the IDG News Service reported on Tuesday.
The regulations, which were dated February 8 and posted on sites of the Chinese telcom regulator on Monday, require ISPs to meet people applying to register new Web sites and to collect photographs of them. They also require applicants to provide a description of the site's content, along with other information, the report said.
Web sites without government records will lose their domain name resolution by the end of September, effectively pulling them off the Internet, the news service reported. More than 130,000 sites have been pulled offline recently for not having records with the government, according to the official Xinhua news agency.
China is the world's largest Internet market, with more than 384 million users of the global network, Xinhua reports.
The tightening of China's clampdown on Internet use comes as government officials there resume talks with Google over the search giant's plans to stop censoring Web searches in that country, according to The Wall Street Journal.
Google and China have been in a showdown since Google announced last month that it was targeted by a hacker attack that appeared to originate in China and which targeted Gmail users who are human rights activists. At the time, Google said it would stop censoring its searches in that domain and might even pull out of the country entirely.
SD Nixes Move to Identify Anonymous Blog Defamers
A move to help identify people who anonymously post libelous messages on blogs and other Internet sites was rejected Monday by South Dakota lawmakers after opponents said the state would have trouble regulating the worldwide network.
The House State Affairs Committee voted 10-3 to kill a bill that would have required those who operate Internet sites to keep logs of Internet Protocol addresses so they could identify people who contribute libelous messages anonymously or under false names.
House Republican Leader Bob Faehn of Watertown, the committee's chairman, said he agreed with the measure's intent but doubted it would accomplish much.
"This is a global issue, and I doubt that South Dakota is going to have an effect," said Faehn, a longtime radio broadcaster.
Committee members said current law already allows people to seek the identity of those who have libeled them anonymously.
The bill's main sponsor, Rep. Noel Hamiel, R-Mitchell, said his measure would not limit free speech rights.
Many comments posted on blogs and other Internet sites are written by people who remain anonymous or use false names, Hamiel said. If those comments amount to libel or slander, a victim might have a tough time finding out the writer's real identity, he said.
"If you anonymously write something on the site and it's defamatory, the person you defame must have recourse in finding out who you are," Hamiel said.
The bill would have required operators of Internet sites to keep logs that would provide the identification and location of those who post comments without giving their true names. They would have been compelled to provide that information only in response to a court order in a libel lawsuit.
Hamiel and other lawmakers said a federal law protects operators of Internet sites from liability in lawsuits dealing with defamation.
Pat Powers of Brookings, operator of the South Dakota War College blog, said the government should not force bloggers to keep information on those who post comments because that could discourage people from debating political ideas. The bill also could have forced children to keep records to identify those who visit their blogs, he said.
"Much of the concern is this is a government mandate to collect the information of people who come on your Web site," Powers said. "We're not a totalitarian society. We're not China. We expect a little freer discourse than that."
Steve Sibson of Mitchell, operator of a blog called Sibby Online, said he supported the bill because it would protect anonymous free speech while holding accountable those who commit libel.
Dave Bordewyk of the South Dakota Newspaper Association spoke against the bill. The state's 11 daily newspapers generally allow readers to comment anonymously on news stories, but the papers block those comments that are profane, contain threats or are libelous, he said.
Bob Miller, a lobbyist for South Dakota Funeral Directors, said funeral homes also do not want to collect Internet addresses to identify those who post anonymous messages of sympathy when someone dies.
Rep. Brock Greenfield, R-Clark, said state law should help hold people accountable if they libel others under the cloak of anonymity. He said a blogger falsely accused him of throwing someone out of the family convenience store because the person wore a T-shirt carrying the name of a Democratic candidate, but he was unable to get that blogger to retract the incorrect report.
"I think the players in the game have to be held to some journalistic standards," Greenfield said.
Robin Hood Hacker Exposes Bankers
An alleged hacker has been hailed as a latter-day Robin Hood for leaking data about the finances of banks and state-owned firms to Latvian TV.
Using the alias "Neo" - a reference to The Matrix films - the hacker claims he wants to expose those cashing in on the recession in Latvia.
He is slowly passing details of leading Latvian firms via Twitter to the TV station and has its audiences hooked.
The Latvian government and police are investigating the security breach.
Data leaked so far includes pay details of managers from a Latvian bank that received a bail-out.
It reveals that many did not take the salary cuts they promised.
Other data shows that state-owned companies secretly awarded bonuses while publicly asking the government for help.
The anonymous hacker claims to be part of a group - called the Fourth Awakening People's Army - that downloaded more than seven million confidential tax documents from the State Revenue Service. He is thought to be based in Britain.
Over a three-month period they downloaded the private data of up to 1,000 companies.
Ilze Nagla, a TV presenter on the state-owned Latvian TV, told the BBC the hacker has attained cult status for some.
"A lot of people perceive him as a modern, virtual Robin Hood," she told the BBC.
"On the one hand of course he has stolen confidential data... and he actually has committed a crime. But at the same time there is value for the public in the sense that now a lot of information gets disclosed and the whole system maybe becomes a little more transparent," she said.
Latvia is currently in the middle of its worst economic crisis since it broke free from the Soviet Union in 1991.
Unemployment, at 23%, is the highest in the European Union, and economic output has dropped by almost a quarter over the last two years.
Open Wi-Fi 'Outlawed' in Digital Economy Bill
Universities, libraries and small businesses operating open Wi-Fi networks will face the same penalties for illicit downloading as ordinary users.
The government will not exempt universities, libraries and small businesses providing open Wi-Fi services from its Digital Economy Bill copyright crackdown, according to official advice released earlier this week.
This would leave many organisations open to the same penalties for copyright infringement as individual subscribers, potentially including disconnection from the internet, leading legal experts to say it will become impossible for small businesses and the like to offer Wi-Fi access.
Lilian Edwards, professor of internet law at Sheffield University, told ZDNet UK on Thursday that the scenario described by the Department for Business, Innovation and Skills (BIS) in an explanatory document would effectively "outlaw open Wi-Fi for small businesses", and would leave libraries and universities in an uncertain position.
"This is going to be a very unfortunate measure for small businesses, particularly in a recession, many of whom are using open free Wi-Fi very effectively as a way to get the punters in," Edwards said.
"Even if they password protect, they then have two options — to pay someone like The Cloud to manage it for them, or take responsibility themselves for becoming an ISP effectively, and keep records for everyone they assign connections to, which is an impossible burden for a small café."
In the explanatory document, Lord Young, a minister at BIS, described common classes of public Wi-Fi access, and explained that none of them could be protected. Libraries, he said, could not be exempted because "this would send entirely the wrong signal and could lead to 'fake' organisations being set up, claiming an exemption and becoming a hub for copyright infringement".
Universities cannot be exempted, Young said, because some universities already have stringent anti-file-sharing rules for their networks, and "it does not seem sensible to force those universities who already have a system providing very effective action against copyright infringement to abandon it and replace it with an alternative".
Subscriber vs ISP
Young added that universities will need to figure out for themselves whether they qualify as an ISP or a subscriber. This is a distinction that carries very different implications under the terms of the bill, which would establish possible account suspension as a sanction against subscribers who repeatedly break copyright law, and force ISPs to store user data and hand it over to rights holders when ordered to do so.
Businesses providing open Wi-Fi networks to customers and clients will also need to decide whether they are ISPs or subscribers, "depending on the type of service and the nature of their relationship with their consumers...although it appears unlikely that few other than possibly the large hotel chains or conference centres might be ISPs", Young said.
Young added that free or 'coffee shop' access tends to be too low-bandwidth to support file-sharing and, under the bill, "such a service is more likely to receive notification letters as a subscriber than as an ISP". He recommended that they secure their connections and install privacy controls, to "reduce the possibility of infringement with any cases on appeal being considered on their merits".
The BIS minister also noted that there was scope in the bill's text — currently being amended in the House of Lords — "to reflect the position of libraries, universities or Wi-Fi providers", perhaps by letting such organisations have different sets of thresholds that would trigger notification letters from rights holders.
"This would be a matter for the code and we would urge the relevant representative bodies to consider now how best to engage in the [Digital Economy Bill] code development process," he added.
The bill defines an 'internet access service' as an electronic communications service that "is provided to a subscriber, consists entirely or mainly of the provision of access to the internet, and includes the allocation of an IP address or IP addresses to the subscriber to enable that access".
An ISP is defined as a person who provides an internet access service, and a subscriber is defined as a person who "receives the service under an agreement between the person and the provider of the service, and does not receive it as a communications provider".
Referring to BIS's comments about the low bandwidth of coffee-shop connections, Lilian Edwards suggested it was "not correct to draft laws hoping they are difficult to break".
Edwards also pointed out that BIS's guidance for universities shows the government admitting "they don't know themselves how universities fit into the Digital Economy Bill".
"[Universities] don't know if they're subscribers, ISPs or neither," Edwards said. "If the government is not clear, how on earth are the universities supposed to respond? This seems almost unprecedented to me, for a government document."
Apple Removes Over 5000 Apps from iPhone App Store
Apple is causing a stir in the iPhone community by removing more than 5,000 apps from the App Store due to what it deems inappropriate content. Apple has long been known for being protective of what gets onto the App Store, but never before has it rejected or removed so many apps at once. The move has drawn a great deal of criticism from iPhone users, app developers and the media.
Apple's official reason for banning the apps is that it deems them “sexually inappropriate.” While some of these apps are very mature in nature, others are just joke apps going for a cheap laugh. In response to the removals, Apple has posted new guidelines for submitting apps. Here they are, as reported by MobileCrunch:
1. No images of women in bikinis (I personally find this rule highly hypocritical, as the SI swimsuit app is a top app on iTunes)
2. No images of men in bikinis
3. No skin
4. No silhouettes
5. No sexual connotations or innuendo
6. Nothing that can be sexually arousing
7. No apps will be approved that in any way imply sexual content
I, personally, am strongly against the stance Apple has taken. Censorship on any level is harmful to a person's development and bad for a society. I may not have agreed with what the banned apps showed, but they should not have been banned. We cannot live in a society where an entity, whether governmental or a business conglomerate, can tell us what we can and cannot watch, listen to or do, especially when no one is being hurt by it. Once that line is crossed, we can never go back, and the democratic society we live in will become nothing short of a socialist police state.
Connecticut Bill Would Reduce Penalty For 'Sexting' Between Consenting Minors
Two lawmakers want to lessen the penalty for "sexting" between consenting minors to make sure such children won't be charged with a felony and, if found guilty, forced to register as sex offenders.
Under existing law, it is a felony for children under 18 to send or receive text messages that include nude or sexual images, a practice known as sexting. Such messages are considered child pornography, and those convicted are put on a state sex-offender registry.
State Rep. Rosa Rebimbas, a freshman Republican legislator from Naugatuck, wants to make sexting between two consenting children a Class A misdemeanor. If a teenage girl sends a nude photo to her boyfriend, it is different from someone circulating nude photos of someone else without their consent, Rebimbas said. More options would prevent consenting children from having felony charges on their records, she said.
Complaints about sexting have popped up across Connecticut, and forums on the topic have been held in West Hartford, Naugatuck and other communities.
Naugatuck's informational forum and conversations with members of the Naugatuck Police Department prompted Rebimbas and Rep. David Labriola, R-Naugatuck, to propose the sexting bill, Rebimbas said.
Rebimbas said reaction to the bill has been positive, and most lawmakers see a need to discuss the bill because of changes in technology. "I think we do need to update our laws," she said.
The proposed sexting bill would allow flexibility, said state Rep. Michael Lawlor, D- East Haven and the House chairman of the judiciary committee. He said the committee would hold a public hearing on it.
West Hartford Police Chief James Strillacci, who represents the Connecticut Police Chiefs Association, said officers use their discretion in dealing with sexting. Officers are trying to protect children from the unforeseen consequences of their actions, he said, adding that not all acts of sexting result in felony charges.
Strillacci did not know how many children have been caught sexting in Connecticut. "I'm sure it's happening more than we know," he said. The office of the chief state's attorney did not return calls Friday.
A December 2009 Pew Research Center's Internet & American Life Project survey states that 18 percent of 800 youths aged 14-17 with cellphones reported receiving "sexually suggestive" nude or semi-nude images of someone they know. Seventeen percent of teens who pay their own cellphone bills said they have sent provocative texts.
U.S. v. ESTEY
United States of America, Plaintiff-Appellee,
v.
Jacob Benjamin Estey, Defendant-Appellant.
United States Court of Appeals, Eighth Circuit.
Submitted: January 14, 2010.
Filed: February 19, 2010.
Before MURPHY and BYE, Circuit Judges, and GOLDBERG,[1] Judge.
Defendant-Appellant Jacob Estey ("Estey" or "defendant") was convicted of one count of receipt of visual depictions of minors engaging in sexually explicit conduct, in violation of 18 U.S.C. § 2252(a)(2), and one count of possession of visual depictions of minors engaging in sexually explicit conduct, in violation of 18 U.S.C. § 2252(a)(4)(B). He was sentenced to 210 months imprisonment. On appeal, Estey argues the district court[2] erred in denying his motions to suppress, and abused its discretion in denying his motion for a new trial. He also contests the sentence imposed. We affirm.
A computer crime investigation unit in Spain informed the Federal Bureau of Investigation ("FBI") of computer IP addresses in the United States that were sharing child pornography using eDonkey and eMule peer-to-peer file-sharing software. One of the addresses matched the Des Moines residence of Estey. FBI Special Agent David Larson ("Larson") was assigned to work the suspected child pornography investigation in Des Moines. Larson obtained and executed a search warrant of Estey's residence.
During questioning, Estey admitted to FBI agents that he had copied programs containing child pornography onto disks when he disposed of his brother's computer and loaded the contents of the disks onto his own computer. Larson testified that Estey also admitted to going online and using the file-sharing software to collect child pornography. Images were found in the shared folder of the file-sharing software, allowing others to access the images on the internet. Photographs on Estey's computer corresponded to images discovered by the Spanish investigation unit. Hard drives and computer disks seized during the search of the residence revealed images of child pornography.
A. The district court did not err in denying the motions to suppress.
Estey moved to suppress evidence on two grounds. First, he argued that his confession was elicited in violation of the Fifth Amendment; and, second, that the probable cause for the search warrant for his residence was stale, in violation of the Fourth Amendment. The district court did not err in denying both motions to suppress.
Estey contends that his confession was involuntary because it was obtained by a promise of leniency from law enforcement officers. Whether a confession was voluntary is a question of law subject to de novo review, but factual findings underlying a district court's decision are reviewed under a clearly erroneous standard. United States v. Kilgore, 58 F.3d 350, 353 (8th Cir. 1995). "The test for determining the voluntariness of a confession is whether the police extracted the confession by threats, violence, or direct or implied promises, such that the defendant's will was overborne and his capacity for self-determination critically impaired." United States v. Gannon, 531 F.3d 657, 661 (8th Cir. 2008) (internal quotations omitted). Courts examine the totality of the circumstances in making this assessment. Id.
The record indicates that Estey's interview, including his confession, was voluntary. FBI agents appropriately advised Estey of his rights prior to a noncustodial interview. Estey was told that he did not have to speak with the FBI if he chose not to do so, that he had the right to refuse to answer all or any particular question, and that he was free to leave. The practice of agents providing such advice is a proper method to ensure that a noncustodial interview is not misinterpreted as a custodial interrogation and to avoid Miranda problems. See United States v. Bordeaux, 400 F.3d 548, 559-60 (8th Cir. 2005).
Estey's claim appears based on the notion that he misunderstood the assurance of FBI agents that he was not under arrest at that time, nor would he be under arrest at the end of the interview, to be an offer of total immunity. However, these statements were clearly not a promise of total immunity, nor were they an assurance precluding future prosecution. In fact, during the interview, Estey asked the FBI how much prison time he could expect to serve, indicating that he did not understand the statement as a promise of total immunity. Estey does not cite any other conduct, express or implied, suggesting his will was overborne and he was coerced to confess. In short, the totality of the circumstances does not indicate that Estey's will was overborne by the conduct of the law enforcement agents. Therefore, the district court properly denied the motion to suppress the confession.
Estey also challenges the district court's denial of his motion to suppress evidence seized during the search of his residence. In a suppression hearing, the district court ruled that the five-month delay prior to executing the warrant did not render the warrant invalid. The district court based its ruling on prior court decisions and FBI testimony explaining that child pornographers commonly retain pornography for a lengthy period of time. As further justification for its decision, the court added that individuals in a one-story house are unlikely to either move or replace computers within such a short span; there was, therefore, only a minuscule possibility that no illicit images would be found on the computer. On appeal, Estey argues that the district court erred because the search warrant was based on stale information and therefore lacked probable cause. "We examine the factual findings underlying the district court's denial of the motion to suppress for clear error and review de novo the ultimate question of whether the Fourth Amendment has been violated." United States v. Williams, 577 F.3d 878, 880 (8th Cir. 2009), citing United States v. Walsh, 299 F.3d 729, 730 (8th Cir. 2002).
Probable cause for a warrant search "exists if there is a fair probability that contraband or evidence of a crime will be found in a particular place." United States v. Hartje, 251 F.3d 771, 774 (8th Cir. 2001). "A warrant becomes stale if the information supporting [it] is not sufficiently close in time to the issuance of the warrant and the subsequent search conducted so that probable cause can be said to exist as of the time of the search." United States v. Brewer, 588 F.3d 1165, 1173 (8th Cir. 2008) (internal quotations omitted). "There is no bright-line test for determining when information is stale...time factors must be examined in the context of a specific case and the nature of the crime under investigation." United States v. Summage, 481 F.3d 1075, 1078 (8th Cir. 2007). The factors in determining whether probable cause has dissipated, rendering the warrant fatally stale, "include the lapse of time since the warrant was issued, the nature of the criminal activity, and the kind of property subject to the search." United States v. Gibson, 123 F.3d 1121, 1124 (8th Cir. 1997).
We agree with the district court's determination that the information in the search warrant was not stale. While Estey is correct to note there are outer limits to the use of such evidence, this case involves a search warrant issued five months after discovering information linking the defendant's residence with child pornography. This Court, and others, have held that evidence developed within several months of an application for a search warrant for a child pornography collection and related evidence is not stale. See, e.g., United States v. Horn, 187 F.3d 781, 786-787 (8th Cir. 1999) (warrant not stale three or four months after child pornography information was developed); United States v. Davis, 313 Fed. Appx. 672, 674 (4th Cir. 2009) (holding that information a year old is not stale as a matter of law in child pornography cases); United States v. Hay, 231 F.3d 630, 636 (9th Cir. 2000) (warrant not stale for child pornography based on six-month-old information); United States v. Lacy, 119 F.3d 742, 745-46 (9th Cir. 1997) (warrant upheld for child pornography based on ten-month-old information). Furthermore, in denying Estey's motion, the district court noted that this Court has acknowledged similar observations to the FBI agent's statements about the habits of child pornography collectors. See United States v. Chrobak, 289 F.3d 1043, 1046 (8th Cir. 2002). Meanwhile, Estey does not offer evidence contrary to the FBI statements regarding the habits of child pornography collectors. Given the circumstances of the case and the nature of the crime, the execution of the warrant five months after the development of the information did not render the warrant deficient in any respect based on stale information. Estey's motion to suppress was properly denied.
B. The district court did not abuse its discretion in denying Estey's motion for a new trial.
Estey also contests the district court's denial of his motion for a new trial. Estey's motion relates to the conduct of a juror during voir dire. The juror in question was a frequent contributor to the website ratemybody.com. The website positions itself as a dating site and allows members to rate one another's attractiveness. It does not mention or allude to pornography or obscenity. The website contains a public bulletin board where members can discuss a variety of topics of general interest.
To obtain a new trial due to juror dishonesty during voir dire, "a party must first demonstrate that a juror failed to answer honestly a material question on voir dire and then further show that a correct response would have provided a valid basis for challenge for cause." McDonough Power Equip., Inc. v. Greenwood, 464 U.S. 548, 556, 104 S. Ct. 845 (1984). Under this standard, "only those reasons that affect a juror's impartiality can truly be said to affect the fairness of a trial." Id. Estey's argument that the district court abused its discretion in denying his motion for a new trial because a juror lied about a material matter during voir dire is unpersuasive.
Estey fails to establish that the juror at issue answered a question dishonestly. During voir dire, the defendant's counsel asked the venire jury panel the following: "[a]nybody here ever written [sic] a letter to the editor, anybody else or called into a radio program or made a public speech or anything on any subject related to this?...Does anybody belong to any groups that have a position on this type of subject matter?" The district court correctly found that there was no evidence that the juror answered dishonestly since there is no evidence that the juror ever wrote publicly about child pornography or obscenity or that he belonged to a group with a position on this subject matter. In essence, Estey is accusing the juror of lying in response to a question that was not asked. Estey attempts to frame the subject as the exchange of materials of a sexual nature on the internet. However, Estey's charges clearly relate to child pornography, not the exchange of materials of a sexual nature, which is not a federal crime. In fact, there is no indication that the juror wrote or exchanged materials of a sexual nature. Furthermore, a defendant is not entitled to a new trial when any problem with a juror's answer during voir dire was caused by the poor quality of the question asked. See United States v. Williams, 77 F.3d 1098, 1100-01 (8th Cir. 1996). Thus, had Estey's counsel intended to ask broader questions about sexuality in general, the counsel could have done so.
Nor has Estey established that the juror was motivated by actual partiality. See United States v. Tucker, 137 F.3d 1016, 1029 (8th Cir. 1998) (to challenge a juror for cause, a party must show actual partiality growing out of the nature and circumstances of the specific case). The juror's contributions to the website do not establish bias and nothing in the record establishes that the juror had any impressions or opinions impacting his ability to be impartial. See Moran v. Clarke, 443 F.3d 646, 650-51 (8th Cir. 2006) ("Essentially, to fail this standard, a juror must profess his inability to be impartial and resist any attempt to rehabilitate his position.").
Estey fails to prove that the juror's contributions to the website, if known, would have supported striking him for cause. See Tucker, 137 F.3d at 1029. Here, the juror made comments on the bulletin board of the website after the petit jury was empaneled but before the jury verdict. The juror stated that he had jury duty for a federal case. The juror did not post comments about the substance of the trial until after the verdict. The juror's failure to disclose his website postings during the trial does not amount to concealing a material fact. Immaterial statements by a juror about the trial, such as about scheduling, are neither prohibited nor prejudicial. United States v. Tucker, 243 F.3d 499, 510 (8th Cir. 2001). While Estey argues that the juror may have spoken privately with forum members about the trial, there is no evidence of this, and the allegation is purely speculative. See United States v. Whiting, 538 F.2d 220, 223 (8th Cir. 1976) ("Where an attack is made upon the integrity of the trial by reason of alleged misconduct on the part of a juror in failing to disclose information pertinent to the issue of prejudice, the defendant's burden of proof must be sustained not as a matter of speculation, but as a demonstrable reality.").
Estey cannot show that the juror answered a question dishonestly, that the juror was motivated by partiality, or that the facts, if known, would have supported striking the juror for cause. Consequently, the district court did not abuse its discretion in overruling Estey's motion for a new trial.
Estey further argues that the district court should have recused itself from ruling on his motion for a new trial. Estey's claim that the presiding judge should have recused himself pursuant to 28 U.S.C. § 455 (b)(1) also relates to the alleged misconduct by the juror in question. On December 5, 2008, the presiding judge learned of the juror's internet activity and spoke with both counsel with the understanding that a motion would be filed on Estey's behalf. The following day, the judge had a dinner party with current staff and former clerks. The judge briefly mentioned that he had the unusual experience of a juror engaging in an online chat during and immediately after the trial. No other details were provided. At some subsequent point, the Court learned that one of the individuals present at that conversation apparently was a co-worker of the juror. The individual knew that his co-worker had been a juror in a federal case the prior week. On Monday, December 8, the juror's online profile and information at ratemybody.com had been deleted. The judge then sent a letter to both counsel disclosing this information and stating that he was personally aware of circumstances that may have contributed to the deletion of online material.
A district court's denial of a motion for recusal is reviewed for abuse of discretion. See, e.g., United States v. Tucker, 82 F.3d 1423, 1425 (8th Cir. 1996); Pope v. Fed. Express Corp., 974 F.2d 982, 985 (8th Cir. 1992). A federal judge must recuse himself from a case if he has "a personal bias or prejudice concerning a party, or personal knowledge of disputed evidentiary facts concerning the proceeding." 28 U.S.C. § 455 (b)(1).
Estey's claim that the presiding judge had personal knowledge of disputed evidentiary facts concerning the proceeding is completely without merit. The central issue in Estey's motion, which Estey admits, is the credibility of the juror and whether the juror belongs to groups dealing with the type of subject matter that was the focus of the trial. Nothing in the record suggests that the judge had any personal knowledge about the juror's affiliation with groups involved with pornography or obscenity nor does the record indicate any knowledge by the court about the juror ever expressing public opinions on this subject matter.
Although Estey does not suggest the district court acted improperly, he claims the judge inadvertently became a material witness because the judge was the only person with knowledge that the defendant's counsel had communicated the basis of the motion for a new trial to the court. The judge then mentioned the basis of the motion for a new trial to a co-worker of the juror in question before defense counsel had filed the motion. However, the judge never suggested that a post-trial motion was expected, nor did he discuss any confidential information. More fundamentally, the judge did not have any more information about the juror's acts and opinions than the information obtained by the parties themselves after reviewing the voir dire questions and reading the juror's website postings. The judge took practical steps by disclosing fully to the parties the rather unusual circumstances that may have contributed to the deletion of online material. The judge promptly informed the parties that it appeared that a comment was made to the juror that may have prompted him to delete his online profile. The judge clearly stated that no confidential information was imparted at the dinner.
Nothing in the record establishes that the district court had any insight into the juror who was the focus of the new trial motion. Since the district court did not have personal knowledge of disputed evidentiary facts, the court did not abuse its discretion in denying the motion to recuse.
C. The two-level enhancement under U.S.S.G. § 2G2.2(b)(3)(F) was properly applied.
Estey also objects to the two-level sentencing enhancement imposed by the district court under U.S.S.G. § 2G2.2(b)(3)(F)[3] for distributing child pornography. We review de novo whether the district court correctly interpreted and applied the sentencing guidelines, while district court factual findings are reviewed for clear error. United States v. Mashek, 406 F.3d 1012, 1016-1017 (8th Cir. 2005). Application note 1 following § 2G2.2 defines "distribute" as:
"any act, including possession with intent to distribute, production, transmission, advertisement, and transportation, related to the transfer of material involving the sexual exploitation of a minor. Accordingly, distribution includes posting material involving the sexual exploitation of a minor on a website for public viewing but does not include the mere solicitation of such material by a defendant."
At sentencing, the district court relied upon this Court's decision in United States v. Griffin, 482 F.3d 1008 (8th Cir. 2007). In Griffin, the Court found that distribution occurred when a defendant "used a file-sharing network [Kazaa] to distribute and access child pornography." Id. at 1013. The defendant's "use of the peer-to-peer file-sharing network made the child pornography files in his shared folder available to be searched and downloaded by other Kazaa users." Id. at 1012; see also United States v. Sewell, 457 F.3d 841, 842 (8th Cir. 2006) (by using Kazaa to download images of child pornography, the defendant "made these images available to be searched and downloaded by other Kazaa users by failing to disable the Kazaa feature that automatically places the files in a user's My Shared Folder.").
Estey's efforts to distinguish his case from Griffin are unconvincing. Estey notes that Griffin involved a five-level enhancement for distribution of child pornography "for the receipt, or expectation of receipt, of a thing of value, but not for pecuniary gain" under U.S.S.G. § 2G2.2(b)(3)(B). Estey, meanwhile, was charged with the catchall two-level enhancement under U.S.S.G. § 2G2.2(b)(3)(F). However, this distinction merely suggests that Estey's use of the file-sharing program could have amounted to distribution under either of these subparts of U.S.S.G. § 2G2.2(b)(3).
Second, Estey's argument that he inadvertently shared images with other network users is undermined by the record. Estey attempts to claim that, unlike Griffin, where the defendant admitted to knowingly using a file-sharing program, Estey took steps to disable the file-sharing feature of the program. He argues that inadvertent file-sharing is not analogous to "posting material" under Application note 1 to U.S.S.G. § 2G2.2(b)(3). However, like the defendants in both Griffin and Sewell, Estey knowingly placed an internet peer-to-peer file-sharing program on his computer, knew how the program operated, and shared images with other network users. Distribution under U.S.S.G. § 2G2.2(b) occurs where a defendant "downloads and shares child pornography files via an internet peer-to-peer file-sharing network, as these networks exist, as the name 'file-sharing' suggests, for users to share, swap, barter, or trade files between one another." Griffin, 482 F.3d at 1013. The record indicates Estey collected images from program searches and that other users were able to receive these images because Estey had child pornography in his file-sharing folder when the images were discovered by law enforcement authorities. Moreover, Estey admitted that he would place pictures in the file-sharing folder because sharing with others allowed him to download faster. Therefore, the district court properly applied the two-level enhancement for distribution under U.S.S.G. § 2G2.2(b)(3).
For the foregoing reasons, we affirm the judgment of the district court.
1. The Honorable Richard W. Goldberg, Judge, United States Court of International Trade, sitting by designation.
2. The Honorable James E. Gritzner, United States District Judge for the Southern District of Iowa.
3. Section 2G2.2(b)(3)(F) applies to "distribution other than distribution described in subdivisions (A) through (E)." The Government argues that subdivision (B) is relevant here, which provides for a five-level enhancement for distribution of child pornography to another person "for the receipt, or expectation of receipt, of a thing of value, but not for pecuniary gain." U.S.S.G. § 2G2.2(b)(3)(B).
This copy provided by Leagle, Inc.
Chatroulette: Eye Vagina
DMCA Takedown 101
The Digital Millennium Copyright Act (DMCA) is one of the best-known and most controversial pieces of legislation passed in recent years. It has had a greater impact on the Web than virtually any other piece of legislation and is largely responsible for much of the Web we see today.
One of the critical elements of the DMCA was the Safe Harbor provisions, which established a notice-and-takedown system for removing allegedly infringing works from the Web.
For the most part, that system has been used as intended. Countless DMCA notices have been filed to secure the removal of everything from illegal MP3s and movies to plagiarized poems. However, the system has also been abused at times and mistakes have been made in other cases.
Given how important this process has become, it is crucial for bloggers, Webmasters, hosts and anyone who posts content online to understand how the procedure works so they can both take advantage of it to protect their own work, if needed, and be prepared to answer any claims that are filed against them.
In 1998, U.S. copyright law was beginning to look dated. Though Congress had just rewritten the entire copyright statute some twenty years prior, the rapid growth of the Internet had left a lot of questions about copyright unanswered.
One of the bigger questions was what liability Web hosts and other service providers faced when users on their networks committed copyright infringement. Before the DMCA, sites such as GeoCities, which hosted content for users, could theoretically be sued for contributing to copyright infringement simply because they provided the hosting for unlawful content.
As part of the DMCA, which itself stemmed largely from the World Intellectual Property Organization Copyright Treaty, Congress gave Web hosts "safe harbor" from such liability provided they met certain qualifications and obeyed a set of rules. This meant that Web hosts could not be held liable for infringement that took place on their service, so long as they completed the necessary elements.
In the end there were four different kinds of online service providers that were given protection under the law:
1. Conduits: Services that were not destinations in and of themselves (e.g., broadband access providers) were granted complete safe harbor for infringements that passed through their networks.
2. Caching Services: Services that cache data temporarily, such as those used by many broadband providers to speed up access, were also granted complete safe harbor.
3. Web Hosts: Services that host content were given safe harbor provided they had no knowledge of the infringement, lacked the ability to control it, did not encourage it, did not profit directly from it and worked expeditiously to remove infringing material after receiving proper notification.
4. Information Location Tools: Similar to Web hosts, information location tools, including search engines and directories, were given safe harbor provided they met a similar set of criteria.
Though this element of the law has seen its fair share of controversy, without safe harbor, many Web sites, including YouTube and most social networking sites, would be almost impossible to operate. The legal risks would simply be too high. In short, this procedure is largely responsible for many of the sites and services we have today.
The Takedown Notice
Under the DMCA, copyright holders and their agents can demand removal of allegedly infringing content. To do that, they must provide a complete takedown notice. Under the law, this notice must contain the following elements:
1. A physical or electronic signature of a person authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.
2. Identification of the copyrighted work claimed to have been infringed, or, if multiple copyrighted works at a single online site are covered by a single notification, a representative list of such works at that site.
3. Identification of the material that is claimed to be infringing or to be the subject of infringing activity and that is to be removed or access to which is to be disabled, and information reasonably sufficient to permit the service provider to locate the material.
4. Information reasonably sufficient to permit the service provider to contact the complaining party, such as an address, telephone number, and, if available, an electronic mail address at which the complaining party may be contacted.
5. A statement that the complaining party has a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law.
6. A statement that the information in the notification is accurate, and under penalty of perjury, that the complaining party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.
This notice, which must be filed by the copyright holder or an agent working for them, is sent to the service provider's DMCA agent, which all service providers must appoint and register with the U.S. Copyright Office. Most DMCA filers use some form of stock letter to help speed the process along.
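Since most filers work from a stock letter, the notice-assembly step can be sketched as a simple fill-in-the-blanks script. The field names and template wording below are invented for illustration and are not legal language; the six required keys simply mirror the elements listed above.

```python
# A minimal sketch of a "stock letter" generator for a DMCA takedown
# notice. The six required fields mirror the statutory elements above;
# all names and wording here are illustrative only, not legal language.

REQUIRED_FIELDS = [
    "signature",        # 1. signature of the rights holder or agent
    "work",             # 2. identification of the copyrighted work
    "infringing_url",   # 3. location of the allegedly infringing material
    "contact",          # 4. contact information for the complaining party
    "good_faith",       # 5. good-faith statement
    "accuracy",         # 6. accuracy/authority statement under penalty of perjury
]

TEMPLATE = """To the designated DMCA agent:

I am the owner (or an agent of the owner) of the copyrighted work
identified as: {work}

This work is being infringed at: {infringing_url}

{good_faith}
{accuracy}

Contact: {contact}
Signed: {signature}
"""

def build_notice(fields: dict) -> str:
    """Return a filled-in notice, refusing to emit an incomplete one."""
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"notice is incomplete; missing: {missing}")
    return TEMPLATE.format(**fields)
```

The completeness check reflects how the process works in practice: a notice missing any required element can simply be rejected by the host's DMCA agent, so it pays to verify before sending.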
Once the notice has been received, the host must first make sure it is a complete notice and then either remove or disable access to the allegedly infringing work. This can be done many ways but is usually handled by simply backing up and deleting the allegedly infringing material.
With that done, the host then usually contacts the client involved, who in turn has the opportunity to respond.
The Counter-Notice Process
The client who is the subject of the DMCA notice has several courses of action they can take.
First, they can simply do nothing. If the notice was valid and the takedown was just, they can accept that the work has been disabled.
Second, if the work was not infringing and the notice was either in error or malicious, the client can then file what is known as a counter-notice. That notice must contain the following elements:
1. A physical or electronic signature of the subscriber.
2. Identification of the material that has been removed or to which access has been disabled and the location at which the material appeared before it was removed or access to it was disabled.
3. A statement under penalty of perjury that the subscriber has a good faith belief that the material was removed or disabled as a result of mistake or misidentification of the material to be removed or disabled.
4. The subscriber's name, address, and telephone number, and a statement that the subscriber consents to the jurisdiction of Federal District Court for the judicial district in which the address is located, or if the subscriber's address is outside of the United States, for any judicial district in which the service provider may be found, and that the subscriber will accept service of process from the person who provided notification.
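The four elements above lend themselves to a quick pre-screening checklist. As a rough sketch (the keys and descriptions are invented for the example; real wording should track the statute), a draft counter-notice could be checked like this:

```python
# A checklist helper for the four counter-notice elements listed above.
# Keys and descriptions are illustrative; an actual counter-notice
# should be drafted, or at least reviewed, against the statute itself.

COUNTER_NOTICE_ELEMENTS = {
    "signature": "physical or electronic signature of the subscriber",
    "material": "identification and prior location of the removed material",
    "good_faith": "good-faith statement, under penalty of perjury, of mistake or misidentification",
    "jurisdiction": "name, address, phone number, and consent to jurisdiction and service of process",
}

def missing_elements(counter_notice: dict) -> list:
    """Return descriptions of any required elements not supplied."""
    return [desc for key, desc in COUNTER_NOTICE_ELEMENTS.items()
            if not counter_notice.get(key)]
```

An empty result means every element is present; anything returned is a gap that could get the counter-notice ignored.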
When a counter-notice is filed, the host must then notify the person who filed the original notice and then, between 10 and 14 business days later, restore the work that was taken down. In that time, the filer of the notice has the option of seeking resolution in the courts and obtaining an injunction that will keep the work offline.
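The 10-to-14 business day window can be worked out with ordinary calendar arithmetic. This sketch skips weekends only; real deadlines may also turn on public holidays and on when the host counts receipt of the counter-notice, so treat it as a rough estimate rather than legal advice.

```python
# A rough illustration of the 10-14 business day restoration window.
# Only weekends are skipped; holidays and receipt rules are ignored.

from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `start` by `days` weekdays (Mon-Fri)."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 are Monday through Friday
            remaining -= 1
    return current

def restoration_window(counter_notice_received: date) -> tuple:
    """Earliest and latest dates the work may be restored."""
    return (add_business_days(counter_notice_received, 10),
            add_business_days(counter_notice_received, 14))
```

For example, for a counter-notice received on Monday, 1 March 2010, the work could be restored no earlier than 15 March and no later than 19 March.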
Finally, in extreme cases where the notice was false and filed knowingly so, the subscriber/user can file suit against the filer for damages including attorney's fees and court costs.
How to File a Takedown Notice
If you discover that work you hold the copyright in is being infringed and wish to file a DMCA notice, you can take the following steps to do so.
1. Determine if the work is infringing, consult an attorney if necessary.
2. Take screenshots or otherwise preserve the infringing site, useful if a dispute should arise later.
3. Obtain a stock DMCA notice template and fill it in with the required information.
4. Using a service such as WhoIsHostingThis or Domain Tools, locate the host of the site where the work is located.
5. Look on the host's site and attempt to locate the contact information for their DMCA agent.
6. Failing that, see if the host has registered with the U.S. Copyright Office and provided the needed information there.
7. If that fails, send the notice to the host's abuse team.
8. Wait at least 72 hours and ensure that the work has been removed.
9. If unable to secure removal of the work (e.g., the host is not U.S.-based or is otherwise uncooperative), consider filing a notice with each of the major search engines.
If everything goes according to plan, the work should be removed in a couple of business days. Some hosts respond very quickly, even within the hour, while others may take a little more time. Be patient with your notices and be on the lookout for the work to be removed, as not all hosts will send confirmations via email.
How to Respond to a Notice
If you are on the other end of a DMCA notice, you need to take certain steps to ensure that your rights are not trampled on or that the process is not abused.
1. Request a full copy of the notice if it isn't provided so that you can understand who filed the notice, what works they are claiming to be infringing and the works they say are the originals.
2. Determine if the notice is valid or was sent in error. Consult an attorney if necessary.
3. If the notice was in error or malicious, file a counter-notice as promptly as possible, even if you do not wish to have the work restored. Instructions for doing so should be included with the notification of the takedown. If required, you can use a template for responding.
It is important to file a counter-notice if the takedown was sent in error, even if you view the takedown as not being a big deal, because hosts are required to ban and delete the accounts of repeat infringers. If you receive too many DMCA notices, you may find your entire account disabled, even if the notices were invalid. Filing counter-notices helps prevent that from happening.
Takedowns in Other Countries
The biggest limitation of the DMCA notice-and-takedown system is that the DMCA is limited to the U.S. and only applies to Web hosts and search engines located within the country.
However, other nations have adopted very similar systems. The European Union has the European Directive on Electronic Commerce, which offers a very similar procedure (though implementation differs from country to country). Australia is another nation with a notice-and-takedown system.
Other nations, such as India, do not have any safe harbor at all, meaning that there is no formal system for demanding removal of work but hosts are generally cooperative due to the threat of a lawsuit. Still other nations, including Canada, have no notice-and-takedown system but also provide complete safe harbor for Web hosts, meaning there is little way to compel Web hosts to remove content short of a court order.
When filing a takedown notice with an ISP in another country, it is best to check the laws that exist there and ensure that your notice is compliant with their terms.
The DMCA process, whether on the filing or receiving end, should never be taken lightly as missteps could, and often do, have very serious legal implications. If you are unsure about what to do in a specific situation, consult an attorney.
As controversial as the DMCA safe harbor protections have been at times, it has enabled many of the sites we enjoy every day. Furthermore, many of the incidents that have caused controversy could have been resolved by one or all parties understanding the law better and using it correctly.
This is why everyone who posts content online should be at least aware of this process and how it works. Even if you aren't a U.S. citizen, it is very likely your host is and the search engines people use to find you almost certainly are. As such, the law can affect you.
Fortunately, the process itself is fairly easily understood and offers a great deal of protection, both for those who need to file a notice against infringing material and for those who are filed against without cause. However, as with any law, it is up to us to apply it correctly, and that is the greatest challenge we face.
IOC Orders Blogger to Take Down Video
The International Olympic Committee has ordered a blogger to remove a video showing the death of Georgian luger Nodar Kumaritashvili from his website.
During a practice run leading up to this year's Winter Olympics in Vancouver, Kumaritashvili flew off the track and slammed against steel support pillars around the track's perimeter.
Stephen Pate, publisher of the online site NJN Network, published the video along with commentary about the death, and the IOC has since ordered him in an email to take it down.
The IOC asserts that it owns all the rights to all images taken at the games, and that only licensed broadcasters can use them. However, Pate points to a Canadian law that allows copyrighted images to be used in newsworthy cases.
"One of the rights is for news organizations to report the news, so it's a news story," Pate told CBC.
"The man died. Why did he die, was there negligence, what happened? And secondly, why is an international sports body trying to restrict the right of the public to see the story? And that made it a newsworthy item to write up."
Aside from the copyright issue, the IOC also said that the footage was disrespectful to the Kumaritashvili family.
The International Luge Federation said Mr. Kumaritashvili failed to compensate properly when he slid into the last curve.
But its chairman, Josef Fendt, said later that the track was far faster than its designers ever intended it to be.
Officials shortened the track before the Olympics began to slow speeds and raised safety barriers to keep lugers on the track if they crashed.
NBC's Broken Olympic Coverage Manages To Annoy Absolutely Everyone
Let us put aside for a moment the rah-rah, "Go Team USA" focus of the NBC coverage that often bugs viewers who would like a more global view of the Olympics. Let us also set aside sport-specific beefs, like the way Scott Hamilton's groaning has gotten completely out of hand when he's calling figure skating, or the way the curling announcers make it sound like only a three-year-old wouldn't know precisely how to win every single game with ease, because they certainly could.
The mere structure of the NBC coverage has left a great deal to be desired this time around, and it came to a head last night when they shuffled the much-anticipated USA-Canada hockey game off to MSNBC, in part to use NBC as a showcase for probably the least anticipated of the figure skating events: ice dancing. (Along with some speed skating, bobsled, and the men's super-G, which happened earlier in the day -- oh, and the much-hyped ski cross event.)
The basic problem with NBC's coverage is that they haven't improved the fundamentals of the coverage in spite of massive changes in the way people take in content. The prime-time coverage is largely as it's always been: a few events (including figure skating) are heavily showcased, a few other events (most skiing and speed skating fall into this category) are usually shown in an abbreviated format regular viewers instantly recognize as "USA-Plus" (meaning you see the Americans, plus a few other people who are relevant because they either do very well or wipe out spectacularly), and two events -- hockey and curling -- are shown as complete events, but they're shoved off to cable.
West-coast residents have been particularly incensed that they wait an additional three hours after the East coast gets whatever "live" coverage there actually is in prime time, even though they are in the time zone where the Olympics actually are. What this means is that even if NBC is showing "live" coverage of its big events in New York, which is across the continent from Vancouver, it delays them three hours for Seattle, which is less than three hours south of Vancouver.
The "spoiler" problem and the future, after the jump.
Because what NBC perceives to be the high-profile events are frequently shoved into the evening, the ones that happen earlier in the day are dealt a particular blow. This has particularly plagued some of the skiing events, where NBC chooses to sit on the tape of the events for hours and hours, during which time other news outlets inevitably report on them (see the recent discussion from the NPR ombudsman about why news organizations can't really ignore news events just because somebody else is withholding the tape from viewers rather than airing it).
With the hockey game last night, everyone knew it was going to be an important game, and if you were anywhere near Twitter, you knew that it was whipping fans into an absolute frenzy. NBC eventually cut over to show about the last 30 seconds, but by then, the opportunity had been missed.
This just isn't the way people follow ... anything, really, at this point. At one time, you could broadcast events hours after they happened, and you'd have a reasonable chance that people could live in a bubble while they were waiting. That is not the world we live in anymore. The fantasy that is indulged when Bob Costas speaks breathlessly about an upcoming ski race where he already knows exactly what happened is no longer even a fragile fantasy; it's a blatant fiction that everyone knows about.
Naturally, NBC wants to kick the big events into prime-time for ratings reasons, and it's hard to argue with their ratings successes for these Olympics, which have been massive. Nevertheless, they're clinging to a broadcast model that's not only on its last legs -- it's on the last toe of the last leg. This isn't Wide World Of Sports -- people don't want to wait around for when your big sports show happens to take place.
Self-scheduling is the rule, at this point. It's harder and harder to tell people when they will watch things, and in what form. I can't prove it, but my sense is that part of the reason so many of us have taken to watching curling is that you can see entire matches, without the break-ins from Costas and the cutaways to other sports.
There's probably too much action in a set of Olympics for absolutely everything to be shown top-to-bottom, and perhaps that would be boring, anyway. But if the broadcast networks who cover this stuff don't find a way to stop pretending it's still 1976, where an event happens when the person who owns the broadcast rights tells you it happens, they're going to wind up being left in the dust by whatever manipulator of technology figures out how to do it better.
MagicJack Dials Wrong Number in Legal Attack on Boing Boing
Gadget maker MagicJack recently lost a defamation lawsuit that it filed against Boing Boing. The judge dismissed its case and ordered it to pay us more than $50,000 in legal costs.
The Florida-based VOIP company promotes a USB dongle that allows subscribers to make free or inexpensive phone calls over the internet. I posted in April 2008 about its terms of service—which include the right to analyze customers' calls—and various iffy characteristics of its website.
We had no idea that it would file a baseless lawsuit to try and shut me up, that CEO Dan Borislow would offer to buy our silence after disparaging his own lawyers, or that MagicJack would ultimately face legal consequences for trying to intimidate critics.
At several points in the process, we could have taken a check and walked away: as it is, the award doesn't quite cover our costs. But we don't like being bullied, and we wanted the chance to tell anyone else threatened by this company what to expect.
The post was titled "MagicJack's EULA says it will spy on you and force you into arbitration." This EULA, or End-User Licensing Agreement, concerns what subscribers must agree to in order to use the service. I wrote that MagicJack's EULA allows it to target ads at users based on their calls, was not linked to from its homepage or at sign-up, and has its users waive the right to sue in court. I also wrote that MagicJack's website contained a visitor counter that incremented automatically, and that the website claimed to be able to detect MagicJacks, reporting that "Your MagicJack is functioning properly" even when none are present.
In the lawsuit, filed March 2009 in Marin, Ca., MagicJack alleged that these statements were false, misleading, and had irreparably harmed MagicJack's reputation by exposing it to "hate, ridicule and obloquy." The lawsuit demanded removal of our post and unspecified damages. It also alleged that I am a professional blogger.
Published in the gadgets section of our site, the post didn't criticize the service or the gadget itself, which works very well. Though just 200 words long, it soon came up among search results for the company's name. It was also reposted on Boing Boing's homepage by Cory Doctorow, under the title "MagicJack net-phone: swollen pustule of crappy terms of service and spyware."
Boing Boing has a long history of covering EULAs and related issues; we're also no stranger to legal intimidation. Believing that the suit sought to exploit the trivial matter of the website counter to silence our discussion of the more important issues, we fought back. Our lawyers, Rob Rader, Marc Mayer and Jill Rubin of MS&K, determined that it was a SLAPP lawsuit: a strategic lawsuit against public participation. In such a lawsuit, winning is not the main objective. Instead, it is crafted to harry critics, not least with the high cost of fighting a lawsuit, into abandoning their criticism. New York Supreme Court Judge J. Nicholas Colabella wrote that "short of a gun to the head, a greater threat to First Amendment expression can scarcely be imagined."
California led the way in fighting such lawsuits, passing an anti-SLAPP statute in 1992. This allows defendants to file a special motion to strike complaints leveled against constitutionally protected speech--and to recover costs. Accordingly, our lawyers filed such a motion, which forced MagicJack to show it would have a 'reasonable probability' of prevailing if it went to trial.
After it failed to do so, a California judge dismissed MagicJack's suit late last year. She noted that in its complaint, MagicJack essentially admitted the very act it claimed to be defamed by.
"Plaintiff's own evidence shows that the counter is not counting visitors to the website as a visitor visits the site," wrote Judge Verna A. Adams. "Instead, the visitor is seeing an estimate. ... As to the statements based on the EULA, such statements, read in context, do not imply that the plaintiff is eavesdropping on its customers' calls. Instead, the statements clearly constitute the opinion of the author that analyzing phone numbers for purposes of targeted advertising amounts to 'spy[ing],' 'snoop[ing],' and 'systematic privacy invasion.'"
After the dismissal of the lawsuit, MagicJack CEO Dan Borislow apologized and told us that his lawyers, Arnold & Porter, did not fully disclose to him the weaknesses in his case or properly analyze California law. During negotiations, we were surprised when MagicJack agreed to a settlement of our legal costs, then backed out.
We would not agree to keep the actual legal dispute confidential under any circumstances. However, we offered not to publish details of our legal costs or their settlement if Borislow would donate $25,000 to charity. MagicJack, however, offered to pay our legal bill only if we'd agree to keep the whole dispute confidential; when we refused, Borislow wrote that he would 'see us in court.' Nonetheless, we're happy with the outcome. The irony for MagicJack is that the proceedings are public record, so the silence it sought was effectively worthless.
MagicJack's relentless infomercials are a staple of cable television. The gadget itself, no larger than a 3G modem, earned praise from gadget reviewers and opprobrium from Florida's attorney general.
According to Wikipedia, the Better Business Bureau of Southeast Florida has received hundreds of customer service complaints, primarily related to difficulty returning the product under the 30-day guarantee and the longtime lack of an uninstaller. In 2008, its grade with the Better Business Bureau was reported to be "F." According to the bureau, this rating is assigned to companies under the following circumstances: "We strongly question the company's reliability for reasons such as that they have failed to respond to complaints, their advertising is grossly misleading, they are not in compliance with the law's licensing or registration requirements, their complaints contain especially serious allegations, or the company's industry is known for its fraudulent business practices."
MagicJack currently has an A- rating after becoming an 'Accredited' BBB partner in 2009. At the time of its failing grade, however, Borislow dismissed the Better Business Bureau's system as meaningless: "I have Comcast cable in my house; they are rated an F. I use Sprint on some of our phones; they are rated an F. My bank, who I have been with forever, Colonial Bancorp, is rated an F. The list is endless." The BBB's 'TrustLink rating' gives it only two stars out of five.
In December 2008, MagicJack filed a $1,000,000 lawsuit (docket) against competitors Joiphone and PhonePower, who linked to a blog post by a Singapore-based blogger whose own discussion of the product echoed ours: "Magic Jack (sic) will spy on you and force you into arbitration," Vinay Rasam headlined a post at now-vanished site voipphoneservices.org. In that case, MagicJack's legal rationale was trademark infringement, false advertising and violation of the unfair trade practices act.
In April 2009, MagicJack reached a settlement with Florida's Attorney General after claims it charged customers for services during the 'free' trial period. The company paid the state's costs and made no admission that it broke the law. Investigators in the case found that MagicJack's product had limitations that were not properly disclosed, and that the company did not respond adequately to customer complaints.
The BBB in Florida lists a corporate number for MagicJack, but while the customer care department has a webchat service, it does not appear to have a public telephone number of its own.
When Using Open Source Makes You an Enemy of the State
The US copyright lobby has long argued against open source software - now Indonesia's in the firing line for encouraging the idea in government departments
It's only Tuesday and already it's been an interesting week for the world of digital rights. Not only did the British government change the wording around its controversial 'three strikes' proposals, but the secretive anti-counterfeiting treaty, Acta, was back in the headlines. Meanwhile, a US judge is still deliberating over the Google book settlement.
As if all that wasn't enough, here's another brick to add to the teetering tower of news, courtesy of Andres Guadamuz, a lecturer in law at the University of Edinburgh.
Guadamuz has done some digging and discovered that an influential lobby group is asking the US government to basically consider open source as the equivalent of piracy - or even worse.
It turns out that the International Intellectual Property Alliance, an umbrella group for organisations including the MPAA and RIAA, has asked the US Trade Representative to consider countries like Indonesia, Brazil and India for its "Special 301 watchlist" because they use open source software.
What's Special 301? It's a report that examines the "adequacy and effectiveness of intellectual property rights" around the planet - effectively the list of countries that the US government considers enemies of capitalism. It often gets wheeled out as a form of trading pressure - often around pharmaceuticals and counterfeited goods - to try and force governments to change their behaviours.
Now, one could argue that it's no surprise that the USTR - which is intended to encourage free market capitalism - wouldn't like free software, but really it's not quite so straightforward.
I know open source has a tendency to be linked to socialist ideals, but I also think it's an example of the free market in action. When companies can't compete with huge, crushing competitors, they route around it and find another way to reduce costs and compete. Most FOSS isn't state-owned: it just takes price elasticity to its logical conclusion and uses free as a stick to beat its competitors with (would you ever accuse Google, which gives its main product away for free, of being anti-capitalist?).
Still, in countries where the government has legislated the adoption of FOSS, the position makes some sense because it hurts businesses like Microsoft. But that's not the end of it.
No, the really interesting thing that Guadamuz found was that governments don't even need to pass legislation. Even a recommendation can be enough.
Example: last year the Indonesian government sent around a circular to all government departments and state-owned businesses, pushing them towards open source. This, says the IIPA, "encourages government agencies to use "FOSS" (Free Open Source Software) with a view toward implementation by the end of 2011, which the Circular states will result in the use of legitimate open source and FOSS software and a reduction in overall costs of software".
Nothing wrong with that, right? After all, the British government has said it will boost the use of open source software.
But the IIPA suggested that Indonesia deserves Special 301 status because encouraging (not forcing) such takeup "weakens the software industry" and "fails to build respect for intellectual property rights".
In fact, IP enforcement is often even stricter in the open source community, where those who infringe licenses or fail to give appropriate credit are often pilloried.
If you're looking at this agog, you should be. It's ludicrous.
But the IIPA and USTR have form here: in recent years they have put Canada on the priority watchlist.
Thousands of Authors Opt Out of Google Book Settlement
Some 6,500 writers, from Thomas Pynchon to Jeffrey Archer, have opted out of Google's controversial plan to digitise millions of books
Former children's laureates Quentin Blake, Anne Fine and Jacqueline Wilson, bestselling authors Jeffrey Archer and Louis de Bernières and critical favourites Thomas Pynchon, Zadie Smith and Jeanette Winterson have all opted out of the controversial Google book settlement, court documents have revealed.
Authors who did not wish their books to be part of Google's revised settlement needed to opt out before 28 January, in advance of last week's ruling from Judge Denny Chin over whether to allow Google to go ahead with its divisive plans to digitise millions of books. The judge ended up delaying his ruling, after receiving more than 500 written submissions, but court documents related to the case show that more than 6,500 authors, publishers and literary agents have opted out of the settlement.
As well as the authors named above, these include the estates of Rudyard Kipling, TH White, James Herriot, Nevil Shute and Roald Dahl, Man Booker prizewinners Graham Swift and Keri Hulme, poets Pam Ayres, Christopher Middleton, Gillian Spraggs and Nick Laird, novelists Bret Easton Ellis, James Frey, Monica Ali, Michael Chabon, Philip Hensher and Patrick Gale, historian Simon Sebag Montefiore, biographer Victoria Glendinning and bestselling author of the Northern Lights trilogy Philip Pullman.
Ursula K Le Guin, who gained significant author support for her petition calling for "the principle of copyright, which is directly threatened by the settlement, [to] be honoured and upheld in the United States", also opted out.
"My feelings were, in the end, that I doubted I would lose out by opting out, whereas I might do by opting in. Also there was the principle that copyright is important," said novelist Marika Cobbold, author of books including Guppies for Tea and Shooting Butterflies, who opted out. "It would be like handing over my babies to a babysitter I'd never met, [and] I couldn't understand what was in it for me. I love Google, and in principle making information accessible is wonderful, but things are moving so fast, and authors are losing so much control over what we've done, that my fear was who knows, in five to 10 years' time, how this information could be used?"
Gillian Spraggs has also set up a new group that will campaign in support of authors' rights. For "UK authors and agents who are deeply concerned about the Google book settlement, the Digital Economy Bill, and other current threats to the fundamental principles of copyright", its manifesto states that "authors have the right to have their intellectual property protected by the state [and] decide whether and where they are going to publish, and in what format(s)".
"The [Google books settlement] is in some trouble in the States. Following serious criticisms from the US Department of Justice, there are big questions over whether the court will approve it, and if it does, in what form," writes Spraggs. "But if authors in Britain don't make their voices heard now, they may find that a similar scheme (or a worse one) has been imposed over here by government decree." Her group, Action on Authors' Rights, "aims to bring home to the UK government and opposition the well-founded concerns of UK authors about the Google book settlement and the Digital Economy Bill, and to have an input into the debate on digitisation and copyright in Europe", she said.
"I decided to opt out of the Google book settlement on the advice of my agency, David Higham Associates, and on the advice of Gill Spraggs, who had read the small print. Then I was inspired to read the small print too, and I didn't like what I found. Google's preemptive action has 'turned copyright law on its head'. It seems they plan, unilaterally, to take ownership away from the writer, and the ownership doesn't pass to the readers (fat chance!) but to a giant profit-making corporation. A vast entity allegedly intent on 'doing nothing evil' has simply decided this will be so, and then hired a fleet of lawyers to make it happen," said award-winning science fiction author Gwyneth Jones. "The danger to me, and every other writer, is not that our works will be available free online (I offer most of my recent novels free online already. These 'portable document format' novels are the text as I wrote it, and they do my sales no harm at all). The danger of the digital 'publishing' corporations is their unprecedented access to billions of tiny payments, for product that costs them effectively nothing, at their point of entry. This seems to mean they don't have to worry about any form of resistance at all. I don't like the sound of that, not from anybody's point of view."
Doubts Raised on Book’s Tale of Atom Bomb
William J. Broad
A new book about the atomic destruction of Hiroshima has won critical acclaim with its heartbreaking portrayals of the bomb’s survivors and is set to be made into a movie by James Cameron.
“The Last Train from Hiroshima,” published in January by Henry Holt, also claims to reveal a secret accident with the atom bomb that killed one American and irradiated others and greatly reduced the weapon’s destructive power.
There is just one problem. That section of the book and other technical details of the mission are based on the recollections of Joseph Fuoco, who is described as a last-minute substitute on one of the two observation planes that escorted the Enola Gay.
But Mr. Fuoco, who died in 2008 at age 84 and lived in Westbury, N.Y., never flew on the bombing run, and he never substituted for James R. Corliss, the plane’s regular flight engineer, Mr. Corliss’s family says. They, along with angry ranks of scientists, historians and veterans, are denouncing the book and calling Mr. Fuoco an impostor.
Facing a national outcry and the Corliss family’s evidence, the author, Charles Pellegrino, now concedes that he was probably duped. In an interview on Friday, he said he would rewrite sections of the book for paperback and foreign editions.
“I’m stunned,” Mr. Pellegrino said. “I liked and admired the guy. He had loads and loads of papers, and photographs of everything.”
The public record has to be repaired, he added. “You can’t have wrong history going out,” he said. “It’s got to be corrected.”
Mr. Corliss died in 1999, but his family preserved the documentary evidence of his participation in the historic flight, including an air medal from President Harry S. Truman. “We’re so distraught,” Ethel D. Corliss, Mr. Corliss’s widow, said in an interview. “Thank God he’s not alive. He was so proud.”
The unnamed B-29 bomber at the center of the uproar flew escort on the bombing run on Aug. 6, 1945, and photographed the mushroom cloud. According to the book, Mr. Fuoco became the bomber’s flight engineer at the last minute when Mr. Corliss fell ill, and he made detailed observations of Hiroshima’s destruction from his seat as the plane’s flight engineer.
Not so, say the two surviving members of the flight crew.
Russell Gackenbach, the flight’s navigator, called Mr. Corliss a good friend. “From my seat in the airplane, we could shake hands,” he said in an interview. “There’s no way on earth Corliss was not on that mission.”
The book rose to No. 24 on the New York Times list of hardcover nonfiction and was praised by Publishers Weekly in a starred review as wise, informed and heart-stopping. The New York Times called it “sober and authoritative.”
The book focuses mainly on survivor tales, with the bomb problems an added drama. It credits Mr. Fuoco with solving a top-secret puzzle involving what he claimed was an accident with the Hiroshima weapon, known as Little Boy, as it was being readied at an air base on Tinian, an island in the Western Pacific.
A burst of radiation killed a young scientist, the book says, and damage to the nuclear fuel assembly cut the bomb’s destructive power by more than half. The book repeatedly calls the weapon “a dud.”
“Joe Fuoco brought all the puzzle pieces together,” Mr. Pellegrino said in a comment on Amazon.com.
Mr. Fuoco’s assertions, however, have upset the Los Alamos weapons laboratory in New Mexico, the birthplace of the bomb. It says the device suffered no accident and no technical failures. Its initial blast killed an estimated 70,000 people.
The book’s claims, said Alan Carr, the official historian at Los Alamos, read “more like a technically dubious piece of fiction than a historical rendering of actual events.”
“This book is a Toyota,” said Robert S. Norris, the author of “Racing for the Bomb” and an atomic historian. “The publisher should recall it, issue an apology and fix the parts that endanger the historical record.”
Late last month, the newsletter of the 509th Composite Group — which flew three B-29 bombers over Hiroshima on the atomic run, one to drop the bomb and the others to document the effects — denounced the book and offered to help Mr. Cameron make a “historically accurate film about these important events.”
The newsletter called Mr. Fuoco an impostor, adding that his name appeared nowhere in the unit’s records. “Any claims by him,” it said, “are completely fraudulent.”
Mr. Pellegrino is the author or co-author of more than a dozen books. They include science fiction, “Unearthing Atlantis,” “The Jesus Family Tomb” and two books on the Titanic. Mr. Cameron used those books as sources for his Titanic movie and received help from Mr. Pellegrino on “Avatar,” according to the publisher. It says Mr. Pellegrino has a Ph.D. in zoology and lives in New York City.
In the interview, Mr. Pellegrino expressed shock and remorse, quickly conceding that the weight of evidence suggested that Mr. Fuoco had never flown over Hiroshima on the bombing run. He said it appeared, however, that Mr. Fuoco did fly reconnaissance missions over Hiroshima before and after the bombing.
The family of Mr. Corliss provided The New York Times with a copy of the air medal order that lists him and the other crew members. It is dated Sept. 14, 1945 — days after the formal Japanese surrender. “By direction of the president,” it begins, going on to cite the men for “meritorious achievement,” saying each “was well aware of the great danger involved.”
The family also provided several pages of Mr. Corliss’s handwritten descriptions of what he did and saw as the flight engineer. “When the bomb went off it was so bright that I had to squint,” Mr. Corliss wrote. His plane, he added, kept circling the mushroom cloud. “All the time it was churning all around, sometimes inside out, with red, yellow, purple and brown colors” as the firestorm sucked up cars and buildings, bodies and dirt.
The family also supplied a military sheet that gave the bomber’s weight and balance on the day of the bombing run. It was signed by Mr. Corliss.
Mr. Corliss’s family says he also served in Korea and Vietnam and retired from the Air Force in 1967 as a master sergeant. He then worked in heating and air-conditioning.
An Air Force spokesman, Capt. Ian Phillips, said Friday that its history office had several documents that listed Mr. Corliss as the flight engineer on the photographic escort plane for the Hiroshima mission. “There is no mention,” he added, “of Joseph Fuoco being assigned to the 509th.”
In an interview, Mr. Fuoco’s widow, Claire, defended her husband as honest and true.
“That’s a lot of baloney,” she said of the impostor charge. “He couldn’t make up such a thing. I always called him a Boy Scout.” She said she had no documentary evidence of his participation in the Hiroshima flight.
Mr. Gackenbach, the flight’s navigator, said the misrepresentations of Mr. Fuoco were unusual only in that they showed up in a book. He said many former servicemen had falsely claimed to have flown over Hiroshima on the famous bombing run.
If all of them had actually been there, Mr. Gackenbach added, the aircraft “could never have taken off.”
Triumph of the Cyborg Composer
David Cope's software creates beautiful, original music. Why are people so angry about that?
The office looks like the aftermath of a surrealistic earthquake, as if David Cope’s brain has spewed out decades of memories all over the carpet, the door, the walls, even the ceiling. Books and papers, music scores and magazines are all strewn about in ragged piles. A semi-functional Apple Power Mac 7500 (discontinued April 1, 1996) sits in the corner, its lemon-lime monitor buzzing. Drawings filled with concepts for a never-constructed musical-radio-space telescope dominate half of one wall. Russian dolls and an exercise bike, not to mention random pieces from homemade board games, peek out from the intellectual rubble. Above, something like 200 sets of wind chimes from around the world hang, ringing oddly congruent melodies.
And in the center, the old University of California, Santa Cruz, emeritus professor reclines in his desk chair, black socks pulled up over his pants cuffs, a thin mustache and thick beard lending him the look of an Amish grandfather.
It was here, half a dozen years ago, that Cope put Emmy to sleep. She was just a software program, a jumble of code he’d originally dubbed Experiments in Musical Intelligence (EMI, hence “Emmy”). Still — though Cope struggles not to anthropomorphize her — he speaks of Emmy wistfully, as if she were a deceased child.
Emmy was once the world’s most advanced artificially intelligent composer, and because he’d managed to breathe a sort of life into her, he became a modern-day musical Dr. Frankenstein. She produced thousands of scores in the style of classical heavyweights, scores so impressive that classical music scholars failed to identify them as computer-created. Cope attracted praise from musicians and computer scientists, but his creation raised troubling questions: If a machine could write a Mozart sonata every bit as good as the originals, then what was so special about Mozart? And was there really any soul behind the great works, or were Beethoven and his ilk just clever mathematical manipulators of notes?
Cope’s answers — not much, and yes — made some people very angry. He was so often criticized for these views that colleagues nicknamed him “The Tin Man,” after the Wizard of Oz character without a heart. For a time, such condemnation fueled his creativity, but eventually, after years of hemming and hawing, Cope dragged Emmy into the trash folder.
This month, he is scheduled to unveil the results of a successor effort that’s already generating the controversy and high expectations that Emmy once drew. Dubbed “Emily Howell,” the daughter program aims to do what many said Emmy couldn’t: create original, modern music. Its compositions are innovative, unique and — according to some in the small community of listeners who’ve heard them performed live — superb.
With Emily Howell, Cope is, once again, challenging the assumptions of artists and philosophers, exposing revered composers as unknowing plagiarists and opening the door to a world of creative machines good enough to compete with human artists. But even Cope still wonders whether his decades of innovative, thought-provoking research have brought him any closer to his ultimate goal: composing an immortal, life-changing piece of music.
Cope’s earliest memory is looking up at the underside of a grand piano as his mother played. He began lessons at the age of 2, eventually picking up the cello and a range of other instruments, even building a few himself. The Cope family often played “the game” — his mother would put on a classical record, and the children would try to divine the period, the style, the composer and the name of works they’d read about but hadn’t heard. The music of masters like Rachmaninov and Stravinsky instilled in him a sense of awe and wonder.
Nothing, though, affected Cope like Tchaikovsky’s Romeo and Juliet, which he first heard around age 12. Its unconventional chord changes and awesome Sturm und Drang sound gave him goose bumps. From then on, he had only one goal: writing a piece that some day, somewhere, would move some child the same way Tchaikovsky moved him. “That, just simply, was the orgasm of my life,” Cope says.
He begged his parents to pay for the score, brought it home and translated it to piano; he studied intensely and bought theory books, divining, scientifically, what made it work. It was then he knew he had to become a composer.
Cope sailed through music schooling at Arizona State University and the University of Southern California, and by the mid-1970s, he had settled into a tenured position at Miami University of Ohio’s prestigious music department. His compositions were performed in Carnegie Hall and The Kennedy Center for the Performing Arts, and internationally from Lima, Peru, to Bialystok, Poland. He built a notable electronic music studio and toured the country, wowing academics with demonstrations of the then-new synthesizer. He was among the foremost academic authorities on the experimental compositions of the 1960s, a period during which a fired-up jet engine and sounds derived from placing electrodes on plants were considered music.
When Cope moved to UC Santa Cruz in 1977 to take a position in its music department, he could’ve put his career on autopilot and been remembered as a composer and author. Instead, a brutal case of composer’s block sent him on a different path.
In 1980, Cope was commissioned to write an opera. At the time, he and his wife, Mary (also a Santa Cruz music faculty member), were supporting four children, and they’d quickly spent the commission money on household essentials like food and clothes. But no matter what he tried, the right notes just wouldn’t come. He felt he’d lost all ability to make aesthetic judgments. Terrified and desperate, Cope turned to computers.
Along with his work on synthesis, or using machines to create sounds, Cope had dabbled in the use of software to compose music. Inspired by the field of artificial intelligence, he thought there might be a way to create a virtual David Cope software to create new pieces in his style.
The effort fit into a long tradition of what would come to be called algorithmic composition. Algorithmic composers use a list of instructions — as opposed to sheer inspiration — to create their works. During the 18th century, Joseph Haydn and others created scores for a musical dice game called Musikalisches Würfelspiel, in which players rolled dice to determine which of 272 measures of music would be played in a certain order. More recently, 1950s-era University of Illinois researchers Lejaren Hiller and Leonard Isaacson programmed stylistic parameters into the Illiac computer to create the Illiac Suite, and Greek composer Iannis Xenakis used probability equations. Much of modern popular music is a sort of algorithm, with improvisation (think guitar solos) over the constraints of simple, prescribed chord structures.
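The dice-game idea above can be sketched in a few lines of code. This is a hypothetical illustration, not a transcription of any historical table: the measure labels and table sizes are invented, and the real Würfelspiel tables mapped specific dice totals to specific engraved measures.

```python
import random

def roll_two_dice(rng):
    """Sum of two six-sided dice, giving a total from 2 to 12."""
    return rng.randint(1, 6) + rng.randint(1, 6)

def compose_minuet(measure_tables, seed=None):
    """Pick one candidate measure per position by dice roll."""
    rng = random.Random(seed)
    piece = []
    for table in measure_tables:
        roll = roll_two_dice(rng)      # 2..12
        piece.append(table[roll - 2])  # maps to index 0..10
    return piece

# Toy tables: 16 positions, each with 11 candidate measures
# (one per possible dice total). Labels are placeholders.
tables = [[f"m{pos}.{choice}" for choice in range(11)] for pos in range(16)]

piece = compose_minuet(tables, seed=42)
print(piece)  # a 16-measure sequence, one label per position
```

The point of the game, and of algorithmic composition generally, is that the composer's craft lives in the tables (the instructions), not in the individual rolls.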
Few of Cope’s major works, save a dalliance with Navajo-style compositions, had strayed far from classical music, so he wasn’t a likely candidate to rely on software to write. But he did have an engineer’s mind, composing using note-card outlines and a level of planning that’s rare among free-spirited musicians. He even claims to have created his first algorithmic composition in 1955, instigated by the singing of wind over guide wires on a radio tower.
Cope emptied Santa Cruz’s libraries of books on artificial intelligence, sat in on classes and slowly learned to program. He built simple rules-based software to replicate his own taste, but it didn’t take long before he realized the task was too difficult. He turned to a more realistic challenge: writing chorales (four-part vocal hymns) in the style of Johann Sebastian Bach, a childhood favorite. After a year’s work, his program could compose chorales at the level of a C-student college sophomore. It was correctly following the rules, smoothly connecting chords, but it lacked vibrancy. As AI software, it was a minor triumph. As a method of producing creative music, it was awful.
Cope wrestled with the problem for months, almost giving up several times. And then one day, on the way to the drug store, Cope remembered that Bach wasn’t a machine — once in a while, he broke his rules for the sake of aesthetics. The program didn’t break any rules; Cope hadn’t asked it to.
The best way to replicate Bach’s process was for the software to derive his rules — both the standard techniques and the behavior of breaking them. Cope spent months converting 300 Bach chorales into a database, note by note. Then he wrote a program that segmented the bits into digital objects and reassembled them the way Bach tended to put them together.
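A toy version of that segment-and-reassemble idea can be sketched as a transition table over chord symbols. This is a heavily simplified sketch under the assumption that recombination amounts to reusing observed chord-to-chord successions; Cope's database stored much richer note-level structural objects, and the chord sequences here are invented placeholders.

```python
import random
from collections import defaultdict

def build_transitions(corpus):
    """Record which symbol tends to follow which, across all pieces."""
    followers = defaultdict(list)
    for piece in corpus:
        for a, b in zip(piece, piece[1:]):
            followers[a].append(b)
    return followers

def recombine(followers, start, length, seed=None):
    """Walk the transition table to reassemble a new sequence."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        options = followers.get(out[-1])
        if not options:
            break  # dead end: no observed continuation
        out.append(rng.choice(options))
    return out

# Two tiny "pieces" expressed as Roman-numeral chord symbols.
corpus = [["I", "IV", "V", "I"], ["I", "vi", "IV", "V", "I"]]
table = build_transitions(corpus)
print(recombine(table, "I", 8, seed=1))
```

Every transition in the output was observed somewhere in the corpus, which is exactly why naive recombination sounds locally plausible but, as Cope found, can wander without an overall logic.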
The results were a great improvement. Yet as Cope tested the recombinant software on Bach, he noticed that the music would often wander and lack an overall logic. More important, the output seemed to be missing some ineffable essence.
Again, Cope hit the books, hoping to discover research into what that something was. For hundreds of years, musicologists had analyzed the rules of composition at a superficial level. Yet few had explored the details of musical style; their descriptions of terms like “dynamic,” for example, were so vague as to be unprogrammable. So Cope developed his own types of musical phenomena to capture each composer’s tendencies — for instance, how often a series of notes shows up, or how a series may signal a change in key. He also classified chords, phrases and entire sections of a piece based on his own grammar of musical storytelling and tension and release: statement, preparation, extension, antecedent, consequent. The system is analogous to examining the way a piece of writing functions. For example, a word may be a noun in preparation for a verb, within a sentence meant to be a declarative statement, within a paragraph that’s a consequent near the conclusion of a piece.
Finally, Cope’s program could divine what made Bach sound like Bach and create music in that style. It broke rules just as Bach had broken them, and made the result sound musical. It was as if the software had somehow captured Bach’s spirit — and it performed just as well in producing new Mozart compositions and Shakespeare sonnets. One afternoon, a few years after he’d begun work on Emmy, Cope clicked a button and went out for a sandwich, and she spit out 5,000 beautiful, artificial Bach chorales, work that would’ve taken him several lifetimes to produce by hand.
When Emmy’s Bach pieces were first performed, at the University of Illinois at Urbana-Champaign in 1987, they were met with stunned silence. Two years later, a series of performances at the Santa Cruz Baroque Festival was panned by a music critic — two weeks before the performance. When Cope played “the game” in front of an audience, asking which pieces were real Bach and which were Emmy-written Bach, most people couldn’t tell the difference. Many were angry; few understood the point of the exercise.
Cope tried to get Emmy a recording contract, but classical record companies said, “We don’t do contemporary music,” and contemporary record companies said the opposite. When he finally did land a deal, no musician would play the music. He had to record it with a Disklavier (a modern player piano), a process so taxing he nearly suffered a nervous breakdown.
Though musicians and composers were often skeptical, Cope soon attracted worldwide notice, especially from scientists interested in artificial intelligence and the small, promising field called artificial creativity. Other “AC” researchers have written programs that paint pictures; that tell Mexican folk tales or write detective novels; and that come up with funny jokes. They have varying goals, though most seek to better understand human creativity by modeling it in a machine.
To many in the AC community, including the University of Sussex’s Margaret Boden, doyenne of the field, Emmy was an incredible accomplishment. There’s a test, named for World War II-era British computer scientist Alan Turing, that’s a simple check for so-called artificial intelligence: whether or not a person interacting with a machine and a human can tell the difference. Given its success in “the game,” it could be argued that Emmy passed the Turing Test.
Cope had taken an unconventional approach. Many artificial creativity programs use a more sophisticated version of the method Cope first tried with Bach. It’s called intelligent misuse — they program sets of rules, and then let the computer introduce randomness. Cope, however, had stumbled upon a different way of understanding creativity.
In his view, all music — and, really, any creative pursuit — is largely based on previously created works. Call it standing on the shoulders of giants; call it plagiarism. Everything we create is just a product of recombination.
In Cope’s fascinating hovel of a home office on a Wednesday afternoon, I ask him how exactly he knows that’s true. Just because he built a program that can write music using his model, how can he be so certain that that’s the way man creates?
Cope offers a simple thought experiment: Put aside the idea that humans are spiritually and creatively endowed, because we’ll probably never fully be able to understand that. Just look at the zillions of pieces of music out there.
“Where are they going to come up with sounds that they themselves create without hearing them first?” he asks. “If they’re hearing them for the first time, what’s the author of them? Is it birds, is it airplane sounds?”
Of course, some composers probably have taken dictation from birds. Yet the most likely explanation, Cope believes, is that music comes from other works composers have heard, which they slice and dice subconsciously and piece together in novel ways. How else could a style like classical music last over three or four centuries?
To prove his point, Cope has even reverse-engineered works by famous composers, tracing the tropes, phrases and ideas back to compositions by their forebears.
“Nobody’s original,” Cope says. “We are what we eat, and in music, we are what we hear. What we do is look through history and listen to music. Everybody copies from everybody. The skill is in how large a fragment you choose to copy and how elegantly you can put them together.”
Cope’s claims, taken to their logical conclusions, disturb a lot of people. One of them is Douglas Hofstadter, a Pulitzer Prize-winning cognitive scientist at Indiana University and a reluctant champion of Cope’s work. As Hofstadter has recounted in dozens of lectures around the globe during the past two decades, Emmy really scares him.
Like many arts aficionados, Hofstadter views music as a fundamental way for humans to communicate profound emotional information. Machines, no matter how sophisticated their mathematical abilities, should not be able to possess that spiritual power. As he wrote in Virtual Music, an anthology of debates about Cope’s research, Hofstadter worries Emmy proves that “things that touch me at my deepest core — pieces of music most of all, which I have always taken as direct soul-to-soul messages — might be effectively produced by mechanisms thousands if not millions of times simpler than the intricate biological machinery that gives rise to a human soul.”
I ask Cope whether Emmy bothers him. This is a man who averages about four daily hours of hardcore music listening, who’s touched so deeply by a handful of notes on the piano as to shut his eyes in reverie.
“I can understand why it’s an issue if you’ve got an extremely romanticized view of what art is,” he says. “But Bach peed, and he shat, and he had a lot of kids. We’re all just people.”
As Cope sees it, Bach merely had an extraordinary ability to manipulate notes in a way that made people who heard his music have intense emotional reactions. He describes his sometimes flabbergasting conversations with Hofstadter: “I’d pull down a score and say, ‘Look at this. What’s on this page?’ And he’d say, ‘That’s Beethoven, that’s music of great spirit and great soul.’ And I’d say, ‘Wow, isn’t that incredible! To me, it’s a bunch of black dots and black lines on white paper! Where’s the soul in there?’”
Cope thinks the old cliché of beauty in the eye of the beholder explains the situation well: “The dots and lines on paper are merely triggers that set things off in our mind, do all the wonderful things that give us excitement and love of the music, and we falsely believe that somewhere in that music is the thing we’re feeling,” he says. “I don’t know what the hell ’soul’ is. I don’t know that we have any of it. I’m looking to get off on life. And music gets me off a lot of the time. I really, really, really am moved by it. I don’t care who wrote it.”
He does, of course, see Emmy as a success. He just thinks of her as a tool. Everything Emmy created, she created because of software he devised. If Cope had infinite time, he could have written 5,000 Bach-style chorales. The program just did it much faster.
“All the computer is is just an extension of me,” Cope says. “They’re nothing but wonderfully organized shovels. I wouldn’t give credit to the shovel for digging the hole. Would you?”
Cope has a complex relationship with his critics, and with people like Hofstadter who are simultaneously awed and disturbed by his work. He denounces some as focused on the wrong issues. He describes others as racists, prejudiced against all music created by a computer. Yet he thrives on the controversy. If not for the harsh reaction to the early Bach chorales, Cope says, he probably would have abandoned the project. Instead, he decided to “ram Emmy down their throats,” recording five more albums of the software’s compositions, including an ambitious Rachmaninov concerto that nearly led to another nervous breakdown from lack of sleep and overwork.
For the next decade, he fed off the anger and confusion and kudos from colleagues and admirers. Years after the 1981 opera was to be completed, Cope fed a database of his own works into Emmy. The resulting score was performed to the best reviews of his life. Emmy's principles of recombination and pattern recognition were adapted by architects and stock traders, and Cope experienced a brief burst of fame in the late 1990s, when The New York Times and a handful of other publications highlighted his work. Insights from Emmy percolated through the literature of musical style and creativity — particularly Emmy's proof-by-example that a common grammar and language underlie almost all music, from Asian to Western classical styles. Eleanor Selfridge-Field, senior researcher at Stanford University's Center for Computer Assisted Research in the Humanities, likens Cope's discoveries to the findings from molecular biology that altered the field of biology.
“He has revealed a lot of essential elements of musical style, and the definition of musical works, and of individual contributions to the evolution of music, that simply haven’t been made evident by any other process,” she says. “That really is an important contribution to our understanding of music, revealing some things that are really worth knowing.”
Nevertheless, by 2004, Cope had received too many calls from well-known musicians who wanted to perform Emmy’s compositions but felt her works weren’t “special” enough. He’d produced more than 1,000 works in the styles of several composers, an endless spigot of material that rendered each one almost commonplace. He feared his Emmy work made him another Vivaldi, the famous composer often criticized for writing the same pieces over and over again. Cope, too, felt Emmy had cheated him out of years of productivity as a composer.
“I knew that, eventually, Emmy was going to have to die,” he says. Over the course of weeks, Cope found every copy of the many databases that made up Emmy and trashed them. He saved a slice of the data and the Emmy program itself, so he could demonstrate it for academic purposes, and he saved the scores she wrote, so others could play them. But he’d never use Emmy to write again. She was gone.
For years, Cope had been experimenting with a different kind of virtual composer. Instead of software based on re-creation, he hoped to build something with its own personality.
This program would write music in an odd sort of way. Instead of spitting out a full score, it converses with Cope through the keyboard and mouse. He asks it a musical question, feeding in some compositions or a musical phrase. The program responds with its own musical statement. He says “yes” or “no,” and he’ll send it more information and then look at the output. The program builds what’s called an association network — certain musical statements and relationships between notes are weighted as “good,” others as “bad.” Eventually, the exchange produces a score, either in sections or as one long piece.
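The article describes the feedback loop only in outline, and Cope has historically worked in Lisp; purely as an illustration of the idea — weighting musical statements up or down as the user answers “yes” or “no,” then generating from the weights — here is a hypothetical, much-simplified Python sketch. The class name, the note spellings, and the transition-level representation are all assumptions, not Cope’s actual design.

```python
import random

class AssociationNetwork:
    """Toy model of the conversational loop the article describes:
    note-to-note transitions gain or lose weight as the user
    approves or rejects musical statements."""

    def __init__(self):
        self.weights = {}  # (note, next_note) -> weight

    def observe(self, phrase):
        # Seed the network with transitions from an input phrase.
        for a, b in zip(phrase, phrase[1:]):
            self.weights.setdefault((a, b), 1.0)

    def feedback(self, phrase, good):
        # Reinforce ("yes") or penalize ("no") each transition in a phrase.
        factor = 1.5 if good else 0.5
        for a, b in zip(phrase, phrase[1:]):
            if (a, b) in self.weights:
                self.weights[(a, b)] *= factor

    def respond(self, start, length=8):
        # Answer with a phrase drawn by weighted choice among known transitions.
        phrase = [start]
        for _ in range(length - 1):
            options = [(b, w) for (a, b), w in self.weights.items()
                       if a == phrase[-1]]
            if not options:
                break
            notes, ws = zip(*options)
            phrase.append(random.choices(notes, weights=ws)[0])
        return phrase

net = AssociationNetwork()
net.observe(["C4", "E4", "G4", "E4", "C4"])
net.feedback(["C4", "E4"], good=True)   # "yes" to this opening gesture
print(net.respond("C4"))
```

A real system would operate on richer statements than single transitions, but the shape of the exchange — input, judgment, reweighting, reply — is the same.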
Most of the scores Cope fed in came from Emmy, the once-removed music from history’s great composers. The results, however, sound nothing like Emmy or her forebears. “If you stick Mozart with Joplin, they’re both tonal, but the output,” Cope says, “is going to sound like something rather different.”
Because the software was Emmy’s “daughter” — and because he wanted to mess with his detractors — Cope gave it the human-sounding name Emily Howell. With Cope’s help, Emily Howell has written three original opuses of varying length and style, with another trio in development. Although the first recordings won’t be released until February, reactions to live performances and rough cuts have been mixed. One listener compared an Emily Howell work to Stravinsky; others (most of whom have heard only short excerpts online) continue to attack the very idea of computer composition, with fierce debates breaking out in Internet forums around the world.
At one Santa Cruz concert, the program notes neglected to mention that Emily Howell wasn’t a human being, and a chemistry professor and music aficionado in the audience described the performance of a Howell composition as one of the most moving experiences of his musical life. Six months later, when the same professor attended a lecture of Cope’s on Emily Howell and heard the same concert played from a recording, Cope remembers him saying, “You know, that’s pretty music, but I could tell absolutely, immediately that it was computer-composed. There’s no heart or soul or depth to the piece.”
That sentiment — present in many recent articles, blog posts and comments about Emily Howell — frustrates Cope. “Most of what I’ve heard [and read] is the same old crap,” he complains. “It’s all about machines versus humans, and ‘aren’t you taking away the last little thing we have left that we can call unique to human beings — creativity?’ I just find this so laborious and uncreative.”
Emily Howell isn’t stealing creativity from people, he says. It’s just expressing itself. Cope claims it produced musical ideas he never would have thought about. He’s now convinced that, in many ways, machines can be more creative than people. They’re able to introduce random notions and reassemble old elements in new ways, without any of the hang-ups or preconceptions of humanity.
“We are so damned biased, even those of us who spend all our lives attempting not to be biased. Just the mere fact that when we like the taste of something, we tend to eat it more than we should. We have our physical body telling us things, and we can’t intellectually govern it the way we’d like to,” he says.
In other words, humans are more robotic than machines. “The question,” Cope says, “isn’t whether computers have a soul, but whether humans have a soul.”
Cope hopes such queries will attract more composers to give his research another chance. “One of the criticisms composers had of Emmy was: Why the hell was I doing it? What’s the point of creating more music, supposedly in the style of composers who are dead? They couldn’t understand why I was wasting my time doing this,” Cope says.
That’s already changed.
“They’re seeing this now as competition for themselves. They see it as, ‘These works are now in a style we can identify as current, as something that is serious and unique and possibly competitive to our own work,’” Cope says. “If you can compose works fast that are good and that the audience likes, then this is something.”
I ask Cope whether he’s actually heard well-known composers say they feel threatened by Emily Howell.
“Not yet,” he tells me. “The record hasn’t come out.”
The following afternoon, we walk into Cope’s campus office, which seems like another college dorm room/psychic dump, with stacks of compact discs and scores growing from the floor like stalagmites, and empty plastic juice bottles scattered about. The one thing that looks brand-new is the black upright piano against the near wall.
Cope pulls up a chair, removes his Indiana Jones hat and eagerly explains the latest phase of his explorations into musical intelligence. Though he’s still poking around with Emily Howell, he’s now spending the bulk of his composition time employing on-the-fly programs.
Here’s how this cyborg-esque composing technique works: Cope comes up with an idea. For instance, he’ll want to have five voices, each of which alternates singing groups of four notes. Or perhaps he’ll want to write a piece that moves quickly from the bottom of the piano keyboard to the top, and then back down. He’ll rapidly code a program to create a chunk of music that follows those directions.
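The article’s example — a piece that sweeps from the bottom of the keyboard to the top and back down — hints at how small these throwaway programs can be. As a rough sketch only (Cope’s own quick programs are not shown, and the step size and MIDI framing here are assumptions), such a generator might look like this:

```python
# Hypothetical throwaway generator in the spirit the article describes:
# a run from the bottom of the piano (MIDI 21, A0) toward the top
# (MIDI 108, C8) and back down, stepping through a whole-tone grid.

def keyboard_sweep(low=21, high=108, step=2):
    up = list(range(low, high + 1, step))   # ascending pass
    down = up[-2::-1]                       # descend, skipping the repeated peak
    return up + down

notes = keyboard_sweep()
print(notes[:5], "...", notes[-5:])
```

Ten minutes of this kind of scripting yields a complete rough draft in notation, which is the point: the code is disposable, the musical idea is not.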
After working with Emmy and Emily Howell for nearly 30 years and composing for roughly twice as long, Cope is fast enough to hear something in his head in the bathtub, dry off and get dressed, move to the computer and 10 minutes later have a whole movement of 100 measures ready. It may not be any good, but it’s the fastest way to translate his thoughts into a solid rough draft.
“I listen with creative ears, and I hear the music that I want to hear and say, ‘You know? That’s going to be fabulous,’ or ‘You know … ‘” — he makes a spitting noise — “‘in the toilet.’ And I haven’t lost much, even though I’ve got a whole piece that’s in notation immediately.”
He compares the process to a sculptor who chops raw shapes out of a block of marble before he teases out the details. Using quick-and-dirty programs as an extension of his brain has made him extraordinarily prolific. It’s a process close to what he was hoping for back when he first started working on software to save him from composer’s block.
As complex as Cope’s current method is, he believes it heralds the future of a new kind of musical creation: armies of computers composing (or helping people compose) original scores.
“I think it’s going to happen,” Cope says. “I don’t believe that composers are stupid people. Ultimately, they’re going to use any tool at their disposal to get what they’re after, which is, after all, good music they themselves like to listen to. There will be initial withdrawal, but eventually it’s going to happen — whether we want it to or not.”
Already, at least one prominent pop group — he’s signed a confidentiality agreement, so he can’t say which one — has asked him to use software to help them write new songs. He also points to services like Pandora, which uses algorithms to suggest new music to listeners.
If Cope’s vision does come true, it won’t be due to any publicity efforts on his part. He’ll answer questions from anyone, but he refuses to proactively promote his ideas. He still hasn’t told most of his colleagues or close friends about Tinman, a memoir he clandestinely published last year. The attitude, which he settled on at a young age, is to “treat myself as if I’m dead,” so he won’t affect how his work is received. “If you have to promote it to get people to like it,” he asks, “then what have you really achieved?”
Cope has sold tens of thousands of books, had his works performed in prestigious venues and taught many students who evangelize his ideas around the world. Yet he doesn’t think it adds up to much. All he ever wanted was to write something truly wonderful, and he doesn’t think that’s happened yet. As a composer, Cope laments, he remains a “frustrated loser,” confused by the fact that he burned so much time on a project that stole him away from composing. He still just wants to create that one piece that changes someone’s life — it doesn’t matter whether it’s composed by one of his programs, or in collaboration with a machine, or with pencil on a sheet of paper.
“I want that little boy or girl to have access to my music so they can play it and get the same thrill I got when I was a kid,” he says. “And if that isn’t gonna happen, then I’ve completely failed.”
Serious Threat to the Web in Italy
In late 2006, students at a school in Turin, Italy filmed and then uploaded a video to Google Video that showed them bullying an autistic schoolmate. The video was totally reprehensible and we took it down within hours of being notified by the Italian police. We also worked with the local police to help identify the person responsible for uploading it and she was subsequently sentenced to 10 months community service by a court in Turin, as were several other classmates who were also involved. In these rare but unpleasant cases, that's where our involvement would normally end.
But in this instance, a public prosecutor in Milan decided to indict four Google employees —David Drummond, Arvind Desikan, Peter Fleischer and George Reyes (who left the company in 2008). The charges brought against them were criminal defamation and a failure to comply with the Italian privacy code. To be clear, none of the four Googlers charged had anything to do with this video. They did not appear in it, film it, upload it or review it. None of them know the people involved or were even aware of the video's existence until after it was removed.
Nevertheless, a judge in Milan today convicted 3 of the 4 defendants — David Drummond, Peter Fleischer and George Reyes — for failure to comply with the Italian privacy code. All 4 were found not guilty of criminal defamation. In essence this ruling means that employees of hosting platforms like Google Video are criminally responsible for content that users upload. We will appeal this astonishing decision because the Google employees on trial had nothing to do with the video in question. Throughout this long process, they have displayed admirable grace and fortitude. It is outrageous that they have been subjected to a trial at all.
But we are deeply troubled by this conviction for another equally important reason. It attacks the very principles of freedom on which the Internet is built. Common sense dictates that only the person who films and uploads a video to a hosting platform could take the steps necessary to protect the privacy and obtain the consent of the people they are filming. European Union law was drafted specifically to give hosting providers a safe harbor from liability so long as they remove illegal content once they are notified of its existence. The belief, rightly in our opinion, was that a notice and take down regime of this kind would help creativity flourish and support free speech while protecting personal privacy. If that principle is swept aside and sites like Blogger, YouTube and indeed every social network and any community bulletin board, are held responsible for vetting every single piece of content that is uploaded to them — every piece of text, every photo, every file, every video — then the Web as we know it will cease to exist, and many of the economic, social, political and technological benefits it brings could disappear.
These are important points of principle, which is why we and our employees will vigorously appeal this decision.
Until next week,
Current Week In Review
Recent WiRs -
February 20th, February 13th, February 6th, January 30th
Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.
"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public." - Hugo Black