Peer-To-Peer News - The Week In Review – March 22nd, '08
"The whole issue of P2P needs to be addressed in Italy, but this particular decision is very strange. As far as we're concerned, it's as if a person is apprehended for shop-lifting and the authorities, instead of investigating the thief, investigate the police officers who made the arrest." – Enzo Mazza
"It's mine - you can't have it. If you want to use it for something, then you have to negotiate with me. I have to agree, I have to understand what I'm getting in return." – Tim Berners-Lee
"It was not about mocking a minority but a religious figure, the Prophet, so it was blasphemy, not racism. The idea of challenging religious authority led to liberal democracy, whereas the singling out of minorities, as minorities, led to Nazism and the persecution of the bourgeoisie in Russia. So this distinction is crucial to understand." – Flemming Rose
"Thompson’s multiple responses are rambling, argumentative, and contemptuous… What we cannot tolerate, however, is Thompson’s continued inability to maintain a minimum standard of decorum and respect for the judicial system to which all litigants, and especially attorneys, must adhere…
A thorough review of Thompson’s filings lead to one conclusion. He has abused the processes of the Court… Accordingly… the Clerk of this Court is hereby instructed to reject for filing any future pleadings, petitions, motions, documents, or other filings submitted by John Bruce Thompson, unless signed by a member in good standing of The Florida Bar other than himself." – Lewis, C.J., and Wells, Anstead, Pariente, Quince, Cantero, and Bell, JJ. – Supreme Court of Florida
"Copyright is a very big issue in the legal world today, but in the business world, when you talk to consumers about protecting copyrights, it's a dead issue. It's gone. If you have a business model based on copyright, forget it." – Gerry Faulhaber
March 22nd, 2008
Italian File-Sharers Let Off The Hook
Italian companies may not spy on individuals who engage in illegal file-sharing, according to a controversial new ruling.
The ruling by Francesco Pizzetti, president of the official Italian body for Guaranteeing the Protection of Private Data, follows an attempt by the German record label Peppermint, which last year began using the Swiss computer firm Logistep to gather the IP addresses of at least 300 Italians who were illegally sharing files.
An Italian magistrate granted the companies permission to obtain the street addresses of the file-sharers from Internet service providers and send them registered letters, inviting them to destroy the files in question or else face hefty fines.
Italian consumer rights groups protested against the decision and the case was brought to the attention of the Guarantor, who handed down the ruling.
Italian consumer groups have welcomed the decision, but Italy's record labels have expressed their disappointment.
"The whole issue of P2P needs to be addressed in Italy, but this particular decision is very strange," Enzo Mazza, president of major representative body FIMI, tells Billboard.biz. "As far as we're concerned, it's as if a person is apprehended for shop-lifting and the authorities, instead of investigating the thief, investigate the police officers who made the arrest."
Israel Rebukes US: Our Copyright Laws are Fine, Thanks
Israel wants the US government to know that it won't implement laws banning the circumvention of DRM and it won't rewrite its ISP safe harbor rules; furthermore, neither of these issues should have any effect on trade relations between the two countries.
The Israeli filing (PDF) made to the US Trade Representative comes a month after the International Intellectual Property Alliance called out numerous countries around the world for not living up to the IIPA's vision of the ideal copyright enforcement regime. Canada came in for a thorough trouncing, and Israel was also subject to criticism that it wasn't doing enough on copyright.
The IIPA's comments were made as part of its "Special 301" report to the US Trade Representative. Private groups like the IIPA submit reports to the US government, which eventually decides whether to place other countries on watch lists or apply trade penalties. Israel has no intention of remaining on its current watch list, and the filing has an irritated tone to it.
The reason for the irritation is that Israel thinks it has done plenty to help copyright owners. In 2007, it overhauled its copyright law, increasing the maximum statutory damages that can be collected for infringement five-fold. In addition, Israel added a "making available" right and clarified that a copyright owner's right of reproduction includes even temporary copies.
But the IIPA wanted more. Specifically, it wanted to see rules surrounding DRM and a safe harbor law that is friendlier to content owners. As Israel points out, though, it is not a signatory to the two WIPO treaties that mention DRM, and it notes that even content owners have different approaches to the issue.
"The critiques and criticism of TPM [technological protection measures] both from business model perspectives and from copyright perspectives are almost endless," says the Israeli response. "Indeed, some content providers are already experimenting with nonencrypted access to content. Hence, the question of whether and in what manner to implement TPM is not straightforward and politically volatile."
When it comes to ISP safe harbors, Israel has a notice and takedown system that lets copyright owners notify an ISP regarding infringing material on its servers. In such cases, the user hosting the material has three days to respond to the charge; if no response is received, the material comes down. The IIPA wants a system more like the DMCA where just filing a takedown notice is enough to have material removed (in the US system, a counternotice can be filed by the host to have the material put back up).
Israel objects that it is under no obligation to implement such a system, and notes that it chose the current arrangement for a reason. "A 'takedown' system which operates on the basis of a mere allegation of infringement would be an invitation to censorship and abuse of process," it says in the filing. "It is not the role of the ISP or Host to become a policeman of content. Requiring such would effectively bring the Internet to a halt."
Despite the "usual inaccuracies and hyperbole," Israel does welcome one recommendation from the IIPA report: bumping Israel from the Priority Watch List to the Watch List.
Canadian law professor Michael Geist wishes that his own government would respond this forcefully to the Special 301 process.
Demonoid Tracker Moves to Ukraine
Demonoid, once one of the most popular BitTorrent trackers, has reappeared, this time hosted in Ukraine. The website is still down, but the trackers are now fully operational again, perhaps a sign that Demonoid is crawling back up to speed.
In June 2007 Demonoid was pressured to leave its host in the Netherlands, mainly because of legal threats from the Dutch anti-piracy outfit BREIN. The site then relocated to Canada, but after threats from the CRIA, it decided to shut down there as well.
A month ago we reported on the brief resurrection of the Demonoid tracker in Malaysia. At the time we hinted at the possibility that the site could perhaps be planning a comeback. Unfortunately the tracker went offline again after a few days. With no official response from the Demonoid team, it remains a mystery what the reason behind the resurrection was.
Now, a month later, the Demonoid tracker is again responding. Just over a week ago, Demonoid torrents began to work again, this time being tracked from Ukraine.
The new host of inferno.demonoid.com is the Ukrainian ISP Cocall Ltd, while the frontend of the site still remains in the US. Again, there is no official explanation for the return of the tracker, although many hope that it’s a sign that the site will be fully up and running soon.
Last December, Demonoid’s founder Deimos spoke about the future of the site: “Money is an issue, but the real problem at the moment is finding a suitable place to host the website. There has been no luck there.” Perhaps he has now found his safe harbor in Ukraine?
Ron Jeremy Takes on Porn Pirates
The legendary Ron Jeremy has had enough of video streaming sites such as YouPorn and PornTube, which host pirated versions of his epic movies for free. Jeremy says adult content deserves the same respect as Hollywood’s majors get, and is happy that Vivid Entertainment is going after these sites.
The public needs to understand that piracy is killing the adult industry, Jeremy said: “What harms the industry is the Internet. Before it was helpful. Every company had its own website. Now you have things like YouPorn and PornTube that show full-length features of Vivid’s movies. Who the f— do they think they are?”
Luckily for Jeremy, one of the world’s largest adult film producers, Vivid Entertainment Group, has had enough of these sites as well. Last December the company announced that it was taking legal action against PornoTube.com, with similar video streaming services to follow.
“Now Vivid is suing them,” said Jeremy, while demanding the same treatment as his colleagues in Hollywood. “You wouldn’t see YouTube play a full-length feature of a Steven Spielberg film. But they think that it’s just porn so they can get away with it. So now Vivid is striking back. Piracy is piracy, whether the film is PG, R or X. We deserve the same respect.”
Jeremy is not the only one upset with all the pirated clips floating around on the Internet. In September we reported that some of the leading adult webmasters were discussing how they could take on BitTorrent sites, something they haven’t succeeded in so far.
There is no doubt that adult clips are widely shared on the Internet, especially via BitTorrent. Approximately 5% of all files being shared on public BitTorrent trackers are adult content, and most of these files are copyrighted. On top of that, sites such as Empornium, PureTnA and Cheggit solely focus on sharing porn, and are among the most popular private BitTorrent trackers on the Internet.
IFPI Takes out MP3 Top Site Server
If you like the newsgroups, BitTorrent and electronic music, today's news may come as a bit of an annoyance. The IFPI (International Federation of the Phonographic Industry) today announced a successful raid against two servers in Budapest, Hungary, which served as the home of the release group RAGEMP3.
The raid also disrupted the MP3 release group XXL. According to the press statement, the IFPI, ProArt (the local copyright traffic cop), and the Hungarian police were able to infiltrate the release group's server.
What the IFPI doesn't indicate are the long term consequences for RAGEMP3 and XXL. Although one of their servers has been knocked off line, their organizational structure is likely still intact. No arrests have been made, and it's likely the servers that were raided were not the only ones used by the operation.
Top sites, as their name indicates, are the top tier of the online distribution hierarchy. Most often, it's at this point that information trickles down to the common masses. From the top sites, releases pass on to the newsgroups and BitTorrent, and finally, whatever scraps are left over are dumped into the P2P market.
Kazaa Downloads Cost One Man $750 Per Song in RIAA Suit
Even as Tanya Andersen refiled her malicious prosecution lawsuit last week, the RIAA won victories in two unrelated lawsuits. One involved a case where the defendant never showed up in court; the other, a defendant who admitted to using KaZaA to download and distribute music.
James V. Lewis was sued by the labels in August 2007 after an IP address flagged by MediaSentry was traced to his ISP account. Lewis never showed up in court, and the RIAA filed for a default judgment in October. Initially, the judge declined to give the labels what they were looking for, instead scheduling a hearing to discuss the case.
After a hearing held last week, the judge gave the RIAA what it was looking for: a default judgment in the amount of $3,000 plus an additional $420 in court costs. Lewis has also been barred from infringing on "any other sound recording, whether now in existence or later created, that is owned or controlled by the Plaintiffs."
The other case, Atlantic v. Anderson, involves a Texas resident who was sued in November 2006 for copyright infringement. Abner Anderson decided to fight the lawsuit, submitting a brief answer to the RIAA's complaint in which he did little more than deny the labels' accusations. He also said that any infringement that did take place was due to negligence on the part of the RIAA.
The RIAA moved for summary judgment in the case, arguing that Anderson's making the songs in question available on KaZaA was the same as distributing them, and that the facts of the case were indisputable.
Anderson disagreed. In his response to the RIAA's motion, he argued that the problem of illegal downloading was the result of the recording industry's own negligence. "Without an official statement, the distribution of literature from Plaintiffs, or something to inform the public of actions that constitute copyright infringement, the public could not be expected to know that using this software network was improper," argued Anderson. He also noted that he planned to challenge the constitutionality of the statutory damages sought by the RIAA, as other defendants have done.
Judge Vanessa D. Gilmore was unconvinced. In her decision, she pointed out that Anderson had admitted to downloading and using KaZaA during discovery. Furthermore, his screen name matched the one flagged by MediaSentry, and he admitted to "actively distributing" music to other KaZaA users. "Defendant concedes that he did place the subject Copyrighted Recordings in his shared folder for distribution to other users while being connected to KaZaA," she wrote in her opinion.
Judge Gilmore also touched on the RIAA's argument that making a file available over P2P constitutes infringement. "Numerous courts have assessed whether availing music and/or media for download by other users on a peer-to-peer network constitutes copyright infringement as a matter of law," wrote the judge. "Accordingly... because it has been both proven and admitted to that the Defendant intentionally downloaded and/or distributed those Copyrighted Recordings, no genuine issue of material fact remains as to Plaintiffs' claim for copyright infringement."
The RIAA was awarded $23,250, or $750 in statutory damages for each of the 31 songs named in the lawsuit, plus $420 in court costs. Judge Gilmore took issue with Anderson's argument that the damages sought by the RIAA were excessive. "Yet, the true cost of Defendant's harms in distributing Plaintiffs' Copyrighted Recordings for download by other users on KaZaA is incalculable," wrote the judge in her opinion. "That is, there is no way to ascertain the precise amount of damages caused by the Defendant's actions."
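For readers checking the math, the award follows directly from the per-song statutory figure; a quick back-of-the-envelope sketch using only the numbers reported above:

```python
# Back-of-the-envelope check of the Atlantic v. Anderson award,
# using the figures reported in this story.
per_song_damages = 750   # statutory damages awarded per infringed recording
songs_named = 31         # recordings named in the lawsuit
court_costs = 420        # additional court costs awarded

damages = per_song_damages * songs_named
total = damages + court_costs
print(damages, total)    # 23250 23670
```

This matches the $23,250 in damages plus $420 in costs described above; note that $750 is also the statutory minimum per work, so this was the smallest award the law allowed once infringement was established.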
The Anderson case may prove significant for the RIAA because of the ruling on the statutory damages question; an RIAA spokesperson said that the group would be "citing it often" in other cases. It's important to note, however, that the defendant failed to raise the "making available = infringement" question in his defense. Indeed, his admission that he knowingly set up and used KaZaA to download and share music on the P2P service may have precluded him from doing so.
Piracy Provision Aims at Universities
A House bill to make college more affordable contains a mandate that campuses develop plans to prevent illegal downloads. Schools say they're a minor part of the problem and unfairly targeted.
Colleges and universities that take part in federal financial aid programs will be under new obligations to take steps to prevent illegal downloads of music, movies and other copyrighted material if legislation overwhelmingly passed by the House last month becomes law.
A two-page portion of the 800-page College Opportunity and Affordability Act (H.R. 4137) has raised alarms in the higher-education community. It would hold schools disproportionately responsible, education groups say, for activities that take place mostly off-campus.
"More than 80% of students live off-campus and use commercial networks," not school networks, said Steve Worona, director of policy and networking for Educause, a nonprofit that focuses on information technology in higher education.
Universities go well past the minimum legal requirements to dissuade piracy by requiring students to sign copyright-law notifications, Worona argued, yet the commercial networks where the "vast majority" of illegal downloads occur "do nothing beyond it -- and for some reason we're the ones targeted."
The main purpose of the legislation, which the House approved 354-58, is to make college more affordable to low- and middle-income families.
But the anti-piracy provision could increase student costs, Worona said. It would mandate that schools develop plans to offer alternatives to illegal downloading and to explore technological deterrents.
Worona called the mandate "expensive, ineffective, inappropriate and unnecessary" and expressed concern that schools could be penalized for failing to come up with such plans.
Rep. Steve Cohen (D-Tenn.) had intended to introduce an amendment that no school failing to devise plans "shall be denied or given reduced federal funding," but after tornadoes struck his state he returned to his district to deal with the aftermath -- and missed a key procedural vote to attach his language to the bill.
Educational institutions' arguments don't sway representatives of the artists who receive no royalties from illegal file-sharing.
"Piracy hurts ordinary working musicians, but it also will hurt our nation's culture and its music fans if enough talented and hard-working musicians cannot survive in the business," American Federation of Musicians President Thomas F. Lee said in a letter to the House Committee on Education and Labor in support of the provision.
The Motion Picture Assn. of America, which also supports the measure, in earlier congressional testimony cited a 2005 study to claim that 44%, or about $572 million, of industry losses came from students using college networks. But in late January the MPAA, acknowledging "human error," lowered the proportion to 15%, or about $195 million.
"I have no doubt that the exceptional size of this [initial] number contributed significantly to the sense of urgency in dealing with college students," Educause Vice President Mark Luker said in an interview. Even 15% is too high, he said: Adjusting for the fact that most students are off-campus, he considered 3% a more reasonable estimate.
UCLA's director of strategic policy for information technology, Kent Wada, agreed with Luker's 3% estimate, adding: "Consider also that we see the behavior and values associated with illegal file-sharing already largely developed by the time students arrive at college."
Research by USC's John Heidemann, an Information Sciences Institute associate professor, bore out their estimates. After hearing the MPAA's initial claim, he monitored file-sharing on USC's network for 14 hours and found 3% to 13% of users using peer-to-peer technology. (USC was among only a few schools to conduct research and not rely solely on the MPAA's numbers.)
The Recording Industry Assn. of America, which also supports the bill, has subpoenaed numerous universities in recent years over piracy issues, asking schools to identify students who were illegally distributing songs on file-sharing networks.
But in the last few months, several universities have fought back.
In the most prominent case, the University of Oregon moved in November to have a subpoena dismissed. The school accused the industry of misleading the judge, violating students' privacy rights and engaging in questionable investigative practices.
The latter charge involves MediaSentry, an Internet service used by the RIAA to obtain user information from file-sharing networks. Some states, including Oregon, require private investigators to have a license, which MediaSentry lacks.
The RIAA says MediaSentry isn't a private investigator.
The case is pending.
According to Luker, all universities have explicit policies against copyright infringement on campus networks that students must sign each year. And "quite a few" schools, he said, have sponsored subscriptions to legal downloading services such as Napster or the Ruckus Network, with the cost passed on to students.
"The corresponding costs must be charged back to the students, ultimately, through tuition or fees, raising the cost of higher education," he said.
UCLA and USC say they have not increased charges to students because of their Ruckus subscriptions.
The RIAA supports several such means to promote legal campus downloads. But they aren't catching on.
"The commercial alternatives simply don't provide the services consumers want," Educause's Worona said. "They can't download to an iPod or move tracks from place to place, and many don't have a full range of selection. It doesn't make sense for Congress to mandate something that has failed in the public marketplace."
Universities still have hope based on the Senate version of the education bill (S. 1642), passed last summer. In response to vocal critics such as Educause, Majority Leader Harry Reid (D-Nev.) withdrew his amendment to require federally funded universities to use technological deterrents.
A House-Senate conference committee is to meet this year to work out differences in a final bill to be sent to the White House.
RIAA Tactics to Combat Piracy Again in Question
Commentary: Recording association, Tony Soprano -- not much difference?
As any fan of "The Sopranos" knows, the mob often takes out its enemies in a gruesome fashion as a way to warn others to fall in line.
The same can be said of the campaign over the past four years instigated by the dreaded Recording Industry Association of America, more commonly known as the RIAA, which has been on a mission to stop or slow down the practice of illegal music downloading online.
Their special target, as most people know, has been college students, with some seeing their very education come under threat for what used to be a time-honored tradition -- copying their friends' music.
That copying, of course, has taken on a much larger scale with the Internet, which allows students to share songs and albums by the thousands -- often without paying a dime.
"This is a form of tough love," said Jonathan Lamy, a spokesman for the RIAA in Washington, which is made up of the biggest music industry labels. Last February, in an effort to step up the pace, the RIAA began sending "pre-lawsuit letters" to universities, which then forward them on to students associated with certain Internet accounts in question. The RIAA asks first for a few thousand dollars in payment and warns that the computer owner could face a federal lawsuit.
No room for negotiation
Much like the New York mob family in "The Sopranos," the RIAA is trying to send a blunt message -- that downloading free music using peer-to-peer networks could cost downloaders dearly.
I don't condone music piracy, but the RIAA's tactics are nearly as bad as the actions of mobsters, real or fictional. The analogy comes up easily and frequently in any discussion of the RIAA's maneuvers.
Lawyers defending alleged music pirates say the biggest problems are these: there is no room for negotiation with the RIAA, many students are wrongfully targeted, and most settle for several thousand dollars because they fear even bigger legal costs or fines down the road.
"My students were saying it's extortion," said Robert Talbot, a professor at the University of San Francisco School of Law. Talbot teaches an Internet and intellectual property clinic, and now many of his law students are volunteering to help those who receive threatening letters from the RIAA.
"The letters are kind of scary," said Talbot. "These are usually kids who are 17 or 18 years old, they don't have any money and they are scared."
Talbot said his students are working on one case where four kids share one computer. "Students are trying to negotiate, but I don't have much hope. They don't want to negotiate. It's pay up or we go into federal court."
Going to court
The music industry had its first big win last October when a jury in Duluth, Minn., found Jammie Thomas, a single mother of two, liable for copyright infringement and ordered her to pay $9,250 for each of the 24 shared songs cited in the lawsuit, or a total of $222,000.
The tactics of the RIAA were highlighted in a more recent lawsuit, filed last week in a federal court in Portland, Ore., which alleged that the group is violating federal racketeering laws under the Racketeer Influenced and Corrupt Organizations Act.
In that case, which was nearly thrown out by the judge, plaintiff Tanya Andersen alleges in an amended complaint that music industry defendants engaged in a campaign of "threat and intimidation," "using flawed and illegal private investigation information," in an attempt to "coerce payment from private citizens across the United States."
The RIAA says it has been sued at least four or five times in cases involving RICO statutes, and none of these suits has prevailed.
"They have been invariably rejected by the courts," Lamy said. He also said that the RIAA has negotiated with many students. "We don't have any interest in bringing a lawsuit against the wrong person," he added.
The original RIAA case against Andersen was eventually dismissed after it was discovered that her computer was not used to download the music in question, but she has countersued for legal fees. The amended complaint filed in Oregon seeks class-action status for other people victimized by the anti-piracy campaign of the RIAA and four record companies.
When asked for financial specifics on the damage students are doing to the recording industry, the RIAA says it does not have data on piracy at universities but, citing figures from the Institute for Policy Innovation, claims that global music piracy causes $12.5 billion in economic losses every year and has cost the U.S. 71,060 jobs.
Another data point, however, seems to indicate that though the RIAA's efforts are filling its coffers with millions of dollars, its letters and suits may not even be deterring music piracy. Fred von Lohmann, a senior intellectual property attorney with the Electronic Frontier Foundation, said recent data from Big Champagne of Los Angeles indicate that there is "more file sharing than ever before." Lamy of the RIAA said peer-to-peer traffic is "essentially flat."
Defense lawyers say most students who get these letters are settling for an average of $4,000. The RIAA says settlements suggested in the pre-lawsuit letters are within the bounds of copyright laws, and are even less than what the law allows.
"Copyright law allows from $750 up to $150,000 per work," Lamy said.
It isn't right to jeopardize someone's education. Granted, some wealthier students just show the letter to their parents, who quickly pay to make the case quietly go away. The RIAA should just charge students double the rate of a song on iTunes (99 cents) for every song they are found downloading. But Lamy said settlements need to be of "consequence" to deter the activity in the first place.
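To see how far apart the column's suggested penalty and current practice are, here is a rough comparison using only figures quoted in this piece (the 99-cent iTunes price, the $4,000 average settlement reported by defense lawyers); it is an illustration, not independent data:

```python
# Rough comparison of the column's proposed penalty with reported settlements.
# All figures are those quoted in the article.
itunes_price = 0.99                    # dollars per song on iTunes at the time
proposed_per_song = 2 * itunes_price   # the suggested "double iTunes" penalty
avg_settlement = 4000                  # dollars, per defense lawyers

# Songs a student would need to be caught sharing before the proposed
# penalty reached a typical pre-lawsuit settlement:
songs_to_match = avg_settlement / proposed_per_song
print(round(songs_to_match))           # 2020
```

In other words, under the proposal a typical $4,000 settlement would correspond to roughly two thousand songs rather than a handful, which is exactly why the RIAA argues such a rate would carry no deterrent "consequence."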
Von Lohmann and the EFF have long argued that the RIAA campaign is unwise and unfair: the music industry is using scare tactics to bilk millions of dollars from college students who can ill afford it, a stop-gap measure while it tries to figure out a business model amid its biggest seismic shift ever. Even Apple (AAPL), which created the most successful way to sell digital music legally through iTunes, is reportedly looking at new options for selling digital music.
"In three to four years when you figure out your business model, what are you going to say to the thousands of kids who had to drop out of school?" von Lohmann said. "Are you just going to say, I'm sorry?"
RIAA Pockets Filesharing Settlement Money, Doesn't Pay Artists Whose Copyrights Were Infringed
None of the estimated $400 million that the RIAA received in settlements with Napster, KaZaA, and Bolt over allegations of copyright infringement has gone to the artists whose copyrights were allegedly infringed. Now the artists are considering suing the RIAA.
Lawyers who have represented artists such as The Rolling Stones, Van Halen, and Christina Aguilera say artists and managers are upset that they haven't seen any of the settlement money the RIAA received after suing the popular file-sharing services. According to the New York Post, the artists are "girding for battle with their music overlords," who respond that they have "started the process" of figuring out how to share the money, most of which was received seven years ago in a settlement with Napster. The RIAA also claims that there isn't actually that much money available after subtracting legal fees. Whoops.
Download Music from Your Friends' iTunes Libraries Over the Internet with Mojo
Windows/Mac only: Share any song in your iTunes library and download any song from your friends' iTunes libraries over the internet with freeware application Mojo. Essentially, Mojo makes sharing music with your friends through iTunes wildly simple, from its clean interface to its brilliant implementation. If you've ever used apps like previously mentioned ourTunes to download music from shared libraries, you have an idea of what Mojo does, but you should still prepare to be amazed. I'm head over heels for Mojo, so hit the jump for a full-on screenshot tour and detailed walk-through of everything Mojo has to offer.
To get started, you need to download and install Mojo on your computer (it's fully ready to go on Macs, and currently in beta for Windows). The first time you run Mojo, you'll be asked to create an account. Do that, then you'll see the Mojo friends window, which is much like a buddy window on an instant messenger client. Granted, you won't have any buddies in this window to begin with (unless Mojo has also been installed on another computer on your local network), but don't worry, you will.
Next, let's say your friend downloads and installs Mojo as well. They give you their user name, you hit the little plus (+) sign to add them as a buddy, and they're sent an approval request. They approve you, and voilà—you now have access to every song in their iTunes library. So what now?
Browsing and Downloading Music
To browse your friend's library, just double-click their entry in the buddy window. Mojo will open a new window which shows every song in their library and their playlists, along with their Movies, TV Shows, Podcasts, and Audiobooks. Double-click any song to play it back, and to download a song (or even video), just click the download arrow next to the song or the big download button at the bottom of the screen.
Mojo will download the song and automatically add it to your iTunes library. Additionally, it will even create a playlist in a folder called Mojo containing all the songs you downloaded from that friend.
You may be thinking: sure, this is impressive, but what else can it do? Well, for one, Mojo automatically detects whether or not you already have a song in your iTunes library; any song you've already got displays in Mojo in light gray. And if your friend has purchased a song from the iTunes Music Store and it's dripping with nasty DRM, Mojo highlights those tracks in red.
So What's the Catch?
If you've already checked out the Mojo homepage, you may notice that there is a premium version of the application. Luckily for all of the cheapskates out there like me, you really don't need to buy the premium version to enjoy most of the best features of Mojo. But let's say you do want to go Pro. Here's what you get:
Playlist subscriptions. Subscribing to a playlist in your friend's library automatically downloads the music in that playlist as your friend adds to it. As far as I can tell, that's it. Crazy cool, yes, but if you don't want to shell out for it, it's really not that must-have.
Right now, as I said, Mojo is available and ready for primetime on the Mac, and is currently in beta for Windows users. The app takes practically zero know-how to set up and get started with, and everything it does is near perfect. I've only tested it on my Mac so far, so if you give the beta a try on Windows, let's hear how it's working in the comments. For another detailed usage overview, check out the introduction screencast from Mojo.
AnyDVD HD Is Here, So Start the Blu-ray BD+ DRM Crackin'
Late last year, disc-copying software maker SlySoft claimed they cracked the BD+ DRM protection in Blu-ray discs. They weren't kidding. The newest version of AnyDVD HD strips Blu-ray discs of BD+, allowing you to copy even the most locked-up Blu-ray discs (*cough*Fox*cough*) to your heart's content—assuming the copies are for personal use, of course. On the DVD front, the updated software rips movies that can't be read by Windows, and can now get around most ARccOS protection. Sounds like a reasonable temptation to all you pirate types, so run along, have at it and report back to us. [SlySoft] Thanks, Mike!!
Web Creator Rejects Net Tracking
The creator of the web has said consumers need to be protected against systems which can track their activity on the internet.
Sir Tim Berners-Lee told BBC News he would change his internet provider if it introduced such a system.
Plans by leading internet providers to use Phorm, a company which tracks web activity to create personalised adverts, have sparked controversy.
Sir Tim said he did not want his ISP to track which websites he visited.
"I want to know if I look up a whole lot of books about some form of cancer that that's not going to get to my insurance company and I'm going to find my insurance premium is going to go up by 5% because they've figured I'm looking at those books," he said.
Sir Tim said his data and web history belonged to him.
He said: "It's mine - you can't have it. If you want to use it for something, then you have to negotiate with me. I have to agree, I have to understand what I'm getting in return."
Phorm has said its system offers security benefits which will warn users about potential phishing sites - websites which attempt to con users into handing over personal data.
Kent Ertugrul, chief executive of Phorm, told BBC News: "We have not had the chance to describe to Tim Berners-Lee how the system works and we look forward to doing that.
"We believe Phorm makes the internet a more vibrant and interesting place. Phorm protects personal privacy and unlike the hundreds of other cookies on your PC, it comes with an on/off switch."
The advertising system created by Phorm highlights a growing trend for online advertising tools - using personal data and web habits to target advertising.
Social network Facebook was widely criticised when it attempted to introduce an ad system, called Beacon, which leveraged people's habits on and off the site in order to provide personal ads.
The company was forced to give customers a universal opt out after negative coverage in the media.
Sir Tim added: "I myself feel that it is very important that my ISP supplies internet to my house like the water company supplies water to my house. It supplies connectivity with no strings attached. My ISP doesn't control which websites I go to, it doesn't monitor which websites I go to."
Sir Tim Berners-Lee talks about the future of the internet
Talk Talk has said its customers would have to opt in to use Phorm, while the two other companies which have signed up - BT and Virgin - are still considering both opt in or opt out options.
Sir Tim said he supported an opt-in system.
"I think consumers' rights in this are very important. We haven't seen the results of these systems being used."
Privacy campaigners have questioned the legality of ISPs intercepting their customers' web-surfing habits.
But the Home Office in the UK has drawn up guidance which suggests the ISPs will conform with the law if customers have given consent.
Sir Tim also said the spread of social networks like Facebook and MySpace was a good example of increasing involvement in the web. But he had a warning for young people about putting personal data on these sites.
"Imagine that everything you are typing is being read by the person you are applying to for your first job. Imagine that it's all going to be seen by your parents and your grandparents and your grandchildren as well."
But he said he had tried out several of the sites, and thought they might in the end be even more popular with the elderly than with young people.
Sir Tim was on a short visit to Britain from his base at MIT in Boston, during which he met government ministers, academics and major corporations, to promote a new subject, Web Science.
This is a multi-disciplinary effort to study the web and try to guide its future. Sir Tim explained that there were now more web pages than there are neurons in the human brain, yet the shape and growth of the web were still not properly understood.
"We should look out for snags in the future," he said, pointing to the way email had been swamped by spam as an example of how things could go wrong. "Things can change so fast on the internet."
But he promised that what web scientists would produce over the coming years "will blow our minds".
A Push to Limit the Tracking of Web Surfers’ Clicks
AFTER reading about how Internet companies like Google, Microsoft and Yahoo collect information about people online and use it for targeted advertising, one New York assemblyman said there ought to be a law.
So he drafted a bill, now gathering support in Albany, that would make it a crime — punishable by a fine to be determined — for certain Web companies to use personal information about consumers for advertising without their consent.
And because it would be extraordinarily difficult for the companies that collect such data to adhere to stricter rules for people in New York alone, these companies would probably have to adjust their rules everywhere, effectively turning the New York legislation into national law.
“Should these companies be able to sell or use what’s essentially private data without permission? The easy answer is absolutely not,” said the assemblyman who sponsored the bill, Richard L. Brodsky, a Democrat who has represented part of Westchester County since 1982.
Mr. Brodsky is not the only lawmaker with this idea. In Connecticut, the General Law Committee of the state assembly has introduced a bill that focuses on data collection rules for ad networks, the companies that serve ads on sites they do not own.
The New York bill, still a work in progress, is shaping up as much broader. Although it is likely to see some tinkering before it comes to a vote — which Mr. Brodsky hopes will happen this spring — it aims to force Web sites to give consumers obvious ways to opt out of advertising based on their browsing history and Web actions.
If it passed, computer users could request that companies like Google, Yahoo, AOL and Microsoft, which routinely keep track of searches and surfing conducted on their own properties, not follow them around. Users would also have to give explicit permission before these companies could link the anonymous searching and surfing data from around the Web to information like their name, address or phone number.
Because there is no federal legislation on these subjects, Mr. Brodsky’s bill — and, to a lesser extent, the one in Connecticut — could set interesting precedents.
“A law like this essentially takes some of the gold away from marketers,” said Joseph Turow, a professor at the Annenberg School for Communication at the University of Pennsylvania. “But it’s the right thing to do. Consumers have no idea how much information is being collected about them, and the advertising industry should have to deal with that.”
Web companies in the advertising business, which have spent the last few years busily courting advertising agencies to persuade them to shift their clients’ ad dollars to the Internet, are now lavishing their attention on Albany. In recent weeks, Microsoft and Yahoo have sent lobbyists to meet with Mr. Brodsky, and AOL, a unit of Time Warner, is planning a meeting. Unlike most Web companies, Microsoft favors legislation about online privacy and advertising practices and has lobbied federal lawmakers to establish regulations, said Michael Hintze, associate general counsel for Microsoft.
Microsoft asked Mr. Brodsky to broaden his bill to include all sorts of companies that serve ads around the Web, not just those that show ads based on users’ behavior. Such a change would create a bill that more clearly includes Microsoft’s chief competitor, Google.
Mr. Brodsky says he has asked the Web companies point-blank if they would support legislation similar to what he has proposed. Microsoft gave him a firm “yes,” but Yahoo, he said, seemed to be opposed to any sort of regulation. Yahoo declined to comment on its meeting with Mr. Brodsky.
Targeted advertising, the kind based on consumer data, is one reason that big brands like Coca-Cola and General Motors have been shifting their ad budgets to the Web. The largest Web companies collect data about Web-surfing consumers hundreds of times a month and use the information to help clients show different ads to different people, based on their demographics and interests.
It is unclear how much consumer data is really needed for effective online advertising. The attitude among Web companies is that more is always better, but Mr. Brodsky said there might be a compromise position that enables many ad practices but enhances consumer protection.
“What we have with this new technology is a conflict between the economic model of the Internet and consumers’ reasonable expectations of privacy,” Mr. Brodsky said.
He has sponsored three recently passed laws that relate to Internet security, but the pending bill is his first involving online advertising. Mr. Brodsky said he became concerned with advertising practices last spring when privacy activists contacted him about Google’s plan to buy DoubleClick, a company that delivers ads to Web sites. That deal, now worth about $3.2 billion, drew antitrust scrutiny but has recently cleared all regulatory hurdles; it was one of many that have helped consolidate consumer data in the hands of a few Internet companies.
Not surprisingly, executives in the advertising industry say that concerns like Mr. Brodsky’s are unwarranted.
“There has really been no harm shown by behavioral targeting or third-party advertising, so this rush to regulate the Internet is really unnecessary,” said Mike Zaneis, vice president for public policy for the Interactive Advertising Bureau, an industry group that represents companies like Google and Yahoo.
Moreover, Mr. Zaneis said, the New York bill threatens to undercut the business model that supports the Web. “If you take the fuel out of this engine, you begin to see the free services and content dry up,” he said.
Another view is that the genie is already out of the bottle. Data collection by online ad companies is already widespread. Advertisers have come to expect Web companies to sell them ads based on copious consumer data, and it might be difficult to beat back that expectation.
Furthermore, some Web executives say the Internet is changing far too fast for lawmakers to keep up. “Taking a snapshot of what should be the standard today probably will not be a lasting and durable solution,” said J. Trevor Hughes, executive director of the Network Advertising Initiative, a group of online advertising networks that voluntarily produced and agreed to a set of privacy standards.
The Federal Trade Commission, which regulates advertising on the national level, has proposed voluntary privacy guidelines and is receiving comments about those rules until April 11. A spokeswoman for the commission declined to comment on the bills pending in New York and Connecticut.
Mr. Brodsky said he welcomed input about his bill and was working to modify it. “In the end, I don’t have a philosophical objection to targeting, if it’s done with permission,” he said. “But it is absolutely clear that people right now do not understand what they’re actually giving up.”
2 Witnesses Tell Court of Threats by Detective
David M. Halbfinger
In some of the most colorful testimony so far in the Hollywood wiretapping trial, the actress Linda Doucett and the former wife of a wealthy Los Angeles investor on Wednesday accused the private eye Anthony Pellicano of threatening them.
Mr. Pellicano, who is charged with racketeering, is accused of bribing police and telephone company workers to run illegal database checks and install illicit wiretaps to gain an edge in legal disputes.
Among those who benefited, prosecutors say, is Brad Grey, now chairman of Paramount Pictures. When the comedian Garry Shandling sued Mr. Grey, his former manager, Mr. Pellicano worked for the defense. Ms. Doucett — who was once Mr. Shandling’s fiancée, had lived with him and co-starred on his HBO series “The Larry Sanders Show” — was a witness in that case.
Ms. Doucett, who evidence shows was the subject of several illegal database checks, told of getting a menacing phone call in November 2003, not long after she had been interviewed for the first time by Stanley Ornellas, the F.B.I. agent leading the Pellicano investigation. The caller alluded to that meeting, she said, and to a call Ms. Doucett had just received from a reporter asking about Mr. Pellicano.
Ms. Doucett said the caller also threatened to harm her young son. “If you talk to your friend Stan, or the press, you won’t be seeing your child anymore and he won’t be going to St. Gene’s any more,” she quoted the caller as saying, referring to her son’s school.
On cross-examination by Mr. Pellicano, who is representing himself, Ms. Doucett said the F.B.I. traced the call but never charged anyone. But she said she knew “in my heart” that Mr. Pellicano was responsible.
Asked how, she replied: “You’re the only bad guy I’ve ever known that knew anything about my personal life.”
Before Mr. Pellicano could take his seat, Ms. Doucett turned the tables on him. “Why did you investigate me?” she asked. She got no answer.
Earlier Wednesday, Jude Green, the former wife of the investor Leonard I. Green, testified that Mr. Pellicano threatened her twice and her lawyer once during a lengthy divorce fight.
Her first lawyer, Stephen Kolodny, warned her that Mr. Pellicano was on the case, saying she should get a good shredder and that she could not even go to a judge for help. According to Ms. Green, the lawyer told her: “ ‘Tony Pellicano runs family law. You’ll have to watch your phones, watch your back, and watch your garbage.’ ”
Ms. Green said she, too, was threatened by Mr. Pellicano, once in a bizarre encounter as she was taking her dog to be groomed. Mr. Pellicano boxed her car into a parking lot with his own, glared at her silently and refused to let her leave, she said. When she threatened to hit his car with her own, he moved it, she said, but then he followed her to a coffee shop.
On cross-examination by Mr. Pellicano, she testified that he pursued her inside and began shoving her from behind until she yelled at him to back off, with an expletive, and fled.
“You remember that?” Ms. Green snapped from the witness stand.
RI Club Fire Figure Released From Prison
The band manager whose pyrotechnics display sparked a nightclub fire that killed 100 people in 2003 was freed from prison Wednesday after serving less than half of his four-year sentence.
Daniel Biechele, 31, walked from the front door of Rhode Island's minimum security prison into his lawyer's car at midday and was driven away. He did not respond to questions as he got into the vehicle.
His attorney, Thomas Briody, said in a statement that Biechele would not make any public statements "out of respect for those people most affected by the fire."
"He was a private citizen before this tragedy, and he wishes to remain so," Briody said.
Briody has declined to discuss future plans for Biechele, who married just before reporting to prison. But a spokeswoman for the Florida Department of Corrections has said Biechele will be assigned a parole officer and will live in Casselberry, Fla., outside Orlando.
Biechele, the former tour manager for the 1980s rock band Great White, pleaded guilty in 2006 to 100 counts of involuntary manslaughter for his part in the fire at The Station nightclub in West Warwick.
Sparks from the pyrotechnics display at the start of Great White's set on the night of Feb. 20, 2003, ignited flammable foam that lined the walls and ceiling of the one-story wooden roadhouse. The flames sent out toxic black smoke and created temperatures so high that most of the dead were killed within minutes. Panicked concertgoers became trapped at the front door. More than 200 people were injured.
Biechele was indicted along with club owners Jeffrey and Michael Derderian on 200 counts each of involuntary manslaughter. He was to have been the first of the three to stand trial, but struck a plea deal with prosecutors. He got four years, plus an additional 11 years suspended and three years' probation.
The parole board unanimously decided in September to release Biechele early, saying he had shown genuine remorse and had the support of family members of victims.
Many family members have said they appreciated Biechele's apologies; he sent each family a handwritten letter after his sentencing to express his remorse. He also tearfully apologized at his sentencing hearing, saying he wasn't sure he could ever forgive himself, and didn't expect forgiveness from anyone else.
"I don't think he had as big a role in what happened that night as some other people, and he was man enough to admit his mistake, show some genuine remorse and do his time with dignity," said Chris Fontaine, whose son, Mark, died in the fire.
"Here's a young man who has to live with his actions for the rest of his life," Fontaine said Wednesday after Biechele's release. "I think that's sufficient punishment."
Michael Derderian, who is serving a four-year sentence after pleading no contest to involuntary manslaughter for installing the foam, is due out on parole in October 2009.
Jeffrey Derderian was spared prison time and sentenced instead to probation and 500 hours of community service. He completed his community service requirement last year with a local fire and rescue company and with a national agency that works for burn survivors.
Media Companies Embrace Peer-to-Peer Technology
Networks go legit, but piracy still widely practiced
The technology best known for pirating movies, music and software online is increasingly being adopted by businesses as a cheap way to get video content to customers.
A number of start-ups are embracing so-called peer-to-peer technology and have persuaded some big-name media companies to use them to deliver legal content.
"In 2005 when we met with content owners, 'peer-to-peer' was a dirty word," said Robert Levitan, chief executive of file-sharing company Pando Networks. "In 2007, finally, content owners came and said 'Yeah, we think there's a role for P2P.' "
Levitan was speaking Friday at the first P2P Market Conference of the Distributed Computing Industry Association, a trade group with more than 100 members.
Pando is a prime example of mainstream acceptance: It's providing the means for NBC to offer DVD-quality downloads of its shows, including "The Tonight Show" with Jay Leno.
But 90 percent of P2P downloads are still of illegally copied content, according to David Hahn, vice president of product management at SafeNet, which tracks the networks.
Hahn said 12 million to 15 million people are file-sharing across the world at any one time, mainly on the BitTorrent and eDonkey networks. The attraction of file-sharing is not just that it's free - there's also content available that can't be had by legal means, like TV shows that haven't aired in Europe.
The BitTorrent software was invented and set free on the Net in 2002 by Bram Cohen. He later started a company to profit from the technology. In 2005, BitTorrent stopped providing links to copyright content and now helps studios distribute movies.
Overall, acceptance of P2P technology is higher in Western Europe, where piracy using the technology also happens to be especially rampant, according to SafeNet.
The British Broadcasting Corp. uses P2P technology from Mountain View-based VeriSign for its iPlayer, which streams some of its most popular shows. French TV channels are using software from 1-Click Media, which claims 1 million users a day. The Norwegian public broadcasting service recently started using BitTorrent software to get its shows out.
Media companies don't need P2P technology to provide video over the Internet. They can hire so-called content delivery networks, or CDNs, to get the media to their customers, at a cost of about 25 to 35 cents per gigabyte. Doug Walker, chief executive of BitTorrent, put the size of this market at $680 million this year.
But P2P technology can offload much of the work of the CDNs by having subscribers who have downloaded the data already send it to subscribers who haven't. That cuts the cost of delivery by 50 to 90 percent, according to several of the companies presenting at the conference.
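The savings the presenters describe are easy to sketch. The figures below (a hypothetical 100,000 GB month of video traffic and a $0.30/GB CDN rate) are my own illustrative assumptions drawn from the ranges quoted in the article, not numbers any of these companies published:

```python
def delivery_cost(gb_delivered, cdn_rate_per_gb=0.30, p2p_offload=0.0):
    """Publisher's delivery bill: only the fraction NOT served
    peer-to-peer is billed at the CDN's per-gigabyte rate."""
    return gb_delivered * cdn_rate_per_gb * (1.0 - p2p_offload)

catalog_traffic_gb = 100_000  # hypothetical month of video traffic

pure_cdn = delivery_cost(catalog_traffic_gb)                    # no peer assist
with_p2p_low = delivery_cost(catalog_traffic_gb, p2p_offload=0.50)
with_p2p_high = delivery_cost(catalog_traffic_gb, p2p_offload=0.90)

print(pure_cdn, with_p2p_low, with_p2p_high)
```

At the article's 50-90% offload range, the same traffic costs the publisher half to a tenth as much, which is the whole commercial pitch of these hybrid P2P/CDN services.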
The P2P programs used by Pando and VeriSign are quite different from BitTorrent and eDonkey. They don't let consumers distribute their own content. What comes down the pipe is strictly from the media companies that contract with the P2P companies. The consumers may not even know they're using P2P software - all they know is that they've installed video player software on their computer.
So far, Internet service providers have been left out of the equation even though they're saddled with the burden of conveying all the extra traffic. Some of them have partially blocked or slowed down P2P traffic to keep it from swamping their networks.
But the adversarial relationship is changing: At the conference, Verizon Communications presented the results of a test showing that when Verizon shared information about its network with Pando so the software could optimize downloads, the two companies were able to speed up downloads and reduce Verizon's cost of carrying the traffic.
However, not all Internet service providers are likely to get on board with that solution. It may work well for phone companies, but cable companies have a different structure to their networks, and it may not address their concerns.
Limewire Music Store Now Open in Beta
Limewire, formerly a popular Gnutella-based P2P file-sharing service, has opened the beta of its DRM-free download store.
Though the store was announced in August of last year, Limewire's DRM-free download shop has only now opened in public beta, offering tracks on an a-la-carte or subscription basis.
A delay is understandable, as the architecture of the service has changed considerably. Rather than being transferred peer-to-peer, the 256K-encoded MP3s -- and even a limited number of 500K files -- are hosted on Limewire's servers. Purchases can take place through the Web site in a browser, or through the P2P client's own interface.
Limewire's service saw its first peak in popularity approximately four years ago, which drew an RIAA crackdown on file sharing. Last year, reports of the P2P client's reach were mixed: Digital Music News claimed the software was installed on as many as 36% of PCs, while TorrentFreak claimed Limewire constituted 18% of all P2P clients deployed. (You're invited to try to do the math.)
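One way to take up that invitation to "do the math": at face value, the two surveys can't both be right. A quick sketch (the percentages are the article's; the arithmetic and the reading of it are mine):

```python
# Digital Music News: Limewire installed on 36% of PCs.
# TorrentFreak: Limewire accounts for 18% of all deployed P2P clients.
limewire_share_of_pcs = 0.36
limewire_share_of_p2p_clients = 0.18

# If both figures held, total P2P installs per PC would be:
implied_p2p_installs_per_pc = limewire_share_of_pcs / limewire_share_of_p2p_clients
print(implied_p2p_installs_per_pc)  # → 2.0
```

Two P2P clients on every PC in existence is implausible, so at least one of the two figures is off, which is presumably the article's point.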
Though BitTorrent clients have frequently been blamed by ISPs for gluttonous bandwidth consumption, no single client has reached an installed base as large as Limewire's.
Limewire's store enters the growing ranks of DRM-free music shops populated by Amazon.com, iTunes, eMusic and 7digital.
Indie Labels Take E-Commerce Into Their Own Hands
With their digital download sites, a growing number of indie rock labels have begun to answer the prayers of fans who would love to hear long-out-of-print singles on their iPods or other mobile devices.
Merge Records became the latest to join the field with the recent launch of its online emporium, which, according to label president Mac McCaughan, features "high-quality MP3s and full FLAC (free lossless audio codec) files of recent, older and out-of-print titles, including all the early Merge singles, as well as the Superchunk 'Clambakes' series." The store will also eventually host exclusive tracks, remixes and video content, in addition to the label's catalog.
Given the wealth of options available to indies that want to peddle their merchandise online, why would a label want to sink the time and money into developing its own store? Merge wouldn't divulge how much it cost to build its online store, but it did say that most of the expenses were upfront. And whatever profits it makes will go directly to the label and bands, Merge publicist Christina Rentz said. "There is no middleman taking fees, so we are the only ones who benefit."
The ability to promote artists on label download sites is also key. Rentz said that through a "recommended artists" feature on the Merge site -- similar to Amazon's -- the label will promote lesser-known or older artists.
Such sites can also help foster a new ethic of digital-song ownership. After a song is purchased at Seattle label Sub Pop's download store, launched in fall 2007, "you can log on to your account page and download it as many times as you want," director of technology and digital development Dean Hudson said. "We are also able to do things like automatically upgrade songs without any cost to the buyer once the song becomes available at a higher bit rate. And of course, all the songs are (digital-rights-management)-free."
Changing Buyers' Habits
Perks like those aside, driving buyers to a single-label online store can be a challenge, especially if they are used to purchasing all their music from one, multilabel outlet, such as eMusic or iTunes. Def Jux, one of the first indie labels to start a download site, circumvents the problem by making its Web site and Web store one and the same.
Many other labels' digital stores are directly connected to their online physical stores as well, so that users can purchase T-shirts, CDs and MP3s all at once. "We are counting on our mail-order customers being our early adopters," Rentz said. "Our goal is to make it a real one-stop shop."
Most of those one-stop-shop customers aren't trying to replace long-lost discs from their high school years, however. In fact, label representatives say that new releases account for the bulk of their online sales.
"Our highest growth months have always been those with new releases," Def Jux general manager Jesse Ferguson said. "They tend to bring the most new people to the site."
Hudson noted a similar phenomenon: "People do dip into the catalog from time to time," he said. "But in general, the newer stuff sells."
And when the newer stuff does sell, it sells for pretty much the same price it would on iTunes. Merge will sell its tracks for 99 cents each; Def Jux's albums are $9.95 each, and Sub Pop's are $9.90. McCaughan said he chose the price structure for philosophical reasons: "Driving down the price of downloads will devalue the music."
The Royalty Scam
LAST week at South by Southwest, the rock music conference held every year in Austin, Tex., the talk in hotel lobbies, coffeeshops and the convention center was dominated by one issue: how do musicians make a living in the age of the Internet? It’s a problem our industry has struggled with in the wake of the rising popularity of sharing mp3 music files.
Our discussions were brought into sharp relief when news reached Austin of the sale of Bebo.com to AOL for a staggering $850 million. Bebo is a social-networking site whose membership has risen to 40 million in just two years. In Britain, it ranks with MySpace and Facebook in popularity, although its users tend to come from a younger age group.
Estimates suggested that the founder, Michael Birch (along with his wife and co-founder, Xochi), walked away with $600 million for his 70 percent stake in the company.
I heard the news with a particular piquancy, as Mr. Birch has cited me as an influence in Bebo’s attitude toward artists. He got in touch two years ago after I took MySpace to task over its proprietary rights clause. I was concerned that the site was harvesting residual rights from original songs posted there by unsigned musicians. As a result of my complaints, MySpace changed its terms and conditions to state clearly that all rights to material appearing on the site remain with the originator.
A few weeks later, Mr. Birch came to see me at my home. He was hoping to expand his business by hosting music and wanted my advice on how to construct an artist-centered environment where musicians could post original songs without fear of losing control over their work. Following our talks, Mr. Birch told the press that he wanted Bebo to be a site that worked for artists and held their interests first and foremost.
In our discussions, we largely ignored the elephant in the room: the issue of whether he ought to consider paying some kind of royalties to the artists. After all, wasn’t he using their music to draw members — and advertising — to his business? Social-networking sites like Bebo argue that they have no money to distribute — their value is their membership. Well, last week Michael Birch realized the value of his membership. I’m sure he’ll be rewarding those technicians and accountants who helped him achieve this success. Perhaps he should also consider the contribution of his artists.
The musicians who posted their work on Bebo.com are no different from investors in a start-up enterprise. Their investment is the content provided for free while the site has no liquid assets. Now that the business has reaped huge benefits, surely they deserve a dividend.
What’s at stake here is more than just the morality of the market. The huge social networking sites that seek to use music as free content are as much to blame for the malaise currently affecting the industry as the music lover who downloads songs for free. Both the corporations and the kids, it seems, want the use of our music without having to pay for it.
The claim that sites such as MySpace and Bebo are doing us a favor by promoting our work is disingenuous. Radio stations also promote our work, but they pay us a royalty that recognizes our contribution to their business. Why should that not apply to the Internet, too?
Technology is advancing far too quickly for the old safeguards of intellectual property rights to keep up, and while we wait for the technical fixes to emerge, those of us who want to explore the opportunities the Internet offers need to establish a set of ground rules that give us the power to decide how our music is exploited and by whom.
We need to do this not for the established artists who already have lawyers, managers and careers, but for the fledgling songwriters and musicians posting original material onto the Web tonight. The first legal agreement that they enter into as artists will occur when they click to accept the terms and conditions of the site that will host their music. Worryingly, no one is looking out for them.
If young musicians are to have a chance of enjoying a fruitful career, then we need to establish the principle of artists’ rights throughout the Internet — and we need to do it now.
Copyright is Dead
No wonder they call Economics the Dismal Science. At the Internet Video Policy Symposium in Washington yesterday (co-sponsored by Content Agenda), a chorus line of academic economists postulated that content owners face a far more difficult challenge than they know in monetizing their content on the Internet, and that the odds that we can build our way out of the current debate over how to manage scarce online capacity are virtually nil.
The most enthusiastically glum was Gerry Faulhaber, a professor at the Wharton School of Business at the University of Pennsylvania and the former chief economist for the FCC. According to Faulhaber, copyright is a dead letter.
"Copyright is a very big issue in the legal world today, but in the business world, when you talk to consumers about protecting copyrights, it's a dead issue," he said. "It's gone. If you have a business model based on copyright, forget it."
According to Faulhaber, the "world of open piracy" created by digital technology will always thwart content owners seeking to leverage the monopoly granted to them by copyright law.
"The music industry is yet to figure this out," he said. "The current iTunes model is probably the best they can do. In both movies and music this is likely to result in substantially lower revenue for content owners." The movie studios will have an even tougher time than the music companies, according to Faulhaber, because some of the monetization models that can work for music--such as advertising--probably won't work for full-length movies.
The likely result? "Content providers will have to hook up with the conduit guys," Faulhaber said. "They're the only ones in a position to monetize content online because they can control its distribution."
Faulhaber was also gloomy about resolving the current stand-off over the allocation of bandwidth.
"Video takes lots and lots of bandwidth, and bandwidth is not cheap," he said. "If bandwidth were cheap, the business would be attracting new entrants, which clearly it isn't."
As a result, some degree of "traffic shaping," or "network management" is both essential and inevitable, as it has been for telephone networks for decades. Regulating or prohibiting traffic shaping, Faulhaber claimed, would only make the problem worse.
"Regulating traffic shaping will reduce available capacity," he said. "If demand exceeds supply, total throughput on a network declines, sometimes to zero. The best illustration of this is highway traffic. When the volume of traffic exceeds the capacity of the highway, everyone has to slow down."
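Faulhaber's highway analogy can be made concrete with a toy model (an illustrative sketch of the general idea, not anything presented at the symposium): below capacity, total throughput tracks demand, but past saturation every extra unit of offered load degrades what actually gets through.

```python
def effective_throughput(demand, capacity):
    """Toy congestion model: below capacity, throughput tracks demand;
    past saturation, contention overhead makes total throughput fall,
    sometimes all the way to zero."""
    if demand <= capacity:
        return demand
    overload = demand - capacity
    # Assume each unit of overload costs half a unit of useful throughput.
    return max(0.0, capacity - 0.5 * overload)

for demand in (50, 100, 150, 200, 300):
    print(demand, effective_throughput(demand, 100))
```

The 0.5 penalty factor is arbitrary; the point is only the shape of the curve, in which total throughput collapses as offered load keeps growing past capacity.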
According to Scott Wallsten, senior fellow at the Georgetown University Center for Business and Public Policy, simply building more capacity won't solve the problem.
"Japan has 100 Mbps networks and they still have congestion, and ISPs still have to shape traffic," Wallsten said. "If you price something at zero, people will use too much of it. Creating more capacity alone is not the answer to congestion."
What is? Recognizing the value of network management, according to Wallsten. "The data say that a small number of users are creating an externality," he said, using the economists' term for an action that imposes a cost on parties not directly involved in a transaction. "You need to make those heavy users internalize those costs through something like congestion pricing," he said.
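Wallsten's congestion-pricing idea can be sketched as a simple tariff; the quota, rates, and function name below are hypothetical placeholders, not anything Wallsten proposed.

```python
def congestion_price(usage_gb, quota_gb, base_rate, surcharge_per_gb):
    """Toy congestion pricing: usage within the quota costs the flat base
    rate; overage pays a per-GB surcharge, so heavy users internalize the
    congestion cost they impose on everyone else."""
    overage = max(0.0, usage_gb - quota_gb)
    return base_rate + overage * surcharge_per_gb

light = congestion_price(20, 100, 30.0, 0.50)   # within quota: flat rate
heavy = congestion_price(500, 100, 30.0, 0.50)  # 400 GB over: flat rate plus surcharge
```

Under flat pricing both users pay the same; under this scheme the heavy user's bill scales with the load imposed on the shared network.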
The focus of policy makers, therefore, according to Wallsten, should be on making sure that the unavoidable tools of traffic shaping are not used anti-competitively.
"There's a huge role for anti-trust here, but those laws are already on the books," he said. "We don't need new regulation to prohibit anti-competitive behavior."
Guitar Hero Lawsuit
Gibson Guitar said on Friday that it filed a patent infringement lawsuit against Viacom Inc's MTV networks and Harmonix as well as Electronic Arts relating to the wildly popular "Guitar Hero" video games.
The lawsuit, filed in Federal District Court in Tennessee, relates to the same patent involved in another suit Gibson filed earlier against various retailers, the Tennessee-based guitar maker said in a statement.
The "Guitar Hero" series has sold more than 14 million units in North America and raked in more than $1 billion since its 2005 debut.
Gibson said the games, in which players use a guitar-shaped controller in time with notes on a television screen, violate a 1999 patent for technology to simulate a musical performance.
Harmonix developed the first "Guitar Hero" game and was later bought by MTV. Electronic Arts and another company, Activision Inc, as well as several retailers, either develop, distribute or sell one or several of the games in the "Guitar Hero" series.
"This lawsuit is completely without merit and we intend to defend it vigorously," Harmonix said in a statement.
A spokesperson for Electronic Arts could not be reached for comment.
Earlier this month, Activision filed a preemptive suit against Gibson, which had complained that the games infringe upon one of its patents.
Activision filed a lawsuit asking the U.S. District Court for Central California to declare Gibson's patent invalid and to bar it from seeking damages.
Gibson, whose electric guitars are used by legendary blues and rock artists such as Eric Clapton, B.B. King and Slash, has been a high-profile partner in the "Guitar Hero" games.
Activision licensed the rights to model its video controllers on Gibson guitar models and to use their likenesses in the game.
Activision has said that by waiting three years to raise its claim, Gibson had granted an implied license for any technology.
(Additional reporting by Scott Hillis and Gina Keating in Los Angeles; Editing by Christian Wiessner) http://www.reuters.com/article/indus...47943920080322
Study: Internet Radio Reaches 33 Million Americans Each Week
A new study from Arbitron and Edison Media Research estimates that roughly 33 million Americans ages 12+ listen to a radio station online during the average week. This is an impressive jump from 29 million per week in 2007. The annual "Infinite Dial 2008: Radio’s Digital Platforms" study will be released in April.
Other results from the study showed that 13 percent of Americans 12+ listened to online radio in the past week, an increase of two percentage points from January 2007. Also, while roughly 24 percent of Americans 12+ have a social networking profile on a site such as MySpace or Facebook, 63 percent of online radio listeners have a profile on these sites. One-third of online radio listeners with a social network profile visit their social networking site nearly every day or several times per day. The top social networking Web sites among online radio listeners are MySpace and the professional networking service LinkedIn. The study showed that 28 percent of online radio listeners have a MySpace page, while nearly 24 percent have a profile on LinkedIn.
"Social networking is clearly not about creating exclusive, self-enclosed communities," said Diane Williams, senior analyst, custom research for Arbitron. "We found that online radio listeners are more than one and a half times as likely to have a profile on a social networking site as compared to average Americans, and that they tend to be power-users, with one-third of online radio listeners logging on to their social networking site nearly every day or even multiple times per day."
China Blocks YouTube Over Tibet Videos
Internet users in China were blocked from seeing YouTube.com on Sunday after dozens of videos about protests in Tibet appeared on the popular U.S. video Web site.
The blocking added to the communist government's efforts to control what the public saw and heard about protests that erupted Friday in the Tibetan capital, Lhasa, against Chinese rule.
Access to YouTube.com, usually readily available in China, was blocked after videos appeared on the site Saturday showing foreign news reports about the Lhasa demonstrations, montages of photos and scenes from Tibet-related protests abroad.
There were no protest scenes posted on China-based video Web sites such as 56.com, youku.com and tudou.com.
The Chinese government has not commented on its move to prevent access to YouTube. Internet users trying to call up the Web site were presented with a blank screen.
Chinese leaders encourage Internet use for education and business but use online filters to block access to material considered subversive or pornographic.
Foreign Web sites run by news organizations and human rights groups are regularly blocked if they carry sensitive information. Operators of China-based online bulletin boards are required to monitor their content and enforce censorship.
China has at least 210 million Internet users, according to the government, and is expected to overtake the United States soon to have the biggest population of Web surfers.
Beijing tightened controls on online video with rules that took effect Jan. 30 and limited video-sharing to state-owned companies.
Regulators backtracked a week later, apparently worried they might disrupt a growing industry, and said private companies that were already operating legally could continue. They said any new competitors will be bound by the more stringent restrictions.
Wikileaks Defies 'Great Firewall of China'
Whistleblower website Wikileaks has made 35 censored videos of civil unrest in Tibet available in a bid to get round the "great firewall of China".
Wikileaks said that posting the videos was a "response to the Chinese Public Security Bureau's carte-blanche censorship of YouTube, the BBC, CNN, the Guardian and other sites" that had carried sensitive video footage about Tibet.
Wikileaks, which earlier this month successfully saw off legal action that threatened to shut the website, is calling on bloggers to post footage to help it circumvent the Chinese internet censorship.
China's internal censorship of online and TV coverage of the unrest in Tibet has drawn heavy criticism.
However, the BBC world news editor, Jon Williams, revealed on a BBC Editors blog that the press counsellor at the Chinese embassy in London had indicated that a foreign press trip to Tibet could be on the cards.
Williams said that the Chinese embassy is giving "serious consideration" to organising a foreign press trip to Lhasa, the Tibetan capital.
He added that the embassy's press counsellor, Liu Weimin, had repeated an offer made by the Chinese premier, Wen Jiabao, in Beijing that "serious consideration" was being given to organising an official trip so that "international media could see for themselves the situation in Tibet".
Earlier this week the Guardian editor, Alan Rusbridger, sent a formal letter of complaint to the Chinese embassy in London calling for access to the Guardian website to be restored and "henceforth unfettered".
Chinese authorities can censor online content internally using either an outright block on a specific website address or filtering technology that restricts access to individual online articles containing key words such as "Tibet" and "violence".
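The two techniques the article describes (host-level blocks and keyword filtering of individual pages) can be sketched roughly as follows; the blocklist entries and keywords here are illustrative placeholders, not a description of the actual filtering systems.

```python
BLOCKED_HOSTS = {"youtube.com"}            # outright block on a site address
BLOCKED_KEYWORDS = {"tibet", "violence"}   # filter terms for individual pages

def is_blocked(host, page_text):
    """Return True if a request would be censored under either technique:
    the whole host is blocked, or the page contains a filtered keyword."""
    if host in BLOCKED_HOSTS:
        return True
    text = page_text.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)
```

A host block yields the blank screen the article mentions regardless of content, while keyword filtering lets the rest of a site through and cuts off only the articles that match.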
It has not been clear which technical restrictions the Chinese authorities have been using against international news websites.
However, according to reports from several internet users in China, the censorship appears to have become less draconian this week compared to the weekend, when the worst of the unrest in Tibet was taking place.
Videos on the Guardian website that had previously been inaccessible can now be viewed in China and users in major cities such as Beijing, Shanghai and Guilin have been able to access a range of online news stories on Tibet.
One Chinese technology blogger said that while access has improved it does not necessarily mean that the authorities have relented.
"Suppose there is less access from Chinese readers once they felt the site is hard to access," he said. "The censorship system will turn to other hot sites with higher sensitive hits automatically."
Wikileaks' call for bloggers to post the Tibet videos in a bid to circumvent the Chinese internet restrictions echoes comments made earlier this week by Jaime FlorCruz, the CNN Beijing bureau chief, about how digital information was being disseminated from Tibet.
FlorCruz said that the online and broadcast censorship of media and access in Tibet meant that the only information trickling out from locals was via less mainstream sites including a Chinese blog and a version of Twitter.
CBC To Release TV-Show Via BitTorrent, For Free
CBC, Canada’s public television broadcaster, plans to release the upcoming TV show “Canada’s Next Great Prime Minister” for free via BitTorrent. This makes CBC the first North American broadcaster to embrace the popular file-sharing protocol.
According to an early report, high-quality copies of the show will be published the day after it airs on TV, without any DRM restrictions.
CBC is not alone in this: European broadcasters, including the BBC, are currently working on a next-generation BitTorrent client that will allow them to make their content available online. The benefit of BitTorrent, of course, is that it reduces distribution costs.
The popularity of movies and TV shows on BitTorrent hasn’t gone unnoticed. We reported earlier that some TV studios allegedly use BitTorrent as a marketing tool, and that others intentionally leak unaired pilots.
It is safe to say that BitTorrent is slowly replacing TiVo. Approximately 50 percent of all BitTorrent downloads are TV shows, and some episodes of popular shows such as “Lost”, “Prison Break” and “Heroes” are downloaded up to 10 million times, spread over thousands of sites.
It is good to see that broadcasters are slowly starting to realize that they can benefit from sharing their content via BitTorrent. Last month Norwegian Broadcasting (NRK) made the popular TV show “Nordkalotten 365” available in a DRM-free format. The experiment turned out to be a huge success, while distribution costs were close to zero.
Do Not Adjust Your Set: TV is About to Blow Apart
Is television over? I don’t mean the technology of course. Television, in many ways, has never been better. High definition – although pretty brutal on Republican frontrunner John McCain – has applied Windolene to the televised world and made nature documentaries as riveting as the latest blockbuster. CGI effects have made even Doctor Who as cool as Hollywood.
By television being over, I mean the classic television experience: the ritual of coming home after work, flopping on the couch and simply allowing “what’s on” to flood over you. We still do it of course. As an avenue for the moving image, however, the passive, network-driven model has clearly changed beyond recognition and will soon change still further. The classic television programme, like the classic newspaper, is morphing into something very different.
The internet, in the television industry as in many others, is both the infection and the cure. It will do to television what it has done to journalism: make everyone a producer and everyone a potential star.
You can trace the long arc of this now accelerating transformation from the onset of cable and satellite in the 1980s and 1990s. The mid-20th-century ability of a network such as the BBC or the great American broadcasting companies – such as CBS or NBC – to determine or greatly affect what people saw and felt and thought at any given time was slowly, and mercifully, eroded. That trend was clearly ramped up in the new millennium by TiVo – digital video recorders not unlike the Sky+ system – so that even the far more diverse programming of a hundred different channels came to be sliced and diced by myriad consumer preferences and appetites.
I don’t know about you, but I rarely leave my evening viewing to chance or to programmers any more. It’s planned and recorded in advance. And even if I watch live I delay starting for 10 minutes so I can zip past the advertisements.
The web, in turn, has ratcheted TV consumer power up a couple of notches. In this election season in the United States the shift is unmistakable. Ratings for most cable news shows have soared, but the platforms for dissemination of content have proliferated just as quickly. Some still watch the debates in real time – but it is very easy to watch them the day after online with all the tedious boilerplate removed.
YouTube has the clips; and the instant parodies; and the day-later parodies of the parodies. Blogs now edit their own versions of the highlights, with blogger video commentary introducing or even interrupting them. And so a television moment on a late night comedy show will be designed for multiple audiences: the live one, the TiVoed one and the next day’s online one. The soundbite has become the videobite. And many TV advertisements are given nominal network exposure before having a longer viral life online.
The only limit to the merging of the web and television in this way seems to be the online attention span. Online viewers, sitting at their laptops or gazing at their iPhones, only really want to watch online TV for three minutes at most. And that’s why the old format still has a future.
You don’t want to watch a programme, let alone a full-length film or lengthy documentary, or even a half-hour news broadcast, on a computer screen. What endures online is the quick hit, the short impression, the visual punchline that requires a minimal set-up. For drama or in-depth journalism or even an interview that can actually get beneath the surface of a subject or beyond the spin of a public figure: television still has the edge.
But that too may be changing. No one doubts that the technology of streaming online video to a wide-screen television is on its way, however unsatisfying some of its current manifestations. And that, in turn, means self-produced, web-originated video is on the verge of bigger and better ambitions.
Just as blogging swept away the barriers of entry to journalism, so citizen television will surely begin to reach an audience that appreciates it. Right now it’s merely a blip. Novelty acts or musical parodies or cartoon fun dominate web television. But the principle of mass access to TV audiences through online media has been established.
Already you find print journalists or bloggers switching on a video cam and broadcasting their content live – their own personal television channel. Most online magazines are beginning to generate their own amateurish but classic television chat shows; traditional newsrooms feature online video interviews with their reporters and columnists and send them out to the blogosphere. The share that video is taking of web bandwidth keeps growing exponentially – and may even create traffic jams within a few years’ time.
As so often, this democratisation of production means higher highs and much lower lows. Nutcases and geniuses – thousands who would never in a million years have made it past the professional barriers for old-style TV networks – now broadcast their idiosyncratic monologues to the masses. Hyde Park Corner is no longer a function of mere words-on-pixels. You can now see and hear an opiner opine or watch a conversation unfold. Find yourself someone to interview, set up your video cam and you can have your own show. Just put it on your blog and try to find an audience. Anyone with a modem has their own potential TV channel. It’s just that most people haven’t realised it yet.
And this is where the new medium, with a bit of luck, may reach back and regenerate the old. When you think of the glory days of television chat shows, Americans recall the leisurely erudition of William F Buckley’s Firing Line, where intellectuals and thinkers were able to think out loud for an hour on the subjects of the day, without commercials and with an audience that revelled in more than a five-minute attention span.
That used to be the sober, intellectual, black-and-white BBC as well. It is all but impossible to find such a thing on network or cable television today. But the possibility of an online version is very real and economically feasible.
The great beauty of the online world, after all, is its lack of constraint. No interview need stop in full flow to accommodate an advertisement; the cost is so low that the format can accommodate the content rather than the other way round; and if viewers have sought you out, there is less need for the low attention span gimmick to keep the ratings up.
Equally, a simple 20-second sight gag or joke or comment online can be profoundly effective – a time slot that can’t exist alone on television. The web deconstructs and reconstructs media in ways that the institutions of the past couldn’t muster.
So if you’re still reeling from the impact of blogs on journalism, sit tight. Blogging with words was simply the beginning; blogging with video has only just begun.
Computer Owners to Face TV Licence Fees
Sweden's television licensing agency Radiotjänst is preparing to push through measures requiring computer owners to pay the annual television licence fee of around 2,000 kronor ($327).
The move comes just days after Sveriges Television (SVT) announced plans to begin broadcasting all its programmes live on the internet.
"It is our interpretation that a computer may be considered a licensable piece of equipment if an entire channel's scheduled programming is available over the internet," said Anna Pettersson, CEO of Radiotjänst.
Since Sweden's public service broadcasting laws are "technology neutral", a computer will incur the same licence costs as a television if the same service is available on both.
People who have already paid their TV licence fee -- some 90 percent of Swedish households -- will not be required to pay extra for their computers.
But the move could affect many students, who for financial reasons may have a computer with an internet connection but no television.
The change will however not affect communal student houses where there is a TV for which the fee has already been paid.
"For households like these it doesn't matter how many transmitters there are," said Pettersson.
In Germany, similar measures were put in place in 2007. There, however, computers with internet connections were equated with radios, meaning that the licence fee amounted to €66, considerably less than the €204 ($322) charged for a full television licence.
Court Will Examine Profanity Rules
The Supreme Court on Monday stepped into a legal fight over the use of curse words on the airwaves, the high court's first major case on broadcast indecency in 30 years.
The case concerns a Federal Communications Commission policy that allows for fines against broadcasters for so-called "fleeting expletives," one-time uses of the F-word or its close cousins.
Fox Broadcasting Co., along with ABC, CBS and NBC, challenged the new policy after the commission said broadcasts of entertainment awards shows in 2002 and 2003 were indecent because of profanity uttered by Bono, Cher and Nicole Richie.
A federal appeals court said the new policy was invalid and could violate the First Amendment.
No fines were issued in the incidents, but the FCC could impose fines for future violations of the policy.
The case before the court technically involves only two airings on Fox of the "Billboard Music Awards" in which celebrities' expletives were broadcast over the airwaves.
FCC Chairman Kevin Martin said Monday that he was pleased with the court's decision.
"The Commission, Congress and most importantly parents understand that protecting our children is our greatest responsibility," he said in a prepared statement. "I continue to believe we have an obligation then to enforce laws restricting indecent language on television and radio when children are in the audience."
Fox Broadcasting Co. was also pleased. The decision will "give us the opportunity to argue that the FCC's expanded enforcement of the indecency law is unconstitutional in today's diverse media marketplace where parents have access to a variety of tools to monitor their children's television viewing," company spokesman Scott Grogin said in a prepared statement.
The case will be argued in the fall.
The FCC appealed to the Supreme Court after the 2nd U.S. Circuit Court of Appeals in New York nullified the agency's enforcement regime regarding "fleeting expletives." By a 2-1 vote, the appeals court said the FCC had changed its policy and failed to adequately explain why it had done so.
The appeals court, acting on a complaint by the networks, nullified the policy until the agency could return with a better explanation for the change. In the same opinion, the court also said the agency's position was probably unconstitutional.
The court rejected the FCC's policy on procedural grounds, but was "skeptical that the commission can provide a reasoned explanation for its fleeting expletive regime that would pass constitutional muster."
Solicitor General Paul Clement, representing the FCC and the Bush administration, argued that the decision "places the commission in an untenable position," powerless to stop the airing of expletives even when children are watching.
The FCC has pending before it "hundreds of thousands of complaints" regarding the broadcast of expletives, Clement said. He argued that the appeals court decision has left the agency "accountable for the coarsening of the airwaves while simultaneously denying it effective tools to address the problem."
The appeal also argued that the FCC's explanation of its policy was well reasoned and that the appeals court decision was at odds with the landmark 1978 indecency case, FCC v. Pacifica Foundation, the last broadcast indecency case heard by the Supreme Court.
Lawyers for the networks said the old policy worked well for 30 years and that broadcasters had no reason suddenly to allow for an explosion of expletives.
Separately, CBS is challenging a $550,000 fine the FCC imposed for the "wardrobe malfunction" that bared Janet Jackson's breast during a televised 2004 Super Bowl halftime show. The 3rd U.S. Circuit Court of Appeals in Philadelphia is considering whether the incident was indecent or merely a fleeting and accidental glitch that shouldn't be punished.
The FCC changed its policy on indecency following a January 2003 broadcast of the Golden Globes awards show by NBC when U2 lead singer Bono uttered the phrase "f------ brilliant." The FCC said the "F-word" in any context "inherently has a sexual connotation" and can trigger enforcement.
NBC challenged the decision, but the case has yet to be resolved.
The Fox programs at issue in the case before the high court are a Dec. 9, 2002, broadcast of the Billboard Music Awards in which singer Cher used the phrase "F--- 'em" and a Dec. 10, 2003, Billboards broadcast in which reality show star Nicole Richie said, "Have you ever tried to get cow s--- out of a Prada purse? It's not so f------ simple."
The case is FCC v. Fox Television Stations, 07-582.
Associated Press writer John Dunbar contributed to this report.
FCC Bans Landlords from Making Exclusive Phone Deals
Tenants are no longer bound by exclusivity agreements between apartment owners and service providers.
Landlords can no longer force tenants to subscribe to a particular phone service, regulators decided Wednesday.
The five-member Federal Communications Commission voted unanimously to ban exclusive contracts between apartment owners and phone companies.
The ban, which applies to existing exclusivity agreements, will make it easier for tenants to sign up with a competing phone company or to buy bundled packages from cable companies that include telephone, entertainment and Internet services.
"This decision will help provide Americans living in apartment buildings with the same choices as people that live in the suburbs," FCC Chairman Kevin J. Martin said.
The commission estimates that 100 million Americans are renters.
"We think the FCC did the right thing," said Regina Costa of the Utility Reform Network, a San Francisco consumer group. "Apartment building owners may not be too happy about it, but the people who live in these buildings now have the ability to choose what services are best for them rather than what's convenient for the landlord."
Or profitable. Apartment owners grant exclusive rights to phone companies in return for discounts ranging from 10% to 30%, according to Alex Winogradoff, telecom analyst with technology research firm Gartner Inc. Most of those savings went in the owners' pockets, not the tenants', he said.
Winogradoff estimates that as many as 60% of apartments nationwide now have exclusivity agreements with phone companies, compared with 15% five years ago.
Big phone companies like AT&T Inc., the biggest telephone service provider in California and the U.S., supported the FCC's decision. Groups representing apartment owners opposed it, saying the commission acted without doing enough research.
"If you take away the right of an apartment owner to bargain with various telecom providers to get the best price and service, only the big, dominant providers are going to prevail and you're not going to have better prices or better services because of that," said Jim Arbury, senior vice president of government affairs for the National Multi Housing Council, which represents large building owners.
The ranks of independent phone companies have dwindled in recent years, shrinking the options for tenants freed from exclusivity arrangements.
Last fall, the FCC banned similar agreements between landlords and cable companies. That decision is facing a legal challenge from the cable TV industry.
Comcast: FCC Lacks Any Authority to Act on P2P Blocking
The man who spoke for Comcast at Harvard last month has told the Federal Communications Commission that the agency has no legal power to stop the cable giant from engaging in what it calls "network management practices" (critics call it peer-to-peer traffic blocking). Comcast vice president David L. Cohen's latest filing with the Commission claims that regulators can do nothing even if they conclude that Comcast's behavior runs afoul of the FCC's Internet neutrality guidelines.
"The congressional policy and agency practice of relying on the marketplace instead of regulation to maximize consumer welfare has been proven by experience (including the Comcast customer experience) to be enormously successful," concludes Cohen's thinly-veiled warning to the FCC, filed on March 11. "Bearing these facts in mind should obviate the need for the Commission to test its legal authority."
Should we read "test" as in "test an FCC Order on ISP network management in Federal court"? Cohen presented Comcast's case at the FCC's February 25th net neutrality hearing, held at Harvard Law School. Whatever the merits of his March 11 claims, they should be examined carefully. They may represent the framework for a legal challenge against any action the FCC takes to protect consumers.
Cohen's arguments fall along three main points.
Congress has not given the FCC authority to act on this matter
The Federal Communications Commission has made clear, Cohen writes, that cable service is not a common carrier and therefore is not subject to common carrier guidelines. Cohen summons the FCC's 2002 Cable Modem Declaratory Ruling to back up this argument. The ruling invoked language contained in the Telecommunications Act of 1996 to characterize cable as an "information service" rather than a common carrier. After a long court battle, the Supreme Court backed up the FCC in its 2005 Brand X decision. The high court rejected the Brand X ISP's plea that if the FCC did not attach common carrier status to cable, cable providers could exclude smaller competing ISPs from accessing larger networks.
Comcast argues that Cable Modem settles the question: "Any attempt to justify an injunction on Comcast based on a statutory provision that is explicitly limited to common carriers would violate the Communications Act and be arbitrary and capricious," Cohen writes.
The FCC's Internet Policy Statement does not give the agency the authority to deal with the issue
The 2005 statement pledges the FCC to ensure that ISP services are operated "in a neutral manner." But Cohen insists that said declaration has no force of law. "It is settled law that policy statements do not create binding legal obligations," he argues. "Indeed, the Internet Policy Statement expressly disclaimed any such intent."
Actually, the statement declared that the FCC has the "jurisdiction necessary to ensure that providers of telecommunications for Internet access or Internet Protocol-enabled (IP-enabled) services are operated in a neutral manner." On the other hand, FCC Chair Kevin Martin issued a comment insisting that "while policy statements do not establish rules nor are they enforceable documents, today’s statement does reflect core beliefs that each member of this Commission holds regarding how broadband internet access should function." Cohen is obviously hanging his legal hat on this ambiguity.
Regulating Comcast's ISP policies may violate the Administrative Procedure Act (APA)
This is a generic protest found in many FCC filings. Congress enacted the Administrative Procedure Act in 1946 to establish uniform rules and guidelines for governmental agencies, given the enormous expansion of the executive branch over the previous dozen years. Cohen writes that the FCC is bound by the APA "not to act in an arbitrary and capricious manner" and that the law "does not permit the Commission to switch abruptly from an explicit policy of relying on market forces to a new regime in which the decisions that Internet service providers make in real-time in a dynamic marketplace are subject to governmental second-guessing and disruption."
Actually, the APA mentions "arbitrary and capricious" behavior only once—and in reference to the withholding of government documents from litigants.
The real question is how much significance to attach to any of these arguments. Legal saber rattling? Maybe. They could also pose a warning to the FCC to expect a lawsuit following any action against ISP P2P blocking. FCC Chair Martin says he hopes to finish his investigation of Comcast by late June.
ISP Quarrel Partitions Internet
There's a rift in the internet, pitting the Swedes and the Yanks against one another and making it difficult for millions to visit websites hosted across the Atlantic.
U.S.-based Cogent Communications shut down their links to the Swedish-based ISP Telia last Thursday in what Cogent describes as a contract dispute about the size and locations of the pipes connecting the two ISPs.
Like many large ISPs, Cogent and Telia interconnect their networks at multiple points and trade roughly equivalent amounts of traffic, an arrangement called peering.
The feud has continued through Tuesday, making it impossible for Swedes, along with other Nordic and Baltic residents, to reach sites hosted on Cogent's network and vice versa.
That's likely much more of a customer service problem for Telia than Cogent, due to geography and the size of Cogent's network (see illustration above).
Cogent broke up with Telia for the good of the internet, according to Cogent spokesman Jeff Henrikson.
Telia wasn't providing fat enough pipes at some peering locations and wouldn't fix the problems, according to Henrikson.
"Some traffic flow was impeded and some traffic was redirected further than it needed to go," Henrikson said. "They weren't responding to requests to comply with our contract, and we weren't left with much alternative but to terminate the contract."
Henrikson said Cogent is willing to get back together with Telia, but only if they fix the problems.
"This will lead us to having a stronger and better internet with full levels of traffic flowing freely across our networks," Henrickson said.
At first Telia customers got to sites hosted by Cogent through alternate paths, but that workaround was killed quickly, according to the analysis of internet bandwidth watcher Earl Zmijewski of Renesys.
A VOIP phone call from THREAT LEVEL to Telia for comment did cross the Atlantic and ring the 24-hour hotline, but no one answered.
Om Malik caught this story Friday, and his comment section is full of angry Swedes and Finns ready to take up arms. If only they could reach their Facebook pages to organize...
Independence at stake
Activists Can Nominate CNet Board, Judge Says
Michael J. de la Merced
A court in Delaware ruled Thursday that a group of activist investors can nominate seven directors to the board of CNet Networks, one of the original online media companies.
The ruling by William B. Chandler III of Delaware’s Court of Chancery — in which he describes the fight as “a tempest in a teapot” — opens the door for the investors, led by the hedge fund Jana Partners, to try to take over CNet’s board.
“We hope that the company will now put aside their efforts to thwart this debate with technicalities and instead engage stockholders in a dialogue about the company’s future,” Barry Rosenstein, Jana’s managing partner, said in a statement.
Shares in CNet rose 26 cents, to $7.46.
The chancellor’s ruling is the latest development in a fight over CNet, whose shares have fallen 9.6 percent over the last year as it continues to be outmaneuvered by competitors. Jana announced in January that it was nominating seven directors to the company’s board, which would replace two directors up for re-election and add five more.
CNet had argued that, according to its bylaws, no shareholder can propose amending those rules unless it has owned $1,000 worth of shares in the company for at least one year.
No investor in Jana’s group has owned shares that long, but the hedge fund argued that the rule applied only if it sought to include its proposal in CNet’s proxy solicitation materials. Jana said it would publish its own proxy solicitations and that the bylaw did not apply. Chancellor Chandler ruled in its favor.
Jana, an $8 billion fund, is known for its shareholder activism, having taken on companies like TD Ameritrade and Alcoa. In its fight against CNet, it has enlisted other investors like Sandell Asset Management, another hedge fund, and Spark Capital, a venture capital firm.
Jana currently holds almost 11 percent of CNet’s shares, according to regulatory filings. Sandell owns 2.3 percent and Spark about 1.7 percent.
CNet said in a statement that it was considering an appeal. CNet also said that the ruling, if upheld, did not remove other obstacles for Jana and its allies. The company pointed to a provision in its bylaws requiring a supermajority vote of about 67 percent to seat more than two directors.
Times Co. to Give Seats to Hedge Funds
The New York Times Company has struck a deal with a pair of hedge funds that want to shake up the company, giving the funds two seats on the board in order to avoid a proxy fight, the two sides announced Monday.
The agreement with Harbinger Capital Partners and Firebrand Partners marks the first time since the Times Company went public in 1967 that it has accepted directors nominated by outsiders, Times Company executives said.
It also settles, for now, the most serious bid the company has faced to loosen the control of the chairman, Arthur Sulzberger Jr., and his family. The funds have amassed 19 percent of the company’s common stock, which may be the largest stake any non-family shareholder has held in those four decades.
The new arrangement could make for some uncomfortable internal politics, but it is not clear that it will have any effect on the company’s direction. A two-class stock structure gives the Sulzberger family undisputed control of a majority of the board, and the Harbinger-Firebrand group has said that it has no plan to challenge that control.
In a statement released by the company, Mr. Sulzberger said, “Both the board and management welcome the perspectives and insights of our proposed new directors.”
In the same statement, Philip A. Falcone, senior managing director of Harbinger, said, “Our nominees look forward to working with the other directors and management to build and deliver value for all shareholders.”
The hedge funds have argued that the company should sell many of its assets — including, possibly, the headquarters building in Manhattan, The Boston Globe, some smaller newspapers and a minority stake in the Boston Red Sox — and invest aggressively in Internet companies. But the funds have also been careful not to criticize management directly, and have said that once they are privy to inside information, they may have a different view of the company’s strategy.
A person close to the funds’ leaders said that the Harbinger-Firebrand team could have won a proxy fight, but that the effort would have been expensive and damaging to relations with management. He was given anonymity because he was not authorized to discuss their strategy.
What they want is “a seat at the table,” he said, and “to understand the board and the management’s thinking, and add expertise and horsepower to their thinking.”
Janet L. Robinson, the Times Company’s chief executive officer, has said repeatedly that the company was always open to asset sales and Internet purchases; it sold its television stations last year, and has bought a handful of online companies, including About.com, in the last three years. But she insists that the company must act prudently, not selling just to sell or buying just to buy.
Analysts have generally supported the company’s recent strategy, though some would like to see deeper cost-cutting and have cast doubt on the hedge funds’ proposals.
The funds had nominated candidates for all four director seats elected by holders of Class A stock. Under the pact, the company agreed to nominate two of them — Scott Galloway, a founder of Firebrand and the leading strategist of the hedge funds’ bid, and James Kohlberg, chairman of Kohlberg & Company.
A Sulzberger family trust owns almost 90 percent of the company’s Class B stock, which is not publicly traded, and has the sole power to vote on most of the board seats.
Under the truce with the hedge funds, the number of directors elected by Class B stock will rise from 9 to 10. The number of Class A directors will rise from 4 to 5. William E. Kennard, who was one of the company’s original Class A director nominees, will instead become a Class B nominee.
Until now, the family and its allies on the board have effectively controlled the Class A election as well as the Class B.
But in recent years, a falling stock price, a sharp downturn for the newspaper industry and mounting shareholder discontent have relaxed that grip as never before. In 2006, 30 percent of Class A shareholders withheld their votes for directors, and last year, 42 percent did so. The major shareholder that led that campaign, Morgan Stanley Investment Management, gave up its attempts to force a change in direction and sold its stake in the company.
Times Company stock, which peaked above $50 a share in 2002, has mostly traded between $15 and $20 since last October. That opened the door for the Harbinger-Firebrand partnership to buy up more than twice as many shares as Morgan Stanley held.
The company reported earnings of $209 million on $3.2 billion in revenue last year. Like most major newspaper companies, its advertising revenue was sharply lower in 2007, and has continued to fall in 2008. The industry is suffering the twin blows of a long-term shift of readers and advertisers to the Internet and a downturn in the overall economy.
Web Has Unexpected Effect on Journalism
The Internet has profoundly changed journalism, but not necessarily in ways that were predicted even a few years ago, a study on the industry released Sunday found.
It was believed at one point that the Net would democratize the media, offering many new voices, stories and perspectives. Yet the news agenda actually seems to be narrowing, with many Web sites primarily packaging news that is produced elsewhere, according to the Project for Excellence in Journalism's annual State of the News Media report.
Two stories - the war in Iraq and the 2008 presidential election campaign - represented more than a quarter of the stories in newspapers, on television and online last year, the project found.
Take away Iraq, Iran and Pakistan, and news from all of the other countries in the world combined filled up less than 6 percent of the American news hole, the project said.
The news side of the business remains dynamic, but the industry is going through tough times because news consumers are increasingly able to find what they want without being exposed to advertising.
"Although the audience for traditional news is maintaining itself, the staff for many of these news organizations tend to be shrinking," said Tom Rosenstiel, the project's director.
NBC News' recent decision to make David Gregory host of a nightly program on MSNBC, while keeping his job as White House correspondent, is an example of how people are being asked to do much more, he said.
News is less a product, like the day's newspaper or a nightly newscast, than a service that is constantly being updated, he said. Last week, for instance, The New York Times posted its first report linking New York Gov. Eliot Spitzer to a prostitution ring in the early afternoon, and it quickly became the day's dominant story.
Only a few years ago, newspaper Web sites were primarily considered an online morgue for that day's newspaper, Rosenstiel said.
"The afternoon newspaper is in a sense being reborn online," he said.
A separate survey found journalists are, to a large degree, embracing the changes being thrust upon them. A majority say they like doing blogs and that they appreciate reader feedback on their stories. When they're asked to do multimedia projects, most journalists find the experience enriching instead of feeling overworked, he said. The newsroom is increasingly being seen as the most experimental place in the business, the report found.
Most news Web sites are no longer final destinations. Many users insist that the sites, and even individual pages, offer plenty of options to navigate elsewhere for more information, the report found. Rosenstiel said he's even able to reach Washington Post stories through the New York Times' Web site.
In another unexpected finding, citizen-created Web sites and blogs are actually far less welcoming to outside commentary than the so-called mainstream media, the report said.
Outrage at Cartoons Still Tests the Danes
“I think this is safe house No. 5,” Kurt Westergaard said the other day, and it was clear that he genuinely had lost track.
Last month the Danish police arrested two Tunisians and a Dane of Moroccan descent on charges of plotting to kill Mr. Westergaard, one of the 12 cartoonists whose pictures of Muhammad in the Danish newspaper Jyllands-Posten sparked protests, some of them violent, by Muslims around the world in 2006 and put bounties on the heads of Mr. Westergaard and his editor, Flemming Rose. Mr. Westergaard (he drew Muhammad with a bomb in his turban) has been in hiding ever since.
Americans, for whom the presidential election seems to have become a delirious, unending sport preoccupying their attention, turn out not to be the only ones who preferred to forget about the cartoons. So had many Danes and fellow Europeans. They were shocked by the arrests.
In the days shortly after, 17 Danish newspapers, having declined to publish the offending cartoons two years ago, declared solidarity with Mr. Westergaard and printed them. This, naturally, provoked a fresh round of protests from Gaza to Indonesia.
In Egypt the speaker of the Parliament claimed Danes had violated the Universal Declaration of Human Rights, which seemed a little rich coming just a few weeks after the European Parliament, which itself complained about the cartoons’ re-publication, condemned Egypt for the sorry state of its human rights.
Meanwhile demands in Afghanistan for the instant withdrawal of Danish troops under NATO’s command and the severing of all diplomatic ties with Denmark caused Denmark’s foreign minister, Per Stig Moeller, to reply that it was becoming difficult for him “to put Danish soldiers’ lives in danger” to support a country “where one is at risk to be condemned to death for values that we believe to be an inseparable part of democracy and the modern world.”
And then, while it still seemed just a Danish problem, trouble spread. A gallery in Berlin was shut because an exhibition of satirical art by a Danish group called Surrend, which has previously produced works mocking neo-Nazis, caused several angry Muslim visitors to threaten violence unless a poster depicting the Kaaba, the shrine in Mecca’s Grand Mosque, was removed.
Two years earlier, in the wake of the original cartoon imbroglio, a Berlin opera company canceled performances of Mozart’s “Idomeneo” when police warned the company that a scene with the severed head of Muhammad, among other religious figures, posed “incalculable risk” to the performers and audience. Cries of self-censorship erupted across Europe.
This time around Germany’s interior minister, Wolfgang Schäuble, a politician who has been conspicuous in working to improve relations with Muslims in Germany, was reported to have urged other newspapers in Europe to reprint the cartoons, a remark he strongly denied making, which made no difference to the Saudi newspaper Al-Watan.
“The German minister is required to immediately withdraw his statement,” Al-Watan demanded. Racism, not freedom of speech, was obviously behind Germany policy, the newspaper added. After all, Germans aren’t free to “discuss the Jewish Holocaust.”
And everybody knew what that meant.
Now many Europeans seem fed up. Over dinner in Copenhagen recently, Mr. Rose, who has made something of a second career out of the cartoon fallout, said it all came late but was inevitable.
“At the time, in 2006, there were good journalistic reasons for other newspapers to publish the cartoons because few people had seen them then, so they were news,” he said. “Now the journalistic justification is almost nonexistent because everyone knows what they look like, so it’s more about solidarity than about news.”
Unlike Mr. Westergaard, Mr. Rose doesn’t live in safe houses, although he long ago removed his name from the local telephone directory and has learned that a different Flemming Rose (there are apparently several in Denmark) decided to change his name.
“It was not about mocking a minority but a religious figure, the Prophet, so it was blasphemy, not racism,” Mr. Rose said of the cartoons. “The idea of challenging religious authority led to liberal democracy, whereas the singling out of minorities, as minorities, led to Nazism and the persecution of the bourgeoisie in Russia. So this distinction is crucial to understand.”
Years spent as a student and a newspaper correspondent in the Soviet Union shaped Mr. Rose’s philosophy. There he saw how “the concept of universal values was crucial to the dissident culture, and I saw what censorship meant,” he said. “I saw that values were not relative between Western society and the Soviets.”
The Soviets, he noted, had a law in their penal code outlawing defamation of the Soviet way of life. Blasphemy laws in Muslim countries today “have the same purpose of silencing dissident voices,” he said. “Free speech does not extend to libel, invasion of privacy and incitement to violence.” But “a distinction must be made between words and deeds,” he insisted. “Images are open to interpretation, they’re different from words.”
Mr. Westergaard put it differently: “Cartoons always concentrate and simplify an idea and allow a quick impression that arouses some strong feeling.”
He recalled a cartoon he did years ago to complement an article defending Palestinians against Israelis, “not because this was my belief but because my job was to illustrate the views in this article, and I showed a Palestinian wearing a yellow star with ‘Arab’ on it.” He continued: “Many people called to protest. One man said I had abused a Jewish symbol. We talked for a long time and finally accepted each other’s viewpoint.” It was the talking, he said, that mattered.
Did he go too far that time?
“Looking back,” he said, “perhaps I should have made a cartoon that did not use the yellow star.”
But then why Muhammad and not a star?
“Because millions of Jews died in camps wearing that star.”
Which is obviously the wrong answer for those who have put a price on his head. “I have always been an atheist, and I dare say these events have only intensified my atheism,” he said. “But the same clash would eventually have occurred over some book or a play. It was waiting to happen.”
He brought a cartoon that he had recently revised. In it Jesus, wearing a suit and tie, strides from the cross on which a sign hangs: “Service hours, Sunday, 10-11, 2-3.” Mr. Westergaard recently added an imam watching Jesus walk away.
He agreed to meet at Jyllands-Posten, the newspaper, from which he’s now semi-retired. Tall, broad-shouldered, with a salt-and-pepper beard, at 72 he’s like a Scandinavian sailor out of central casting but dressed, as usual, in fire-engine red pants, a patterned red scarf and a Sgt. Pepper black coat — clearly an act of sartorial defiance. When asked about Mr. Westergaard’s general approach to the last two years, Mr. Rose, with awe, said, “Calm.”
As it happens, most of the dozen cartoonists are older and, like Mr. Westergaard, closer to the generational ethos of 1968 than to the cultural relativism of later generations. A Social Democrat, Mr. Westergaard ran a school for severely disabled children before he became a cartoonist. He likes to point out that Himmerland, the region of Denmark where he was born, was home to a race of warriors: “There were also Danes among the Crusaders.”
He knows it’s a loaded reference. “Is this another Crusade now, or what is it?” he asked.
Then he answered himself: “In Denmark there is a culture of radicalism, a skepticism toward authority and religion. It’s part of our national character.” Years of relativism, during which Danes felt they “had no right to ask anyone else to live like us,” ended with the cartoons, he said. But he’s less sure than Mr. Rose about the degree of progress, conceding that recent gains by Denmark’s anti-immigrant party “are an unfortunate setback due to all this.”
Now he’s accustomed to being (and maybe, who is to say, even slightly enjoys his status as) an accidental celebrity with a soapbox. “Disagreement is an essential part of democracy,” he said. “I want to explain my sense of this clash between two cultures because I have grandchildren who will grow up in this multicultural society. The Danes are tolerant people. They don’t deserve to be treated like racists.”
He added: “This will go on for the rest of my lifetime, I am sure. I will never get out of this. But I feel more anger than fear. I’m angry because my life is threatened, and I know I have done nothing wrong, just done my job.”
“Anger,” he said, smiling, “is the best therapy.”
Vatican Security Worries Over Bin Laden Tape
The Vatican on Thursday rejected an audiotaped accusation from Osama bin Laden that Pope Benedict XVI was leading a “new Crusade” against Muslims, but Italian security officials were concerned about the threats included in Mr. bin Laden’s new message.
“These accusations are absolutely unfounded,” the Rev. Federico Lombardi, the pope’s chief spokesman, said in a telephone interview. “There is nothing new in this, and it doesn’t have any particular significance for us.”
The audio message attributed to Mr. bin Laden was released Wednesday night and was addressed to “the intelligent ones in the European Union.” It was posted on a militant Web site on Wednesday, and an English transcription was distributed Thursday by the SITE Intelligence Group in Bethesda, Md., which tracks postings by Al Qaeda on the Internet.
The audiotape listed broad grievances, but specifically mentioned the pope, and coincided with the busiest week of the year at the Vatican, the week leading up to Easter Sunday. The pope, who turns 81 next month, will appear at several public events, including the annual Good Friday procession of the Stations of the Cross at the Colosseum.
In the five-minute message, the speaker said there would be a “severe” reaction against the publication in Europe of cartoons many Muslims considered offensive to the Prophet Muhammad. He said the cartoons — one reprinted last month in Denmark, more than two years after they were first published there — “came in the framework of a new Crusade in which the pope of the Vatican has played a large, lengthy role.”
Without naming any specific action or target, the speaker said, “The response will be what you see and not what you hear, and let our mothers bereave us if we do not make victorious our messenger of God.”
Father Lombardi dismissed the accusations, noting that the pope had condemned the cartoons several times and stressed that “religion must be respected.”
Al Qaeda and its supporters have issued several threats against the pope since he quoted a medieval Byzantine emperor in a speech in Germany two years ago referring to Islam as “evil and inhuman.” The pope apologized for the anger caused by the speech, saying that the view expressed in the quotation was not his own.
Asked about heightened security concerns, Father Lombardi said the recording “does not in any way affect the conduct of the pope.” He said the Vatican had no plans for any more security than what was already in place for the public events leading up to Easter.
An Italian security official, however, was quoted anonymously on Thursday by the ANSA news agency as saying that officials were taking the threats “seriously.” A spokesman for the nation’s Interior Ministry declined to answer specific questions about the threat, referring a reporter to the ANSA report.
The report stated that Italian antiterrorism officials would meet Friday to analyze the tape. It said the message would be “examined with attention to the threat surrounding the pope.” Though the Vatican is technically a separate and sovereign state, Italy provides policing and security for the pope.
After violent demonstrations against his speech two years ago, the pope has called frequently for dialogue between Christians and Muslims. A delegation of Muslim scholars met with Vatican officials this month to prepare for broader talks between the pope and Muslim representatives later this year.
Benedict XVI is scheduled to make his first visit as pope to the United States from April 15 to 20, with stops in New York and Washington. The Secret Service and the New York Police Department, responsible for the pope’s security on the trip, had no comment on the bin Laden audiotape.
Paul J. Browne, a deputy police commissioner in New York, said in an e-mailed statement that the department “has been working closely with the United States Secret Service to provide the highest level of protection possible for the pope during his visit to New York.”
Daniele Pinto contributed reporting.
Facebook Adds Privacy Controls, Plans Chat Feature
Facebook said on Tuesday it is introducing new privacy controls that give users of the fast-growing social-network site the ability to preserve social distinctions between friends, family and co-workers online.
Facebook executives told reporters at the company's Palo Alto, California, headquarters of changes that will allow Facebook's more than 67 million active users worldwide to control what their friends, and friends of their friends, see.
The Silicon Valley company was founded in 2004 as a social site for students at Harvard University and spread quickly to other colleges and eventually into work places. Its popularity stems from how the site conveniently allows users to share details of their lives with selected friends online.
While part of Facebook's appeal has been the greater degree of privacy controls it offers users compared with other major social network sites, the site has also been the target of two major rebellions by its users in response to new features many felt exposed previously private information to wider view.
Matt Cohler, Facebook's vice president of product management, told reporters the company was seeking to evolve beyond the simple privacy controls originally aimed at relatively homogenous groups of college-age users.
"We have a lot more users, a lot more types of users, a lot more relationships, we have a lot more types of relationships," Cohler said.
But only 25 percent of existing users have bothered to take control of their privacy using Facebook's existing personal information settings, the company said in a statement.
Use of Facebook has exploded fivefold over the past year and a half. Two-thirds of its users are now located outside the United States compared with about 10 percent 18 months ago, when most members were student age and in the United States.
Facebook members will be able to control access to details about themselves they share on the site at a group-level by creating and managing lists of friends that are granted different levels of access to such information. Users already control what individual friends see on a member's profile.
The new privacy controls will be introduced in the early morning hours of Wednesday, California time, the company said.
The group privacy controls take advantage of "friends lists," a feature the company introduced in December that helps members organize friends in their network into groups. These private lists allow users to target messages to selected friends or filter what personal details those groups see.
Users can create up to 100 different "friends lists."
Late last year, Facebook allowed users to turn off a controversial feature called Beacon that monitors what Web sites they visit and Chief Executive Mark Zuckerberg apologized for not responding sooner to privacy complaints.
Beacon is a way to keep one's network of friends on Facebook informed of one's Web surfing habits. Critics argued this transformed it from a members-only site known for privacy protections into a diary of one's wider Web activities.
The company backed down in response to a petition signed by 50,000 Facebook users to scale back the Beacon feature.
Cohler said the company faces what he called a "classic Silicon Valley dilemma" between adding new features, making sure they are easier to use by the widest number of people, while also protecting members from unexpected personal revelations.
In addition, the company confirmed recent reports it is working on a new instant messaging chat feature that runs inside Facebook, allowing users to hold spontaneous back-and-forth chats with their friends on the site.
Facebook Chat, as the feature is known, will be introduced in a matter of weeks, the company said. It works inside a Web browser without requiring that users download any special software, akin to services such as Meebo.com that allow one-on-one chats.
(Editing by Andre Grenon)
How Yahoo Lost its Way
Elise Ackerman and Pete Carey
Almost eight years ago, Yahoo decided to lend a little start-up a helping hand, featuring its search technology on the Yahoo home page and giving it money at a critical juncture.
In cutthroat Silicon Valley, no good deed goes unpunished.
The start-up was Google, and Yahoo's generosity helped launch the most formidable competitor it had ever encountered. Now facing a takeover attempt by Microsoft, Yahoo is coming to terms with the punishing consequences of its complex relationship with Google, including a futile attempt to copy Google's extraordinarily profitable advertising model at significant cost to Yahoo's own business.
Long before the world learned that Google had turned the Internet into an amazing money-minting machine, Yahoo knew.
When Google was still a private company, it sent its financial statements to Yahoo's headquarters in Sunnyvale like clockwork. Google had to because Yahoo was one of its earliest investors.
The statements showed the incredible growth of Google's search advertising business, with sales more than doubling from quarter to quarter.
But Yahoo executives didn't focus on the money; they were interested in how much traffic was being driven by search, recalled Ellen Siminoff, an executive who joined Yahoo in 1996.
In 2000, Yahoo agreed to use and promote Google, which it touted as "the best search engine on the Internet." Google co-founder Larry Page described the pact as a "major milestone."
The following year, Yahoo was even more generous, paying Google $7.2 million for its services. (Google in turn paid Yahoo $1.1 million for promotional help.) Google desperately needed the money, which helped push it into the black for the entire year.
Yet Yahoo was hardly flush with cash. After two years of profit, Yahoo reported an annual loss of $93 million in 2001. The value of its stock had collapsed from $118.75 a share in January 2000 to $4.05 in September 2001.
Meanwhile, Yahoo's promotional push was having an effect on Google. "When we were turning the business around in 2001, Google was already becoming the ascendant player in Europe, especially in the U.K., which is one of the most important advertising markets," recalled L. Jasmine Kim, a former vice president for global marketing and sales development for Yahoo.
Members of the international team tried to telegraph their concern to Sunnyvale, Kim recalled. "In hindsight, it is easy to say we should have seen it as we did discuss our concerns, but technology moves at the speed of light. The game changed."
Yahoo management did not respond to questions about the company's relationship with Google.
The paranoid survive
The tech industry's giants - like Microsoft, Intel and Oracle - are famous for ruthlessly dealing with competitors. Not Yahoo.
In 2002, Yahoo paid Google $13.2 million, equivalent to more than a quarter of Yahoo's annual profit of $43 million. The sum, however, meant less to Google, which had blown past its benefactor with an annual profit of about $100 million.
But the price of coddling Google would be much higher, as Yahoo soon discovered.
In May 2001, Yahoo replaced Chief Executive Tim Koogle, a folksy, guitar-playing engineer, with Terry Semel, a veteran Hollywood deal maker who had rarely used e-mail.
Semel may not have been a technology guru, but he knew search would be key to Yahoo's success.
He also realized Yahoo had a big problem: It had neither its own search technology nor the software for handling search advertising.
Semel's first move: He struck a deal with Pasadena-based Overture Systems to provide ads for Yahoo's search results. Then he tried to buy Google.
After those talks fizzled in 2002, Semel acquired Inktomi, a search engine, for $235 million in December 2002. Seven months later, he bought Overture for $1.6 billion.
The deals gave Yahoo a huge boost. In 2004, revenue doubled and profit more than tripled. Yahoo's stock vaulted from $16.10 a share on the day the Overture deal was announced July 14, 2003, to $37.68 at the end of 2004.
In early 2005, when both companies reported their 2004 earnings, it seemed like Yahoo and Google might even be neck and neck. While Google had revved its profit engine harder - for an increase of 276 percent compared with Yahoo's 252 percent - Yahoo boasted a larger increase in sales - 119 percent to Google's 113 percent.
"What's even more important than exceeding any one of our financial targets, however, is the way in which we've achieved them, and they do flow from a robust foundation," Chief Financial Officer Sue Decker told analysts in January 2005.
Semel and Decker told Wall Street that Yahoo was in a better position than Google because it sold both search advertising, ads triggered by search queries, and display advertising, image-based ads that appear as banners or other graphical elements on a Web page.
The executives explained that Yahoo could cross-sell the two kinds of advertising and be a one-stop shop for the world's biggest brands.
"It became clear over the course of a year that there wasn't anything to that," said Mark Mahaney, an analyst with Citigroup.
It turned out Yahoo's happy ending was more Hollywood than reality.
While Yahoo's business had certainly improved, it was nowhere near catching Google.
Yahoo's revenue in 2004 had doubled largely as a result of the acquisition of Overture. And its profit that year was swollen by the sale of $400 million of Google stock.
Yahoo sold its remaining stake in Google, roughly 4.2 million shares, the following year for nearly $1 billion, again boosting its profit.
Executives continued to tout Yahoo's financial performance. "I am very proud of the remarkable growth and progress Yahoo has demonstrated throughout this past year," Semel said on Jan. 17, 2006.
But shares fell 13 percent the next day as investors responded to the news that core profit was a penny less than analysts had expected.
And the rate of sales growth fell by more than 60 percent, exposing the isolated bump Yahoo's sales had received from buying Overture.
Yahoo's stock plunge
During the next two years, Yahoo's stock plunged an additional 45 percent. On Jan. 31, the day before Microsoft made its bid, Yahoo traded at about $19 a share, the same level it traded at in fall 2003.
In 2007, Yahoo introduced new software that boosted the amount of money it earned from search advertising. Investors had waited years for the project known as "Panama" to be completed. The technology was seen as key to Yahoo regaining competitiveness.
But Panama didn't come close to closing the gap with Google. Yahoo's sales that year were almost $7 billion, compared with $16.6 billion for Google.
Worse, Yahoo said it expected to grow only about 10 percent the following year. (Google doesn't forecast future growth; however, its revenue increased 57 percent in 2007.)
"We are disappointed with guidance and don't expect investors to have confidence in management's investment decisions," Rob Sanderson of American Technology Research wrote in a note Jan. 30.
Unable to compete with Google in search advertising, Yahoo also appeared to be losing its edge as the leader in display advertising.
In September 2006, Decker and Semel warned they were seeing weakness in display advertising related to financial services and cars. "We think it is kind of early to tell whether this is a sign of anything broader," Decker cautioned.
In fact, it was the beginning of a long slide. Analyst Doug Anmuth of Lehman Brothers estimates that the growth of Yahoo's display business dropped by half, from 33 percent in 2006 to 16.5 percent in 2007.
The main cause of the decline was increased competition, especially from social-networking sites like MySpace and Facebook.
However, plenty of former Yahoos also blamed top management in Sunnyvale. "They were concentrated on two things: technology and technology," said Jerry Shereshewsky, a senior Yahoo marketing executive who left the company in summer 2007 and is now CEO of Grandparents.com.
"Yahoo as a company never really understood they were first and foremost in the media business supported almost wholly by advertising."
More than a dozen former employees from other divisions also faulted management as indecisive; they asked not to be quoted by name because speaking out might hurt future business opportunities.
Other complaints: Upper management was plagued by cronyism. Even when new ideas got a green light, the projects were starved of resources. There were too many people with inflated titles, too many business units and too little cooperation among them.
Among the missed opportunities, former employees said, was a chance to buy Facebook when Mark Zuckerberg was still enrolled at Harvard University and open to a deal.
Around the same time, business development people at Yahoo unsuccessfully tried to stir up interest in MySpace. But senior executives wanted major deals that would "move the needle," said a former employee. MySpace was too small.
In 2006, an effort to buy YouTube foundered when Yahoo insisted on a clause in the contract that gave Yahoo an out if the video-sharing site was sued. Google agreed to remove the clause and got YouTube.
By 2007, the social-networking sites would be big enough to challenge Yahoo for display advertising. According to industry estimates, MySpace ranked second only to Yahoo in revenue from display advertising and page views.
And YouTube was the Web's undisputed top video destination, despite Yahoo's efforts to compete.
Google's newest move
Last week, Google announced its acquisition of DoubleClick, which serves primarily display advertising on Web pages owned by major publishers.
"At best, it's alarming for Yahoo," said Jim Barnett, CEO of Turn, which provides software for optimizing display advertising. "Google already has the dominant position in search, and this puts Google in a very advantageous market position to take share in display."
Here, too, Yahoo missed an opportunity. After the dot-com crash in 2000, Yahoo briefly considered developing similar software, but decided against it.
Since November 2006, Yahoo has been working to build software that will serve targeted display advertisements to members of a consortium that now includes more than 600 newspapers, including the Mercury News.
The software has been expected to provide a major boost to Yahoo and its newspaper partners. But it is still not ready. By the time it is finished, Yahoo could well be a division of Microsoft.
Google February Search Share Down Globally
Google Inc's share of the global Web search market dipped in February from January, even as its U.S. share rose, Internet financial analysts said on Wednesday, citing market research data.
The data from comScore showed Google's dominance of the worldwide market for Web search dipping to 62.8 percent in February from 63.1 percent the month before, according to an analyst, who declined to be named.
Analysts view the monthly comScore search market data as an indicator on growth trends in Web search. Several recent monthly reports have sparked debate on Wall Street over whether the market is maturing, even though year-to-year growth rates remain high.
The volume of U.S. searches done through Google dropped to 5.86 billion in February from 6.14 billion in January, and the worldwide volume of searches also declined, comScore said on Wednesday.
"We are continuing to see deceleration in growth in Web search," said Jefferies & Co analyst Youssef Squali. "Google's month-over-month 5 percent decline is a little surprising, but all of the major Web search names were down."
The drop was partly due to February being two days shorter than January, a comScore spokesman said. However, several analysts said it may also reflect a maturing market. By contrast, searches rose 9 percent in January over December.
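The calendar effect is easy to check against the article's own figures. A quick back-of-the-envelope sketch (the search volumes are the comScore numbers cited above; the day counts for January and February 2008 are assumed, 2008 being a leap year):

```python
# comScore U.S. search volumes cited above, in billions
jan_searches = 6.14
feb_searches = 5.86

# Assumed day counts for January and February 2008 (leap year)
jan_days, feb_days = 31, 29

# Raw month-over-month decline -- roughly the "5 percent" analysts cite
decline = (jan_searches - feb_searches) / jan_searches
print(f"month-over-month decline: {decline:.1%}")  # -> 4.6%

# Normalizing by the number of days shows how much of the drop the
# shorter month alone accounts for
print(f"searches per day, Jan: {jan_searches / jan_days:.3f}B")
print(f"searches per day, Feb: {feb_searches / feb_days:.3f}B")
```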
Amid a steep decline in the broader market, Google shares closed off $7.16, or 1.6 percent, to $432, while Yahoo Inc fell 59 cents, or 2.1 percent to $27.07. The Nasdaq Composite index slid 2.6 percent.
Investors comb comScore's monthly search data for clues to growth trends in Google's core business of online advertising tied to such Web searches, and to watch whether any rivals can slow the Google juggernaut. ComScore released summary data on U.S. search market trends, but worldwide data is available only to paid subscribers.
With search statistics becoming more unpredictable month to month, investors have begun to focus on how well Google is converting Web searches into ad viewership.
Data on growth in "paid clicks," or the number of Web search ads viewed in February, are expected to be released by comScore to clients later this week or early next week.
According to comScore data, Google's U.S. share among the top five Web search providers grew to 59.2 percent in February from 58.5 percent in January. Yahoo Inc's U.S. share dipped to 21.6 percent from 22.2 percent.
Microsoft Corp, rated No. 3 in U.S. Web search, slipped to a 9.6 percent share in February from 9.8 percent in January. No. 4 AOL, a unit of Time Warner Inc, and No. 5 Ask.com, a unit of IAC InterActiveCorp, held steady at 4.9 percent each.
Citigroup analyst Mark Mahaney, in a research note, blamed Google's decelerating growth in recent months on the maturing computer-based Web search market.
"Google's U.S. query growth of 26 percent marked a deceleration versus 37 percent growth in January and 40 percent growth in Q4," Mahaney said.
ComScore, which tracks online audiences in 20 major Internet markets around the world, estimated Web surfers performed 66 billion searches overall in December, 71.9 billion in January and 67.4 billion in February.
Google's 62.8 percent share of the worldwide market compared with 58.5 percent in February 2007, according to a reading of comScore data by the unidentified analyst. Similarly, Yahoo's global share dropped to 11.9 percent in February from 12.2 percent in January.
Chinese Internet search leader Baidu.com Inc slipped to 4.5 percent in February globally from 4.6 percent in January, while Microsoft's share was steady at 3.1 percent, the analyst said, citing comScore data.
(Reporting by Eric Auchard; Editing by Gerald E. McCormick/Jeffrey Benkoe)
Firefox 3 Goes on a Diet, Eats Less Memory than IE and Opera
In our recent coverage of the Firefox 3 beta releases (1, 2, 3, 4), we have noted performance improvements and a significant reduction in memory consumption relative to Firefox 2. The enormous amount of effort that developers invested in boosting resource efficiency for Firefox 3 has paid off, and the results are very apparent during day-to-day use.
During intensive browsing with approximately 50 tabs, I have found that Firefox 3 generally consumes less than half of the memory used by Firefox 2. Firefox 3 is also snappier and more responsive when switching between tabs and performing other operations that typically lag in Firefox 2 when the browser is experiencing heavy load.
Mozilla developer Stuart Parmenter has written an overview of the tactics that were used to reduce Firefox's memory footprint and also reveals the results of a memory benchmark he performed to compare Firefox 3 with other browsers. The memory benchmark, which uses the Talos framework and was conducted on Windows Vista, replicates real-world usage patterns by automatically cycling pages through browser windows and then closing them. Firefox 3 used less memory than Firefox 2, Internet Explorer, and Opera, and it also freed more memory than the other browsers when pages were closed. Safari 3 and Internet Explorer 8 could not be benchmarked because they crashed during the test.
The results of this experiment, which others have been able to consistently reproduce using the same tools, represent a big victory for Firefox, which has previously faced widespread criticism for its high memory consumption. To achieve that victory, developers approached the problem from many different angles. To reduce memory fragmentation, the developers attempted to minimize the total number of memory allocations, particularly during startup. The developers also adopted FreeBSD's jemalloc allocator, which helped reduce fragmentation and improve performance.
Another big improvement is the new XPCOM cycle collector, which automatically detects unused objects that are persisting as a result of mutual references. Parmenter notes that the cycle collector has notable implications for extensions because it will be able to proactively eliminate certain kinds of memory leaks introduced by Firefox extensions that manipulate Firefox's internals. Caching behavior has also been improved so that it is less wasteful, and decompressed image data is no longer stored.
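Cycle collection is not unique to XPCOM; Python's garbage collector uses an analogous mechanism, which makes for a convenient toy illustration of the general technique (this is a sketch of the idea, not Mozilla's implementation):

```python
import gc

class Node:
    """A minimal object that can hold a reference to another object."""
    def __init__(self, name):
        self.name = name
        self.ref = None

# Flush any pre-existing garbage so the count below is meaningful
gc.collect()

# Two objects that refer to each other: with reference counting alone,
# neither count ever drops to zero, so the pair would leak
a, b = Node("a"), Node("b")
a.ref, b.ref = b, a

# Drop the only outside references; the cycle is now unreachable,
# but each object is still kept alive by the other's reference
del a, b

# A cycle collector traverses the object graph, finds groups kept
# alive only by internal references, and reclaims them
freed = gc.collect()
print(f"unreachable objects reclaimed: {freed}")
```

This is exactly the class of leak the article describes: an extension that grabs a reference to a browser internal (and is referenced back) keeps both alive forever under plain reference counting, until a cycle collector steps in.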
Mozilla evangelist Christopher Blizzard, who also wrote about the memory improvements, offers readers another insightful take-home message: the small memory footprint in the latest Firefox 3 beta, he says, is proof that Firefox is ready for mobile environments. "What it shows to anyone who looks is that we're able to hit the kinds of memory and performance requirements that mobile platforms demand," wrote Blizzard. "Users who use our software on mobile devices can expect web sites that just work, access to add-ons all balanced against the hardware limits imposed by mobile devices. In essence, we can bring that no compromises approach to mobile, just as we've done it with the desktop."
iPhone Users Love That Mobile Web
The last thing anyone wants to do is to give iPhone users another chance to crow about their phone’s slick interface and seamless connection to the Web. But, until now, little was known about the media habits of iPhone users and how they have diverged from the activities of mere mortals who own run-of-the-mill smartphones and regular mobile phones.
On Tuesday, M:Metrics, a measurement firm that studies mobile media, released a survey of iPhone users conducted six months after the device debuted to long lines and nearly unending fanfare.
The results, from a January survey of more than 10,000 adults, are somewhat dramatic: 84.8 percent of iPhone users report accessing news and information from the hand-held device. That compares to 13.1 percent of the overall mobile phone market and 58.2 percent of total smartphone owners – which include those poor saps with Blackberries and devices that run Windows.
The study found that 58.6 percent of iPhone users visited a search engine on their phone, compared to 37 percent of smartphone users in general and a scant 6.1 percent of mobile phone users.
The market for mobile video once seemed like a non-starter in the United States. Well, 30.9 percent of iPhone users have tuned into mobile TV or a video clip from their phone, more than double the percentage that have watched on a smartphone.
Finally, 74.1 percent of iPhone users listen to music on their iTunes-equipped device. Only 27.9 percent of smartphone users listen to music on their phone and 6.7 percent of the overall mobile-phone-toting public listens to music on their mobile device.
Mark Donovan, an analyst at M:Metrics, says a major factor in the iPhone’s success as a media platform can be credited to AT&T and its unlimited data plan for iPhone users. “Once you take away the uncertainty of data charging, you really incentivize people to use the device,” he said.
But then he gushes about the iPhone, sounding a lot like another dyed-in-the-wool iPhone convert (which, he concedes, he is). “Apple really made a device that is Internet-centric and really fits the kind of digital lifestyle that a lot of people who are jacked into the Internet all the time are used to,” he said. “They did a great job of crushing some of the sweet spots of mobile Internet usage.”
Apple May Bundle Unlimited iTunes with iPods
A report by the Financial Times (registration required) cites unnamed executives who say that Apple is in talks with record labels to offer access to the entire iTunes music library for a lump sum price. The fee would be added as a premium option on an iPod or iPhone, or it could come as a monthly charge. It would allow downloading of any song at any time so long as the purchaser still owns the device, and the songs would be yours to keep.
This latest concept is similar to Nokia's "Comes With Music" program set to launch later this year. Nokia is reportedly rolling an $80 fee into the price of compatible phones for one year of access to Nokia's music store, which includes music from labels like Universal.
Apple's plan is different in several respects. Since the average iPod owner buys about 20 tracks from the iTunes Store, Apple wants to make the premium about $20, arguing that it should cover the average consumer's downloads. Then the owner can make unlimited music downloads from the iTunes Store for the life of the device. Once downloaded, the tracks are yours to keep, even if you get rid of the original iPod or iPhone. And since iPod and phone owners tend to replace devices fairly regularly, the record labels would be getting the fee whether or not the consumer makes any further downloads. Silicon Alley Insider did the math and thinks it's a good deal all around. But according to the Financial Times' sources, the labels are looking for numbers closer to the $80 Nokia is reported to be paying.
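The "about $20" figure lines up with per-track pricing of the day. A quick sanity check (the $0.99 per-track price is an assumption based on iTunes' standard 2008 pricing, not a figure from the report):

```python
AVG_TRACKS_PER_DEVICE = 20   # average iTunes purchases per iPod, per the article
TRACK_PRICE = 0.99           # assumed standard iTunes per-track price in 2008

# What the average owner already spends over the life of a device --
# Apple's argument for where to set the premium
apple_premium = AVG_TRACKS_PER_DEVICE * TRACK_PRICE
print(f"implied premium: ${apple_premium:.2f}")  # -> $19.80, i.e. "about $20"

# The number the labels reportedly prefer, per the Financial Times
nokia_fee = 80.0
print(f"gap per device vs. Nokia's reported fee: ${nokia_fee - apple_premium:.2f}")
```

The $60-per-device gap between the two figures is the crux of the negotiation the article describes.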
There's still the question of DRM, however. Even though the tracks are yours, any non-iTunes Plus tracks will still be beholden to FairPlay restrictions, so this could also be a good way to lock consumers into repeat Apple purchases (unless they're willing to have their music tethered to their computers). The Nokia plan uses PlaysForSure, which won't play for sure on iPods or even Zunes, and Comes With Music doesn't allow you to keep listening to tracks once your subscription period has expired.
While Apple's program certainly sounds like it could go over well with consumers, the negotiations are not over. Apple will need to get all the labels on board for the plan to work. If we've learned anything from recent music licensing debates, it's that they are contentious. How much do the songwriters deserve? What should be the labels' share? In addition, the labels are sure to want a plan that increases their revenue, rather than a plan that simply compensates them for what the average iPod owner already pays.
While the labels remain leery of finding themselves under Jobs' thumb once more, their embrace of DRM-free formats that can play on the iPod has negated one of Apple's longtime advantages in these licensing negotiations, and could well make the labels more likely to deal. They are also in the position to offer Apple a carrot of their own: access to MP3 files for regular, pay-per-track downloads (iTunes currently has only EMI on board with DRM-free music).
Apple has long maintained that consumers don't want subscriptions, but come on: unlimited choice of tunes on an iPhone, delivered by EDGE, WiFi, or no-doubt-soon-to-come 3G? Not only would the move boost the appeal of Apple's premium portable device, but it sounds like exactly the sort of easy-to-use system that other handset makers and device owners are trying to build at the moment. Apple already has an advantage thanks to a widely-used cross-platform client and a slew of popular devices that could play the content. If it can roll this out at a reasonable price, the payoff could well be substantial.
Expect the labels to complain once more about Apple "building a business on the back of their content" once iPhone sales skyrocket, but they stand to do pretty good business from such a deal, too.
Apple Snags 14 Percent of US-Based PC Retail Sales in February
Growth in Apple's personal computer business continued to outpace the industry average last month, with Macs accounting for a 14 percent unit share and 25 percent dollar share of all US-based PC retail sales, according to market research firm NPD.
The results -- first revealed in an investor note from Pacific Crest Securities analyst Andy Hargreaves on Monday -- represent 60 percent unit growth and 67 percent revenue growth over the same period one year ago. At the same time, overall US PC retail shipments grew just 9 percent on a 5 percent increase in revenues.
Apple saw particular strength in notebook systems, which rose 64 percent in units and 67 percent in revenues, suggesting strong sell-through of the company's new MacBook Air, noted Hargreaves.
"Macbook Air sales appear to be additive to total sales, rather than replacing Macbook Pro sales," he said. "We believe a new set of corporate customers make up a meaningful portion of MacBook Air buyers."
Overall, the US retail segment combined for a 20 percent increase in notebook shipments on an 11 percent rise in revenues.
The Mac maker also saw robust demand for its desktop systems, which grew 55 percent on a 68 percent increase in revenues, compared to the overall retail segment which saw unit sales decline 5 percent on a 2 percent drop in revenues.
"Mac sales do not appear to be negatively impacted by macro environment," Hargreaves concluded. "[The] iMac continues to sell extremely well, with strong sales of larger screen sizes."
Meanwhile, sales of Apple's iPod digital media players remain somewhat limp, and just off their pace from one year ago.
In a separate research note from Piper Jaffray analyst Gene Munster, also issued Monday, it was noted that NPD retail sales data for the month of February suggest total iPod unit sales of 9.5 million to 10.7 million for the three month period ending March.
"Street consensus for March quarter iPods is 10.8 million, representing a 2 percent year-over-year increase; the midpoint of the 9.7m-10.5m range suggests a 4 percent year-over-year decline," Munster wrote. "We see this data point as a slight positive, given this range is a slight increase from what NPD data indicated after 1 month of data."
Both Hargreaves and Munster remain bullish on shares of the Cupertino-based Apple, with Hargreaves noting that the company's current valuation is particularly attractive with the stock trading at just 18 times fiscal year 2008 free cash flow.
How Apple Got Everything Right By Doing Everything Wrong
One Infinite Loop, Apple's street address, is a programming in-joke — it refers to a routine that never ends. But it is also an apt description of the travails of parking at the Cupertino, California, campus. Like most things in Silicon Valley, Apple's lots are egalitarian; there are no reserved spots for managers or higher-ups. Even if you're a Porsche-driving senior executive, if you arrive after 10 am, you should be prepared to circle the lot endlessly, hunting for a space.
But there is one Mercedes that doesn't need to search for very long, and it belongs to Steve Jobs. If there's no easy-to-find spot and he's in a hurry, Jobs has been known to pull up to Apple's front entrance and park in a handicapped space. (Sometimes he takes up two spaces.) It's become a piece of Apple lore — and a running gag at the company. Employees have stuck notes under his windshield wiper: "Park Different." They have also converted the minimalist wheelchair symbol on the pavement into a Mercedes logo.
Jobs' fabled attitude toward parking reflects his approach to business: For him, the regular rules do not apply. Everybody is familiar with Google's famous catchphrase, "Don't be evil." It has become a shorthand mission statement for Silicon Valley, encompassing a variety of ideals that — proponents say — are good for business and good for the world: Embrace open platforms. Trust decisions to the wisdom of crowds. Treat your employees like gods.
It's ironic, then, that one of the Valley's most successful companies ignored all of these tenets. Google and Apple may have a friendly relationship — Google CEO Eric Schmidt sits on Apple's board, after all — but by Google's definition, Apple is irredeemably evil, behaving more like an old-fashioned industrial titan than a different-thinking business of the future. Apple operates with a level of secrecy that makes Thomas Pynchon look like Paris Hilton. It locks consumers into a proprietary ecosystem. And as for treating employees like gods? Yeah, Apple doesn't do that either.
But by deliberately flouting the Google mantra, Apple has thrived. When Jobs retook the helm in 1997, the company was struggling to survive. Today it has a market cap of $105 billion, placing it ahead of Dell and behind Intel. Its iPod commands 70 percent of the MP3 player market. Four billion songs have been purchased from iTunes. The iPhone is reshaping the entire wireless industry. Even the underdog Mac operating system has begun to nibble into Windows' once-unassailable dominance; last year, its share of the US market topped 6 percent, more than double its portion in 2003.
It's hard to see how any of this would have happened had Jobs hewed to the standard touchy-feely philosophies of Silicon Valley. Apple creates must-have products the old-fashioned way: by locking the doors and sweating and bleeding until something emerges perfectly formed. It's hard to see the Mac OS and the iPhone coming out of the same design-by-committee process that produced Microsoft Vista or Dell's Pocket DJ music player. Likewise, had Apple opened its iTunes-iPod juggernaut to outside developers, the company would have risked turning its uniquely integrated service into a hodgepodge of independent applications — kind of like the rest of the Internet, come to think of it.
And now observers, academics, and even some other companies are taking notes. Because while Apple's tactics may seem like Industrial Revolution relics, they've helped the company position itself ahead of its competitors and at the forefront of the tech industry. Sometimes, evil works.
Over the past 100 years, management theory has followed a smooth trajectory, from enslavement to empowerment. The 20th century began with Taylorism — engineer Frederick Winslow Taylor's notion that workers are interchangeable cogs — but with every decade came a new philosophy, each advocating that more power be passed down the chain of command to division managers, group leaders, and workers themselves. In 1977, Robert Greenleaf's Servant Leadership argued that CEOs should think of themselves as slaves to their workers and focus on keeping them happy.
Silicon Valley has always been at the forefront of this kind of egalitarianism. In the 1940s, Bill Hewlett and David Packard pioneered what business author Tom Peters dubbed "managing by walking around," an approach that encouraged executives to communicate informally with their employees. In the 1990s, Intel's executives expressed solidarity with the engineers by renouncing their swanky corner offices in favor of standard-issue cubicles. And today, if Google hasn't made itself a Greenleaf-esque slave to its employees, it's at least a cruise director: The Mountain View campus is famous for its perks, including in-house masseuses, roller-hockey games, and a cafeteria where employees gobble gourmet vittles for free. What's more, Google's engineers have unprecedented autonomy; they choose which projects they work on and whom they work with. And they are encouraged to allot 20 percent of their work week to pursuing their own software ideas. The result? Products like Gmail and Google News, which began as personal endeavors.
Jobs, by contrast, is a notorious micromanager. No product escapes Cupertino without meeting Jobs' exacting standards, which are said to cover such esoteric details as the number of screws on the bottom of a laptop and the curve of a monitor's corners. "He would scrutinize everything, down to the pixel level," says Cordell Ratzlaff, a former manager charged with creating the OS X interface.
At most companies, the red-faced, tyrannical boss is an outdated archetype, a caricature from the life of Dagwood. Not at Apple. Whereas the rest of the tech industry may motivate employees with carrots, Jobs is known as an inveterate stick man. Even the most favored employees could find themselves on the receiving end of a tirade. Insiders have a term for it: the "hero-shithead roller coaster." Says Edward Eigerman, a former Apple engineer, "More than anywhere else I've worked before or since, there's a lot of concern about being fired."
But Jobs' employees remain devoted. That's because his autocracy is balanced by his famous charisma — he can make the task of designing a power supply feel like a mission from God. Andy Hertzfeld, lead designer of the original Macintosh OS, says Jobs imbued him and his coworkers with "messianic zeal." And because Jobs' approval is so hard to win, Apple staffers labor tirelessly to please him. "He has the ability to pull the best out of people," says Ratzlaff, who worked closely with Jobs on OS X for 18 months. "I learned a tremendous amount from him."
Apple's successes in the years since Jobs' return — iMac, iPod, iPhone — suggest an alternate vision to the worker-is-always-right school of management. In Cupertino, innovation doesn't come from coddling employees and collecting whatever froth rises to the surface; it is the product of an intense, hard-fought process, where people's feelings are irrelevant. Some management theorists are coming around to Apple's way of thinking. "A certain type of forcefulness and perseverance is sometimes helpful when tackling large, intractable problems," says Roderick Kramer, a social psychologist at Stanford who wrote an appreciation of "great intimidators" — including Jobs — for the February 2006 Harvard Business Review.
Likewise, Robert Sutton's 2007 book, The No Asshole Rule, spoke out against workplace tyrants but made an exception for Jobs: "He inspires astounding effort and creativity from his people," Sutton wrote. A Silicon Valley insider once told Sutton that he had seen Jobs demean many people and make some of them cry. But, the insider added, "He was almost always right."
"Steve proves that it's OK to be an asshole," says Guy Kawasaki, Apple's former chief evangelist. "I can't relate to the way he does things, but it's not his problem. It's mine. He just has a different OS."
Nicholas Ciarelli created Think Secret — a Web site devoted to exposing Apple's covert product plans — when he was 13 years old, a seventh grader at Cazenovia Junior-Senior High School in central New York. He stuck with it for 10 years, publishing some legitimate scoops (he predicted the introduction of a new titanium PowerBook, the iPod shuffle, and the Mac mini) and some embarrassing misfires (he reported that the iPod mini would sell for $100; it actually went for $249) for a growing audience of Apple enthusiasts. When he left for Harvard, Ciarelli kept the site up and continued to pull in ad revenue. At heart, though, Think Secret wasn't a financial enterprise but a personal obsession. "I was a huge enthusiast," Ciarelli says. "One of my birthday cakes had an Apple logo on it."
Most companies would pay millions of dollars for that kind of attention — an army of fans so eager to buy your stuff that they can't wait for official announcements to learn about the newest products. But not Apple. Over the course of his run, Ciarelli received dozens of cease-and-desist letters from the object of his affection, charging him with everything from copyright infringement to disclosing trade secrets. In January 2005, Apple filed a lawsuit against Ciarelli, accusing him of illegally soliciting trade secrets from its employees. Two years later, in December 2007, Ciarelli settled with Apple, shutting down his site two months later. (He and Apple agreed to keep the settlement terms confidential.)
Apple's secrecy may not seem out of place in Silicon Valley, land of the nondisclosure agreement, where algorithms are protected with the same zeal as missile launch codes. But in recent years, the tech industry has come to embrace candor. Microsoft — once the epitome of the faceless megalith — has softened its public image by encouraging employees to create no-holds-barred blogs, which share details of upcoming projects and even criticize the company. Sun Microsystems CEO Jonathan Schwartz has used his widely read blog to announce layoffs, explain strategy, and defend acquisitions.
"Openness facilitates a genuine conversation, and often collaboration, toward a shared outcome," says Steve Rubel, a senior vice president at the PR firm Edelman Digital. "When people feel like they're on your side, it increases their trust in you. And trust drives sales."
In an April 2007 cover story, we at Wired dubbed this tactic "radical transparency." But Apple takes a different approach to its public relations. Call it radical opacity. Apple's relationship with the press is dismissive at best, adversarial at worst; Jobs himself speaks only to a handpicked batch of reporters, and only when he deems it necessary. (He declined to talk to Wired for this article.) Forget corporate blogs — Apple doesn't seem to like anyone blogging about the company. And Apple appears to revel in obfuscation. For years, Jobs dismissed the idea of adding video capability to the iPod. "We want it to make toast," he quipped sarcastically at a 2004 press conference. "We're toying with refrigeration, too." A year later, he unveiled the fifth-generation iPod, complete with video. Jobs similarly disavowed the suggestion that he might move the Mac to Intel chips or release a software developers' kit for the iPhone — only months before announcing his intentions to do just that.
Even Apple employees often have no idea what their own company is up to. Workers' electronic security badges are programmed to restrict access to various areas of the campus. (Signs warning NO TAILGATING are posted on doors to discourage the curious from sneaking into off-limits areas.) Software and hardware designers are housed in separate buildings and kept from seeing each other's work, so neither gets a complete sense of the project. "We have cells, like a terrorist organization," Jon Rubinstein, former head of Apple's hardware and iPod divisions and now executive chair at Palm, told BusinessWeek in 2000.
At times, Apple's secrecy approaches paranoia. Talking to outsiders is forbidden; employees are warned against telling their families what they are working on. (Phil Schiller, Apple's marketing chief, once told Fortune magazine he couldn't share the release date of a new iPod with his own son.) Even Jobs is subject to his own strictures. He took home a prototype of Apple's boom box, the iPod Hi-Fi, but kept it concealed under a cloth.
But Apple's radical opacity hasn't hurt the company — rather, the approach has been critical to its success, allowing the company to attack new product categories and grab market share before competitors wake up. It took Apple nearly three years to develop the iPhone in secret; that was a three-year head start on rivals. Likewise, while there are dozens of iPod knockoffs, they have hit the market just as Apple has rendered them obsolete. For example, Microsoft introduced the Zune 2, with its iPod-like touch-sensitive scroll wheel, in October 2007, a month after Apple announced it was moving toward a new interface for the iPod touch. Apple has been known to poke fun at its rivals' catch-up strategies. The company announced Tiger, an earlier version of its operating system, with posters taunting, REDMOND, START YOUR PHOTOCOPIERS.
Secrecy has also served Apple's marketing efforts well, building up feverish anticipation for every announcement. In the weeks before Macworld Expo, Apple's annual trade show, the tech media is filled with predictions about what product Jobs will unveil in his keynote address. Consumer-tech Web sites liveblog the speech as it happens, generating their biggest traffic of the year. And the next day, practically every media outlet covers the announcements. Harvard business professor David Yoffie has said that the introduction of the iPhone resulted in headlines worth $400 million in advertising.
But Jobs' tactics also carry risks — especially when his announcements don't live up to the lofty expectations that come with such secrecy. The MacBook Air received a mixed response after some fans — who were hoping for a touchscreen-enabled tablet PC — deemed the slim-but-pricey subnotebook insufficiently revolutionary. Fans have a nickname for the aftermath of a disappointing event: post-Macworld depression.
Still, Apple's radical opacity has, on the whole, been a rousing success — and it's a tactic that most competitors can't mimic. Intel and Microsoft, for instance, sell their chips and software through partnerships with PC companies; they publish product road maps months in advance so their partners can create the machines to use them. Console makers like Sony and Microsoft work hand in hand with developers so they can announce a full roster of games when their PlayStations and Xboxes launch. But because Apple creates all of the hardware and software in-house, it can keep those products under wraps. Fundamentally the company bears more resemblance to an old-school industrial manufacturer like General Motors than to the typical tech firm.
In fact, part of the joy of being an Apple customer is anticipating the surprises that Santa Steve brings at Macworld Expo every January. Ciarelli is still eager to find out what's coming next — even if he can't write about it. "I wish they hadn't sued me," he says, "but I'm still a fan of their products."
Back in the mid-1990s, as Apple struggled to increase its share of the PC market, every analyst with a Bloomberg terminal was quick to diagnose the cause of the computer maker's failure: Apple waited too long to license its operating system to outside hardware makers. In other words, it tried for too long to control the entire computing experience. Microsoft, Apple's rival to the north, dominated by encouraging computer manufacturers to build their offerings around its software. Sure, that strategy could result in an inferior user experience and lots of cut-rate Wintel machines, but it also gave Microsoft a stranglehold on the software market. Even Wired joined the fray; in June 1997, we told Apple, "You shoulda licensed your OS in 1987" and advised, "Admit it. You're out of the hardware game."
When Jobs returned to Apple in 1997, he ignored everyone's advice and tied his company's proprietary software to its proprietary hardware. He has held to that strategy over the years, even as his Silicon Valley cohorts have embraced the values of openness and interoperability. Android, Google's operating system for mobile phones, is designed to work on any participating handset. Last year, Amazon.com began selling DRM-free songs that can be played on any MP3 player. Even Microsoft has begun to embrace the movement toward Web-based applications, software that runs on any platform.
Not Apple. Want to hear your iTunes songs on the go? You're locked into playing them on your iPod. Want to run OS X? Buy a Mac. Want to play movies from your iPod on your TV? You've got to buy a special Apple-branded connector ($49). Only one wireless carrier would give Jobs free rein to design software and features for his handset, which is why anyone who wants an iPhone must sign up for service with AT&T.
During the early days of the PC, the entire computer industry was like Apple — companies such as Osborne and Amiga built software that worked only on their own machines. Now Apple is the one vertically integrated company left, a fact that makes Jobs proud. "Apple is the last company in our industry that creates the whole widget," he once told a Macworld crowd.
But not everyone sees Apple's all-or-nothing approach in such benign terms. The music and film industries, in particular, worry that Jobs has become a gatekeeper for all digital content. Doug Morris, CEO of Universal Music, has accused iTunes of leaving labels powerless to negotiate with it. (Ironically, it was the labels themselves that insisted on the DRM that confines iTunes purchases to the iPod, and that they now protest.) "Apple has destroyed the music business," NBC Universal chief Jeff Zucker told an audience at Syracuse University. "If we don't take control on the video side, [they'll] do the same." At a media business conference held during the early days of the Hollywood writers' strike, Michael Eisner argued that Apple was the union's real enemy: "[The studios] make deals with Steve Jobs, who takes them to the cleaners. They make all these kinds of things, and who's making money? Apple!"
Meanwhile, Jobs' insistence on the sanctity of his machines has affronted some of his biggest fans. In September, Apple released its first upgrade to the iPhone operating system. But the new software had a pernicious side effect: It would brick, or disable, any phone containing unapproved applications. The blogosphere erupted in protest; gadget blog Gizmodo even wrote a new review of the iPhone, reranking it a "don't buy." Last year, Jobs announced he would open up the iPhone so that independent developers could create applications for it, but only through an official process that gives Apple final approval of every application.
For all the protests, consumers don't seem to mind Apple's walled garden. In fact, they're clamoring to get in. Yes, the iPod hardware and the iTunes software are inextricably linked — that's why they work so well together. And now, PC-based iPod users, impressed with the experience, have started converting to Macs, further investing themselves in the Apple ecosystem.
Some Apple competitors have tried to emulate its tactics. Microsoft's MP3 strategy used to be like its mobile strategy — license its software to (almost) all comers. Not any more: The operating system for Microsoft's Zune player is designed uniquely for the device, mimicking the iPod's vertical integration. Amazon's Kindle e-reader provides seamless access to a proprietary selection of downloadable books, much as the iTunes Music Store provides direct access to an Apple-curated storefront. And the Nintendo Wii, the Sony PlayStation 3, and the Xbox 360 each offer users access to self-contained online marketplaces for downloading games and special features.
Tim O'Reilly, publisher of the O'Reilly Radar blog and an organizer of the Web 2.0 Summit, says that these "three-tiered systems" — which blend hardware, installed software, and proprietary Web applications — represent the future of the Net. As consumers increasingly access the Web using scaled-down appliances like mobile phones and Kindle readers, they will demand applications that are tailored to work with those devices. True, such systems could theoretically be open, with any developer allowed to throw its own applications and services into the mix. But for now, the best three-tier systems are closed. And Apple, O'Reilly says, is the only company that "really understands how to build apps for a three-tiered system."
If Apple represents the shiny, happy future of the tech industry, it also looks a lot like our cat-o'-nine-tails past. In part, that's because the tech business itself more and more resembles an old-line consumer industry. When hardware and software makers were focused on winning business clients, price and interoperability were more important than the user experience. But now that consumers make up the most profitable market segment, usability and design have become priorities. Customers expect a reliable and intuitive experience — just like they do with any other consumer product.
All this plays to Steve Jobs' strengths. No other company has proven as adept at giving customers what they want before they know they want it. Undoubtedly, this is due to Jobs' unique creative vision. But it's also a function of his management practices. By exerting unrelenting control over his employees, his image, and even his customers, Jobs exerts unrelenting control over his products and how they're used. And in a consumer-focused tech industry, the products are what matter. "Everything that's happening is playing to his values," says Geoffrey Moore, author of the marketing tome Crossing the Chasm. "He's at the absolute epicenter of the digitization of life. He's totally in the zone."
The Thin Skin of Apple Fans
In his new book, “True Enough: Learning to Live in a Post-Fact Society,” Farhad Manjoo, a writer for Salon, argues that “new communications technologies are loosening the culture’s grip on what people once called ‘objective reality.’ ”
In an excerpt posted this week, he looks at an area where facts often become particularly slippery, specifically perceived bias in the news media against, of all things, a technology company: Apple.
“Last year,” Mr. Manjoo writes, “I praised the iPhone in something of the way Romeo once praised Juliet: The device, I said, is revolutionary — ‘it marks a new way of life. One day we’ll all have iPhones, or things that aim to do what this first one does, and your life will be better for it.’ ”
But because he mentioned that the phone was a bit pricey, “several readers alleged that I was an Apple-hater.” One wrote him to ask, “Does Salon actually pay you or are you being paid under the table by rival companies?”
Anybody who has ever written about Apple products will tell the same story — introducing even a hint of negativity into a review or article will bring down the wrath of Apple’s most fanatical fans.
What explains this? Mr. Manjoo cites a 1985 study by Robert P. Vallone, Lee Ross and Mark R. Lepper, psychologists at Stanford University. That study measured perceptions of media bias relating to the Israeli-Palestinian conflict. People who held strong opinions on the conflict going in were more apt to perceive bias in news accounts. Pro-Palestinian subjects saw a pro-Israel bias, and vice versa.
When “a reporter, editor, news network, or pundit mentions the other side’s arguments, it stings,” Mr. Manjoo writes. “Psychologists call this the ‘hostile media phenomenon,’ and it goes far in explaining how both Apple and PC folks can see the opposite bias in the same news story.”
But the phenomenon is particularly stark when it comes to opinionated reviews — however laudatory — of Apple products. That’s because many Apple fans “care little for honest opinion,” Mr. Manjoo writes. “They want to pick up the paper and see in it a reflection of their own nearly religious zeal for the thing they love. They don’t want a review. They want a hagiography.”
CUSTOMERS’ DISSERVICE It may seem counterintuitive to argue that business owners are sometimes better off worrying more about their business than about a particular customer. But Alexander Kjerulf, a business consultant, argues just that on his Web site, Chief Happiness Officer (positivesharing.com).
Believing that “the customer is always right” is just plain wrong, he writes. He cites several examples of why companies shouldn’t be spending too much time and money to make a troublesome customer happy. Some customers are simply better for business than others, he says. And it’s unfair that “abusive people get better treatment and conditions than nice people.”
TV WATCHES YOU Gerard Kunkel, the senior vice president for user experience at Comcast, told Chris Albrecht of NewTeeVee that Comcast is experimenting with various camera technologies that will let it know who is watching television. By recognizing body forms, for example, a set-top box would make recommendations to individual viewers (newteevee.com). It could also present individually tailored advertisements, something Mr. Kunkel called the holy grail.
“Perhaps I’ve seen ‘Enemy of the State’ too many times, or perhaps I’m just naïve about the depths to which Comcast currently tracks my every move,” Mr. Albrecht wrote. So “why should I trust them with my must-be-kept-secret, DVR-clogging addiction to ‘Keeping Up With the Kardashians’ ”?
Industry Giants Try to Break Computing’s Dead End
Intel and Microsoft said Tuesday that they planned to finance two groups of university researchers to start over and design a new generation of computing systems intended to break the industry out of a technological cul-de-sac that threatens to end decades of performance increases in computers.
If the research efforts succeed, this would enable the development of new kinds of portable computers and would help computer engineers tackle areas as diverse as speech recognition, image processing, health care systems and music. For example, a music professor at the University of California, Berkeley, David L. Wessel, envisions a new era of digital musical instruments that would begin to match the rich versatility of acoustic instruments like violins and pianos.
The research grant, worth $20 million over five years, will create independent laboratories at Berkeley and at the University of Illinois, Urbana-Champaign, that will be charting a way to reinvent computing. Each will work on hardware, software and a new generation of applications powered by computer chips containing multiple processors. The University of Illinois plans to contribute an additional $8 million to the project and the Berkeley project is applying for an additional $7 million from a state-supported program to match the industry grants.
The computer industry has generally stopped relying on regular increases in the processing speed of chips. In recent years it has bet instead that future advances in speed and energy efficiency will come from putting multiple processors on a single silicon chip. Many computing tasks can then be carried out in parallel rather than sequentially.
The new research agenda was motivated in part by an increasing sense that the industry is in a crisis of a sort because advanced parallel software has failed to emerge quickly. Most programmers today still write programs that solve problems in a serial fashion.
Currently, the most advanced consumer-oriented microprocessors have up to eight processors, or cores, on a chip, but the industry is moving toward chips with 100 or more. The problem, according to academic researchers and industry executives, is that the software to keep dozens of processors busy simultaneously for all kinds of computing problems does not exist.
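The serial-versus-parallel gap the researchers describe can be made concrete with a short sketch (my illustration, not code from the grant work): the same computation written the way most programs are written today, one step at a time on one core, and rewritten as independent tasks that a multicore chip can run side by side.

```python
# A minimal sketch of the serial-vs-parallel programming gap. The function
# names (work, serial, parallel) are illustrative, not from any real project.
from concurrent.futures import ProcessPoolExecutor

def work(n: int) -> int:
    # Stand-in for a compute-heavy kernel (e.g. one audio-processing step).
    return sum(i * i for i in range(n))

def serial(jobs):
    # How most software is still written: one result at a time, one core busy.
    return [work(n) for n in jobs]

def parallel(jobs):
    # What many-core chips need: independent tasks spread across processes,
    # so several cores stay busy simultaneously.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(work, jobs))

if __name__ == "__main__":
    jobs = [100_000] * 8
    assert serial(jobs) == parallel(jobs)  # same answers, different hardware use
```

The hard part, as the researchers note, is not this easy case — independent tasks — but problems whose steps depend on one another, where no such clean split exists.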
Although the amounts in the grant are modest, both universities have a reputation for early-stage research that has had notable effects on the computer industry.
The director of the new Universal Parallel Computing Research Center at Berkeley, the computer scientist David Patterson, has been associated with significant breakthroughs both in microprocessor and computer storage system design. The University of Illinois laboratory will be led by Marc Snir, a professor of computer science, and Wen-mei Hwu, professor of electrical and computer engineering. The laboratory will include the participation of David Kuck, a University of Illinois researcher who was a pioneer in the field of parallel computing and who is currently an Intel Fellow.
Mr. Patterson began warning about an impending performance limit several years ago. “Three years ago,” he recalled, “we said the world is going to change and we should do something about this.”
Mr. Wessel, the music professor, said he routinely uses three laptop computers in his composing work to get the kind of computer power he needs. “I can’t do as much processing on my laptop as I thirst for,” he said.
A great deal of industry discussion has focused on centralized, or “cloud,” computing. But the new research laboratories will instead seek breakthroughs in mobile computing systems. The new systems will be designed to perform tasks that today’s computers have trouble accomplishing, like recognizing human gestures and speech. An advanced parallel computing system will also help scientists create Web browsers that can more quickly pull in complex data, process it and display it.
The two research teams were chosen from applications from 25 universities in the United States. Both Intel and Microsoft executives said the research funds were a partial step toward filling a void left by the Pentagon’s Defense Advanced Research Projects Agency, or Darpa. The agency has increasingly focused during the Bush administration on military and other classified projects, and pure research funds for computing at universities have declined.
“The academic community has never really recovered from Darpa’s withdrawal,” said Daniel A. Reed, director of scalable and multicore computing at Microsoft, who will help oversee the new research labs.
A Storage Technology that Breaks Moore's Law
A new kind of flash memory technology with potentially greater capacity and durability, lower power requirements, and the same design as flash NAND is primed to challenge today's solid-state disk products.
Fremont, Calif.-based Nanochip Inc. said it has made breakthroughs in its array-based memory research that will enable it to deliver working prototypes to potential manufacturing partners next year. Three investors, including Intel Capital, recently put $14 million into the company, which has been developing the technology since its founding in 1996.
"It's a technology that doesn't depend on Moore's Law," says Gordon Knight, CEO of Nanochip. "This technology should go at least 10 generations."
Knight was alluding to the decades-long trend in which the number of transistors that can be placed on an integrated circuit roughly doubles every two years. Current thinking is that flash memory could hit its limit at around 32 to 45 nanometers. That describes the smallest possible width of a metal line on the circuit or the amount of space between that line and the next line. The capacity of an IC is restricted by the ability to "print" to a smaller and smaller two-dimensional plane, otherwise known as the lithography.
And that, according to Stefan Lai, is where Nanochip's technology shines. "Moore's Law is driven by lithography," says Lai, a member of Nanochip's technical advisory board, as well as vice president of business development at Ovonyx Inc. and former vice president of Intel Corp.'s flash memory group. "Every two years, you need to buy this new machine that allows you to print something that's smaller and finer."
Array-based memory uses a grid of microscopic probes to read and write to a storage material. The storage area isn't defined by the lithography but by the movement of the probes. "If [Nanochip] can move the probes one-tenth the distance, for example, they can get 100 times the density with no change in the lithography," says Lai. "You don't have to buy all these new machines."
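Lai's one-tenth-the-distance figure is simple area arithmetic: bits sit on a two-dimensional grid, so density scales with the inverse square of the probe step. A quick sanity check (my arithmetic, not Nanochip's):

```python
def density_gain(old_step_nm: float, new_step_nm: float) -> float:
    """Factor by which areal bit density grows when the probe step shrinks.

    Bits occupy a 2-D grid, so shrinking the step in each axis multiplies
    the number of bit sites per unit area by the square of the ratio.
    """
    return (old_step_nm / new_step_nm) ** 2

# Probes that move one-tenth the distance pack 100x the bits into the
# same area -- with no change to the lithography.
assert density_gain(25.0, 2.5) == 100.0
```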
Lai said that in principle, Nanochip could develop the ability to move the probe a single atom at a time. The company said its current generation of probes has a radius smaller than 25nm, but it projects that eventually the probes could be shrunk to two or three nanometers apiece. That scale, said Knight, will enable the development, in 10 to 12 years, of a memory chip holding more than 1TB. For a first generation, anticipated in 2010, Knight says he expects a small number of chips to exceed 100GB, but a more realistic number is "tens of gigabytes" per integrated circuit, a capacity comparable to the current generation of flash devices.
Reusing the old equipment for new chips
Knight sees a market for Nanochip's technology in USB drives, solid-state disk drives and even enterprise servers. In each case, he believes, there are advantages with array-based memory.
Unlike flash NAND, where the frequently-changing lithography requires construction of ever-pricier manufacturing plants, Nanochip can manufacture its chips on existing low-cost semiconductor equipment, according to Marlene Bourne, head of analyst firm The Bourne Report. "They're using used equipment [and] adapting to their needs," she says. "Same machinery, same equipment, same materials, same basic processing steps. You're just creating three-dimensional objects instead of a flat [integrated circuit]." That will hold true, she says, even as the company increases the density of its chips. That could provide a cost advantage over solid-state drives, which are currently in the range of $15 to $18 per gigabyte.
Like solid-state drives, array-based memory requires no motor, which reduces its power consumption and heat output in comparison with spinning disk hard drives, says Lai. The mechanism used to move the probes is very low power, he says. Because they don't require "a hundred pieces to make the hard drive work," Lai says he believes Nanochip products will be more rugged.
Unlike traditional disk drives in servers, says Knight, his company's technology prevents the queuing problems that surface when multiple users try to access data. "When you have an array of these chips, you have many, many points of access," he says. An internal controller inside the Nanochip sends the tips down to locate specific data, which is returned in multiplex form and output in serial form, "just like the output of a NAND flash drive or disk drive -- but in fact, the data is spread out over a few hundred tips."
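Knight's description — data "spread out over a few hundred tips" yet delivered "in serial form" — amounts to striping bytes across tips on write and re-interleaving them on read. A toy model of the idea (an illustration only, not Nanochip's actual controller logic):

```python
# Toy model of striping a serial data stream across many probe tips and
# reassembling it in order on readback. Purely illustrative.
def write_striped(data: bytes, num_tips: int) -> list:
    """Distribute bytes round-robin across num_tips per-tip data fields."""
    fields = [bytearray() for _ in range(num_tips)]
    for i, b in enumerate(data):
        fields[i % num_tips].append(b)
    return fields

def read_serial(fields: list, length: int) -> bytes:
    """Re-interleave the per-tip fields back into the original stream."""
    out = bytearray()
    for i in range(length):
        # Byte i went to tip (i % tips), at offset (i // tips) in its field.
        out.append(fields[i % len(fields)][i // len(fields)])
    return bytes(out)

fields = write_striped(b"parallel tips, serial output", 4)
assert read_serial(fields, 28) == b"parallel tips, serial output"
```

Because each tip holds only every Nth byte, many tips can seek and read at once — which is why an array of chips offers "many, many points of access" where a single disk head forms a queue.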
Array-based technology isn't something new and unique, says Bourne. Nanochip is simply applying it in a slightly different way. "The tips that form the core of this memory technology are what's being used in atomic force microscopes," she points out.
IBM's first attempt with Millipede
IBM first showed a similar technology in the late 1990s. The Millipede project, which is no longer in active research at IBM's Zurich Research Laboratory, used microelectromechanical systems (MEMS) components. In MEMS, the electronics or "brains" of the chip are usually fabricated using integrated circuits, while the moving parts are microscopic components etched from silicon in a micromachining process. Millipede grew out of IBM's nanoscale scanning-probe research, in which individual iron atoms were arranged with atomic precision on a special copper surface; the scanning tunneling microscope that made such work possible won two IBM scientists the 1986 Nobel Prize in physics.
Millipede works by using a microscopic probe to make an indent in a polymer material. Each indent represents a single bit as part of the write operation. The indentations can then be removed from the material surface during an erase operation.
By using thousands of such probes in parallel, array-based memory achieves high data rates, with each probe able to read, write and erase in its own data field.
Where Millipede puts "dents in plastic," Nanochip has found a better material for the read-write process, according to Knight, though he declines to say what that better material is. A year and a half ago, Knight says, the company made a breakthrough on a new media type that could be infinitely rewritable. "The media never wears out," Knight claims. "That's really what got the company rolling fast."
In-Stat analyst Steve Cullen believes Nanochip has licensed a material that uses chalcogenide glass from Ovonyx. Knight acknowledges that his company has worked with that kind of recording material but is unwilling to say more on the topic.
Lai, who works for Ovonyx, declines to comment on the material being used by Nanochip but points out that the phase-change semiconductor work being done by Ovonyx has more to do with reducing the size of current circuit technologies. "We will continue to follow Moore's Law."
A potential stumbling block for Nanochip's technology is that the tips on the probes, which have a radius smaller than 25nm, could wear out quickly.
Tip wear is particularly relevant if array-based probes are adopted as storage mechanisms in servers. "Obviously, you have a lot of tip wear that goes into an enterprise server that's operating 24/7, for five, six or seven years," says Knight.
Lai concurs. The tip is a problem, he says, because it touches the surface of the material.
Knight declines to specify how Nanochip has resolved the tip wear dilemma, but he insists the company has had a breakthrough in its research that has addressed the problem.
Probe-based storage in the real world
In-Stat's Cullen claims the new technology will find a home as a replacement for hard drives in notebook computers. "The thing that strikes me about 100GB is that it's a nice size for something to replace a disk in a notebook PC," he says. "All they've got to do is come close to the price of a disk and then offer some other advantage. It may consume less power than a disk. It could be more rugged."
Nanochip is confident in its ability to produce a product with the same size as existing drives. "We'll make the interface so it'll just plug and play," says Knight. "It's a new technology, but you want it to fit right in."
Lai believes that the new memory could herald breakthroughs in mobile devices and biotechnology. "You now need your whole life history stored in your mobile device," he says. "If you want something to store your genome in, it may take a lot of memory, and you'll want to carry it with you."
The big question that remains for Nanochip is whether the company can create working prototypes with the cost advantages that array-based technology is supposed to offer over conventional forms of memory. The fact that IBM appears to have moved on from its Millipede research doesn't alarm Bourne. In fact, she points out, several people from the IBM team have joined Nanochip's board of advisers. Knight said the company has 50 engineers and scientists working around the world on the prototypes, either as part of Nanochip itself or within the companies that his firm is partnering with.
IBM last publicly shared details of its probe-based storage research at a gathering of companies and organizations involved in a joint research project called ProTeM, for "probe-based terabit memory."
According to Evangelos Eleftheriou, an IBM fellow and manager of IBM Labs' storage technologies group, the company built a prototype that achieved a storage capacity of a terabyte per square inch. He says that research will be published in an article appearing in a couple of months in the "IBM Journal of Research and Development." But the group doesn't have plans to develop any products. It will leave that to other companies that might choose to license the research, he says.
The challenge for adoption of any new type of memory, points out Eleftheriou, is that flash itself isn't standing still. "In 2010, it's going to be $1 per gigabyte ... so hopefully the cost per gigabyte [of probe-based arrays] is going to be low."
Now, he says, the areas of interest for probe-based technology at IBM have moved on to topics including archival storage and maskless lithography, a technique that separates individual molecules and places them precisely onto a surface.
"The focus of our research is [to] explore ways to enhance the speed in probe sensing and the way we modify the surface -- how fast we can do those things... There are many things that come together, from positioning control, to materials to micro-machining, micro-fabricating, so it's extremely fascinating altogether."
Microchip-Sized 'Fan' Has No Moving Parts
Engineers harnessing the same physical property that drives silent household air purifiers have created a miniaturized device that is now ready for testing as a silent, ultra-thin, low-power and low maintenance cooling system for laptop computers and other electronic devices.
The compact, solid-state fan, developed with support from NSF's Small Business Innovation Research program, is the most powerful and energy efficient fan of its size. It produces three times the flow rate of a typical small mechanical fan and is one-fourth the size.
Dan Schlitz and Vishal Singhal of Thorrn Micro Technologies, Inc., of Marietta, Ga. will present their RSD5 solid-state fan at the 24th Annual Semiconductor Thermal Measurement, Modeling and Management Symposium (Semi-Therm) in San Jose, Calif., on March 17, 2008. The device is the culmination of six years of research that began while the researchers were NSF-supported graduate students at Purdue University.
"The RSD5 is one of the most significant advancements in electronics cooling since heat pipes. It could change the cooling paradigm for mobile electronics," said Singhal.
The RSD5 incorporates a series of live wires that generate a micro-scale plasma (an ion-rich gas with free electrons that conduct electricity). The wires lie within uncharged conducting plates that are contoured into a half-cylindrical shape to partially envelop the wires.
Within the intense electric field that results, ions push neutral air molecules from the wire to the plate, generating a wind. The phenomenon is called corona wind.
"The technology is a breakthrough in the design and development of semiconductors as it brings an elegant and cost effective solution to the heating problems that have plagued the industry," said Juan Figueroa, the NSF SBIR program officer who oversaw the research.
With the breakthrough of the contoured surface, the researchers were able to control the micro-scale discharge to produce maximum airflow without risk of sparks or electrical arcing. As a result, the new device yields a breeze as swift as 2.4 meters per second, as compared to airflows of 0.7 to 1.7 meters per second from larger, mechanical fans.
The contoured platform is a part of the device heat sink, a trick that enabled Schlitz and Singhal to both eliminate some of the device's bulk and increase the effectiveness of the airflow.
"The technology has the power to cool a 25-watt chip with a device smaller than 1 cubic-cm and can someday be integrated into silicon to make self-cooling chips," said Schlitz.
This device is also more dust-tolerant than predecessors. While dust attraction is ideal for living-room-scale fans that provide both air flow and filtration, debris can be a devastating obstacle when the goal is to cool an electrical component.
Intel has found a way to stretch a Wi-Fi signal from one antenna to another located more than 60 miles away.
Intel has announced plans to sell a specialized Wi-Fi platform later this year that can send data from a city to outlying rural areas tens of miles away, connecting sparsely populated villages to the Internet. The wireless technology, called the rural connectivity platform (RCP), will be helpful to computer-equipped students in poor countries, says Jeff Galinovsky, a senior platform manager at Intel. And the data rates are high enough--up to about 6.5 megabits per second--that the connection could be used for video conferencing and telemedicine, he says.
The RCP, which essentially consists of a processor, radios, specialized software, and an antenna, is an appealing way to connect remote areas that otherwise would go without the Internet, says Galinovsky. Wireless satellite connections are expensive, he points out. And it's impractical to wire up some villages in Asian and African countries. "You can't lay cable," he says. "It's difficult, expensive, and someone is going to pull it up out of the ground to sell it."
Already, Intel has installed and tested the hardware in India, Panama, Vietnam, and South Africa. Later this year, the company will sell the device in India, with a target price below $500. The point-to-point technology will require two nodes, which could provide "full back-end infrastructure" for less than $1,000, Galinovsky says.
One node is usually installed at the edge of an urban area, wired to a local-area network cable, he explains. Using a directional antenna, the device shoots data to a receiving antenna as far as 60 miles away. Any farther, and the system encounters problems due to the curvature of the earth. In practice, most links will be set up less than 30 miles apart. Once a node is installed in a village, the connection can be dispersed using standard cables and wireless routers, Galinovsky says.
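The curvature limit the article mentions can be roughed out with the standard radio-engineering rule of thumb for the radio horizon (d ≈ 4.12·√h km under the 4/3-earth-radius approximation); the tower heights below are purely illustrative, not figures from Intel:

```python
import math

def radio_horizon_km(h_meters: float) -> float:
    """Approximate radio horizon for an antenna h meters above flat ground,
    using the common 4/3-earth-radius rule of thumb: d ~ 4.12 * sqrt(h) km."""
    return 4.12 * math.sqrt(h_meters)

def max_link_km(h1_meters: float, h2_meters: float) -> float:
    """Longest line-of-sight path between two antennas is roughly the
    sum of their individual radio horizons."""
    return radio_horizon_km(h1_meters) + radio_horizon_km(h2_meters)

# Two 30 m masts (illustrative) give a link of roughly 45 km (~28 miles),
# which is consistent with the article's "most links under 30 miles";
# approaching 100 km (~60 miles) requires much taller towers or high terrain.
print(round(max_link_km(30, 30), 1))
```

This is why the practical 30-mile figure is easy to hit with modest masts, while the 60-mile maximum demands favorable siting.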
There is nothing particularly innovative in the antenna technology and the router hardware, he says. The trick, he explains, comes in the software that the radios use to communicate with each other. "If you take standard Wi-Fi and focus it," Galinovsky says, "you can't get past a few kilometers." The reason is that one radio will send out data and wait for an acknowledgment from the other radio that the data was received. If the transmitting radio doesn't receive the acknowledgment in a certain amount of time, it will assume that the data was lost, and it will resend it.
Intel's RCP platform rewrites the communication rules of Wi-Fi radios. Galinovsky explains that the software creates specific time slots in which each of the two radios listens and talks, so no extra data is sent confirming transmissions. "We're not taking up all the bandwidth waiting for acknowledgments," he says. Because there is an inherent trade-off between available bandwidth and the distance a signal can travel, the bandwidth freed from acknowledgments can instead be spent on range.
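A back-of-the-envelope calculation shows why the stock acknowledgment scheme described above breaks down over long links: past a few kilometers, the ACK cannot physically make the round trip before the sender's timeout expires and the frame is needlessly resent. The 50 µs timeout below is an illustrative 802.11-scale budget, not a figure from Intel:

```python
# Why stock Wi-Fi stalls over long links: propagation delay alone
# exceeds the sender's ACK-timeout window at rural-link distances.
C = 299_792_458            # speed of light in vacuum, m/s
ACK_TIMEOUT_US = 50        # illustrative 802.11-scale ACK timeout, microseconds

def round_trip_us(distance_km: float) -> float:
    """Propagation delay for a frame out plus its ACK back, in microseconds."""
    return 2 * (distance_km * 1000 / C) * 1e6

for km in (2, 30, 100):
    rtt = round_trip_us(km)
    verdict = "fits" if rtt < ACK_TIMEOUT_US else "exceeds"
    print(f"{km:>3} km: round trip {rtt:6.1f} us -> {verdict} the ACK window")
```

At 2 km the round trip is about 13 µs and fits comfortably; at 30 km it is roughly 200 µs and at 100 km roughly 667 µs, so a fixed listen/talk schedule that drops per-frame acknowledgments, as the RCP does, sidesteps the problem entirely.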
Importantly, the devices require relatively little power. Running two or three radios in a link, Galinovsky says, requires about five to six watts. This makes it possible to power the radios using solar energy.
The Intel project and forthcoming product "sound like a huge step forward" in terms of usable bandwidth over long distances, says Deborah Estrin, professor of computer science at the University of California, Los Angeles. Estrin develops technology for sensor networks in remote areas that monitor seismic activity, among other things. She says that these sensors are spread out over large areas and need to transmit large amounts of data. Previous low-power, inexpensive wireless communication technologies could only stretch a few kilometers, she says. "What's important is that Intel is getting much longer distances."
Galinovsky says that the RCP is alluring to markets beyond India. "We're seeing a lot of interest in the industry," he says. "Every time we talk about this, they say, 'We need this yesterday.'"
Hopes for Wireless Cities Are Fading
It was hailed as Internet for the masses when Philadelphia officials announced plans in 2005 to erect the largest municipal Wi-Fi grid in the country, stretching wireless access over 135 square miles with the hope of bringing free or low-cost service to all residents, especially the poor.
Municipal officials in Chicago, Houston, San Francisco and 10 other major cities, as well as dozens of smaller towns, quickly said they would match Philadelphia’s plans.
But the excited momentum has sputtered to a standstill, tripped up by unrealistic ambitions and technological glitches. The conclusion that such ventures would not be profitable led to sudden withdrawals by service providers like EarthLink, the Internet company that had effectively cornered the market on the efforts by the larger cities.
Now, community organizations worry about their prospects for helping poor neighborhoods get online.
In Tempe, Ariz., and Portland, Ore., for example, hundreds of subscribers have found themselves suddenly without service as providers have cut their losses and either abandoned their networks or stopped expanding capacity.
“All these cities had this hype hangover late last year when EarthLink announced its intentions to pull out,” said Craig Settles, an independent wireless consultant and author of “Fighting the Good Fight for Municipal Wireless” (Hudson Publishing, 2006). “Now that they’re all sobered up, they’re trying to figure out if it’s still possible to capture the dream of providing affordable and high-speed access to all residents.”
EarthLink announced on Feb. 7 that “the operations of the municipal Wi-Fi assets were no longer consistent with the company’s strategic direction.” Philadelphia officials say they are not sure when or if the promised network will now be completed.
For Cesar DeLaRosa, 15, however, the concern is more specific. He said he was worried about his science project on global warming.
“If we don’t have Internet, that means I’ve got to take the bus to the public library after dark, and around here, that’s not always real safe,” Cesar said, seated in front of his family’s new computer in a gritty section of Hunting Park in North Philadelphia. His family is among the 1,000 or so low-income households that now have free or discounted Wi-Fi access through the city’s project, and many of them worry about losing access that they cannot otherwise afford.
Philadelphia officials say service will not be disconnected.
“We expect EarthLink to live up to its contract,” said Terry Phillis, the city’s chief information officer.
But when City Council leaders here held a hearing in December to question EarthLink about how it intended to keep service running and complete the planned network, the company failed to show up.
Officials in Chicago, Houston, Miami and San Francisco find themselves in a similar predicament with EarthLink and other service providers, and have all temporarily tabled their projects.
Part of the problem was in the business model established in Philadelphia and mimicked in so many other cities, Mr. Settles said.
In Philadelphia, the agreement was that the city would provide free access to city utility poles for the mounting of routers; in return the Internet service provider would agree to build the infrastructure for 23 free hotspots and to provide inexpensive citywide residential service, including 25,000 special accounts that were even cheaper for lower-income households.
But soon it became clear that dependable reception required more routers than initially predicted, which drastically raised the cost of building the networks. Marketing was also slow to begin, so paid subscribers did not sign up in the numbers that providers initially hoped, Mr. Phillis said.
Prices for Internet service on the broader market also began dropping to a level that, while above what many poor people could afford, was below what municipal Wi-Fi providers were offering, so the companies had to lower their rates even further, making investment in infrastructure even more risky, he said.
EarthLink, which has seen a recent decline in profits and subscribers, lost its chief executive, Garry Betty, to cancer in January 2007, and with him went one of the nation’s most vocal advocates of municipal Wi-Fi. Mr. Betty’s successor, Rolla P. Huff, announced plans to cut costs and move the company in a new direction by laying off about 900 workers, about half the company’s work force, and withdrawing from municipal wireless projects.
Chris Marshall, an EarthLink spokesman who declined to be interviewed, said in an e-mail statement, “We concluded that our Municipal Wi-Fi operation is not consistent with our strategic direction and we’ve committed to a plan to sell the Muni Wi-Fi assets.”
For San Francisco residents, EarthLink's change of plans was an especially big letdown. Unlike most other cities, where municipal wireless was going to be offered in free hotspots and at a reduced price for residential service, San Francisco planned to offer free citywide wireless in a three-way deal with EarthLink, which was to build the grid, and Google, which would have paid to advertise through the network.
“It was a huge disappointment for us,” Mayor Gavin Newsom of San Francisco said about EarthLink’s shift in course, “and, with all due respect, it doesn’t seem like a smart way to run a business to work with a city for two years over a major plan and then suddenly one day to call and say you are pulling out.”
Mr. Newsom said that rather than select a single Internet provider to blanket the city, he might team up with multiple nonprofits and companies, and set up smaller free Wi-Fi areas, especially in poor neighborhoods.
Smaller cities, too, have run into problems with municipal wireless efforts.
Tempe, for instance, was one of the first midsize cities in the nation to go live in 2006 with its municipal wireless network, after erecting about 900 routers on utility poles and contracting with Gobility, a Texas-based provider, for residential service at about $20 per month. In December, the company suddenly pulled service after failing to get enough subscribers.
“The entire for-profit model is the reason for the collapse in all these projects,” said Sascha Meinrath, technology analyst at the New America Foundation, a nonprofit research organization in Washington.
Mr. Meinrath said that advocates wanted to see American cities catch up with places like Athens, Leipzig and Vienna, where free or inexpensive Wi-Fi already exists in many areas.
He said that true municipal networks, the ones that are owned and operated by municipalities, were far more sustainable because they could take into account benefits that help cities beyond private profit, including property-value increases, education benefits and quality-of-life improvements that come with offering residents free wireless access.
Mr. Meinrath pointed to St. Cloud, Fla., which spent $3 million two years ago to build a free wireless network that is used by more than 70 percent of the households in the city.
But projects covering larger cities have proved far more difficult to sustain financially, and much of the attention has turned now to Minneapolis, which is rolling out a network based on a new business model that many market analysts believe will avoid the financial risks that EarthLink encountered in Philadelphia and elsewhere.
In Minneapolis, the Internet service provider agreed to build the network as long as the city committed to becoming an “anchor tenant” by subscribing for a minimum number of city workers, like building inspectors, meter readers, police officers and firefighters.
This type of plan is more viable, according to market analysts and city officials, because the companies paying to mount the routers and run the service are guaranteed a base number of subscribers to cover the cost of their investment.
Some companies have also begun offering technological alternatives that may help expand wireless access.
Meraki, a wireless networking company based in Mountain View, Calif., has jumped into the void in San Francisco with a program it calls “Free the Net.” The company sells low-cost equipment that can be placed in a person’s home to broadcast a wireless signal. The company also sells inexpensive repeaters that can be placed on rooftops or outside walls to spread the original customer’s signal farther. The combination of the two types of equipment creates a mesh of free wireless in neighborhoods. The company says it has almost 70,000 users throughout San Francisco.
Back in Philadelphia, Cesar’s older sister, Tomasa DeLaRosa, said she had faith that city officials would find a way to finish the network and keep her Internet service going.
“Our whole house is totally different now,” said Ms. DeLaRosa, 19, who had never had Internet access at home until last December because she could not afford it.
After signing up for a job training program and completing its course work, Ms. DeLaRosa received a free laptop, training and a year’s worth of free wireless service from Esperanza, a community group.
Greg Goldman, chief executive of Wireless Philadelphia, a nonprofit organization that was set up as part of the city’s deal with EarthLink, said that about $20 million had already been spent on the network, and only about $4 million more would be needed to cover the rest of the city.
Mr. Goldman’s organization is responsible for providing bundles that include a free laptop, Internet access, training and technical support to organizations like Esperanza so they can use them as incentives for their low-income clients like Ms. DeLaRosa to complete job training and other programs.
“For us and a lot of people in this neighborhood,” Ms. DeLaRosa said, “the Internet is like a path out of here.”
Accidentally on purpose
Proposed Md. Bill Would Make Intentional Theft Of Wireless Internet Access a Crime
Purposely surfing the Internet on someone else's wireless connection, without permission, would be a crime under a bill Del. LeRoy E. Myers Jr. presented Tuesday.
Myers, R-Washington/Allegany, said his bill is meant to clarify intentional theft vs. accidental use.
He told the House Judiciary Committee that one of his neighbors, after buying a new laptop computer, got onto the Internet, thinking it was through a cable TV hookup.
Actually, the connection was through Myers' home wireless Internet system.
He said he didn't want unintentional use like that to be prosecuted the same as computer hacking.
According to the bill, intentional unauthorized access to another person's computer, network, database or software is a misdemeanor. The penalty is up to three years imprisonment and a fine of up to $1,000.
Myers shared a 2007 news story about a man in Michigan prosecuted for using a wireless Internet connection outside a coffee shop.
A Fox News story says the man parked his truck in front of the shop during lunch breaks and checked his e-mail on his laptop computer.
When a nearby business owner got suspicious, police talked to the man and ruled out that he was spying or stalking someone. However, a prosecutor filed the charge of stealing the wireless connection, the story says.
The charge was a felony punishable by up to five years in jail and a fine of up to $10,000.
The man's other option was a jail diversion program, which involved paying a $400 fine, doing 40 hours of community service and being on probation for six months.
It wasn't clear from the story which outcome the man chose.
In the story, the coffee shop owner said the man could have come inside and used the wireless connection for free.
The Maryland public defender's office submitted written testimony opposing the specific ban and penalty suggested in Myers' bill.
Noting that wireless connections are becoming common in neighborhoods, the written testimony says: "A technically unsophisticated user, such as a visiting parent, or simply a houseguest unfamiliar with the home's Internet could and probably would choose the first available network."
The public defender's office also alleged it would be difficult to prove that someone knowingly used an unauthorized connection.
"A more effective way to prevent unauthorized access would be for owners' (sic) to secure their wireless networks with assistance where necessary from Internet service providers or Vendors," the public defender's office wrote.
Del. Joseph F. Vallario Jr., D-Calvert/Prince George's, the committee chairman, asked Myers why his neighbors would ever get their own Internet connection now that they know they can use his.
Wireless Spectrum Auction Raises $19 Billion
The government announced on Tuesday that it had closed the most lucrative government auction in history as wireless companies bid more than $19 billion for the rights to radio spectrum licenses.
In the coming days, the Federal Communications Commission is expected to publish a list of the winning companies. The major participants included AT&T, Verizon and Google, although many experts said they did not expect Google would bid much more than the minimum reserve price of $4 billion for one of the more attractive groups of licenses.
The spectrum licenses are being surrendered to the government by broadcasters as they complete their conversion to digital television by early next year. The licenses are coveted because they will provide the winners with access to some of the best remaining spectrum — enabling them to send signals farther from a cell tower with far less power, through dense walls in cities and over wider territories in rural areas that are now underserved.
At the same time, the major industry players are preparing for the continuation of an explosive surge in consumer interest in wireless devices offering high-speed Internet. That consumer demand fueled the auction.
While Google was not expected to post a winning bid, it has already achieved an important victory by influencing the auction rules. The commission forced the major telephone companies to open their wireless networks to a broader array of telephone equipment and Internet applications. It remains to be seen whether a variety of technical and regulatory issues can be resolved to make the promise of more open networks a reality.
In a telephone conference call with reporters, Kevin J. Martin, the chairman of the commission, appeared delighted that the auction was the largest in government history and would yield nearly twice as much as budget officials in Congress and the administration had estimated. He predicted that the requirements that had prompted the major wireless phone companies to commit themselves to opening their networks to more kinds of devices and applications would ultimately lead to greater innovation.
“This will be significant from a consumer perspective and also in assuring that innovation can occur on the edges of the network and get into the hands of consumers as quickly as possible,” Mr. Martin said.
Still, experts predicted that the auction would probably not lead to greater competition in the wireless industry, which has been consolidating in recent years. The best licenses and the construction requirements imposed on the winners are expected to cost billions of dollars, and the commission rejected requests to tailor the rules to encourage significant bids from rivals to the dominant wireless players, notably Verizon and AT&T.
While the auction will yield billions of dollars more than estimated, it fell short of hopes that it would establish new networks for public safety organizations. A group of licenses known as the D-block was unable to attract the minimum required bid.
The rules for the D-block were written in response to government reports documenting how incompatible wireless devices had led to considerable confusion during the terrorist attacks of Sept. 11, 2001, and the response to natural disasters, including Hurricane Katrina.
The D-block licenses were to be shared by private companies and a group of public safety organizations. It is likely that the rules for those licenses will have to be rewritten and another auction held for them.
At the conclusion of the auction, Representative Edward J. Markey, chairman of the House subcommittee on telecommunications and the Internet, announced that he would hold a hearing on the D-block auction and how the rules might be rewritten.
“The wireless auction just completed has successfully begun the process of opening up the U.S. marketplace for wireless devices and applications,” Mr. Markey said. “The subcommittee’s inquiry will be done with an eye toward ensuring that in any overarching plan to fulfill public safety objectives, the private sector can operate wireless networks commercially while simultaneously fulfilling an important role for first responders.”
Ethical Concerns Swirl Around D Block Spectrum Auction
Lawmakers and public interest groups want an investigation into the failure of the FCC's auction of the emergency communications spectrum.
A portion of the wireless spectrum, for which bidding closed March 18, failed to meet the government's reserve price.
Now, public interest groups and members of Congress are urging the FCC, which has imposed a gag order on participants, to lift its veil of secrecy over the identity of bidders.
The failure of the spectrum block, reserved for use by emergency transmissions and private-public partnerships, came as a surprise, especially given that industry observers had already picked out a front-runner for the auction.
Now, accusations have been levied that a consulting firm hired to help the government hand over the spectrum may have acted improperly and discouraged potential bidders by suggesting that any winning bid would have to pay $50 million in annual fees, in addition to the auction price.
The 700 MHz spectrum auction, which ended after 50 days of bidding, took in $19.6 billion for the federal government. However, only the A, B, and C blocks of spectrum were sold; the D block, which will eventually go toward a "public-private partnership" for both health and safety officials and private enterprise, did not make its $1.3 billion reserve price.
According to the FCC, "If the license for the D Block is not sold in Auction 73, the Commission may re-offer the D Block license…. Only qualified bidders from Auction 73 will be permitted to participate in [a related auction,] Auction 76." The highest offer for the D block was $472 million.
Harold Feld, vice president of the Media Access Project, a law firm that represents the public interest in all aspects of telecommunications, wrote on his personal blog in January—Feld was on sabbatical from MAP at the time—that Morgan O'Brien, chairman of Cyren Call, an advisor to the PSST (Public Safety Spectrum Trust), may have played a part in the auction's failure.
Half of the public-private partnership of the D-block does indeed go to the public, represented by PSST. Cyren Call is PSST's advisor in the auction, and therefore any private company who wins the license for the D-block will have to negotiate terms with Cyren Call.
In a telephone call, Feld referred to his blog, in which he had written that O'Brien insisted on a number of conditions. These conditions may have included, Feld said, "a fee of $50 million a year for ten years to have access to the public safety spectrum; that Morgan O'Brien would be the only conduit for the public safety community; that he would act on behalf of the D block whether to resell services."
Feld said, "When [potential participants] asked the FCC whether Cyren Call could demand such conditions again, I hasten to add not proven, but the FCC refused to say whether it would regard those conditions as unreasonable or not."
A person familiar with the situation described the scenario as ludicrous.
Cyren Call's vice president of communications, Tim O'Regan, could not respond to any questions about his company's role in the auction process, due to anti-collusion rules imposed on the auction on Dec. 3, 2007, which prevent participants (and affiliated participants, like Cyren Call) from publicly discussing the auction. "I'm not saying ‘no comment.' I'm saying we're prevented from addressing the matter until the anti-collusion rules are lifted," O'Regan said.
That the reserve price was not met came as a surprise to auction-watchers, who believed that sharing spectrum with emergency aid workers in exchange for a reduced rate would entice bidders to make a play. Frontline Wireless, a startup company organized by industry heavyweights such as Reed Hundt (the former Chairman of the FCC), was the frontrunner for the D block, even before the auction began. In fact, Frontline helped craft the public-private partnership proposal.
But Stagg Newman, former CTO of the FCC and current chief technologist of Frontline, said Frontline did not bid in the auction. Did Frontline fail to bid because of concerns about the process? Newman, again, could not say, bound by the same anti-collusion rules that prevent Cyren Call from discussing the proceedings.
"Those of us who participated, and filed to participate, aren't allowed to say anything until the FCC makes public all the results," Newman said.
"With the principals all silenced by the FCC's anti-collusion rules, no one can prove anything," Feld said. "That's why we need the FCC to lift the veil and conduct a thorough examination. If my sources misrepresented what happened, I want to know it as well. But either way, truth will [win] out once the FCC lifts the anti-collusion rules."
But these rumors of interference carry so much weight that the Media Access Project, as well as Public Knowledge, a public interest group, are calling on the FCC to investigate the auction.
U.S. Rep. Cliff Stearns, R-Fla., a ranking member of the House Subcommittee on Telecommunications and the Internet, told eWEEK in an e-mail, "I have asked for a hearing and Chairman Markey has said one is forthcoming. The FCC should release all the bidding information and lift the gag rule for all blocks, including the public safety D block, so that we can examine what went wrong. In the meantime, the FCC should take no additional action regarding the licensing of the D block until Congress can consider next steps."
Public Knowledge says there are several significant questions that need to be examined. Perhaps one reason the auction did not succeed is the rules and regulations of the public-private partnership itself.
Then there is the question of when a company should cede its bandwidth to the local government, or to the federal government. Art Brodsky, Public Knowledge's communications director, believes that "there's a lot of murkiness there, which has to be made clear."
Winners of Airwaves Auction Announced
The nation's cell phone companies won big in a record-setting government airways auction, the Federal Communications Commission announced Thursday.
AT&T Inc. and Verizon Wireless, the nation's two biggest cell phone carriers, bid a combined $16 billion of the record $19.6 billion pledged in the auction, according to an AP analysis of the results. Verizon Wireless bid $9.4 billion while AT&T Inc. bid $6.6 billion.
The results raised concern that the auction had failed to attract any new competitors to the cellular telephone market to challenge the dominant carriers.
Google Inc. was not among the winners, meaning the search engine giant will not be entering the wireless business.
However, one new entrant, Frontier Wireless LLC, which is owned by EchoStar Communications Inc., won nearly enough licenses to create a nationwide footprint.
The auction, overseen by the FCC, attracted a record $19.6 billion in bids. Bidders were anonymous, but the agency released the names Thursday.
Verizon Wireless, a joint venture between Verizon Communications Inc. and British telecom giant Vodafone Group, won nearly every license in the consumer-friendly "C block."
The spectrum, which encompasses about a third of the spectrum at auction, is subject to "open access" provisions pushed by FCC Chairman Kevin Martin, meaning users of the network will be able to use whatever phones or software they wish.
Verizon won regional licenses in the block covering every state with the exception of Alaska.
Google posted a package bid for the C block licenses early in the auction, ensuring that the open-access provision would be put in place, but it was not enough to win.
Also Thursday, Martin said he had ordered an investigation into the circumstances surrounding the failure of a block of airwaves to be used for a nationwide emergency communications network to attract a winning bidder.
Competition Fuels Broadband Use in Europe
Fierce competition from new providers has pushed the level of broadband subscriptions in eight European countries above the levels in the United States and Japan, according to figures to be released Wednesday.
Growth could accelerate further if the European Commission succeeds in a drive to jolt those countries still dominated by former state monopolies, according to the top telecommunications regulator in Brussels.
The commission says the European Union added 19 million broadband lines in 2007, the equivalent of more than 50,000 households per day.
“We have four countries that are world leaders — Sweden, Denmark, the Netherlands and Finland,” said Viviane Reding, the European telecommunications commissioner. “We have eight countries which have higher penetration rates than the U.S. and Japan. We are not doing badly at all.”
In addition to the three Nordic countries and the Netherlands, four others — Britain, Belgium, Luxembourg and France — had surpassed the United States by July 2007. By January 2008, Germany had also done so.
In an interview Tuesday, Ms. Reding vowed to press ahead with an effort to give regulators powers to force the so-called incumbent telecommunications companies to run their businesses in a way that would make it easier for new competitors to enter the market. In countries like Germany and France, former state monopolies have fought fiercely against such a move.
The European telecommunications market is now worth 300 billion euros (about $470 billion), or 2 percent of European gross domestic product, the commission says.
“Those who say that Europe is not a place of investment should cotton on,” Ms. Reding said. “It has roughly the same level of investment as the U.S. and the same as Russia and Japan together.”
Half of the European Union countries could match the United States in broadband use by 2010, Ms. Reding said, if regulators take a tough stance to pry markets open. European Union broadband rates vary from 35.6 percent in Denmark to 7.6 percent in Bulgaria. The United States level was 22.1 percent as of July 2007, according to the Organization for Economic Cooperation and Development.
Ms. Reding emphasized her determination to encourage greater competition in the market and to give regulators the power to force “functional separation” — obliging the owners of telecommunications networks to free the networks from their operating divisions.
In seven member states, more than 60 percent of the broadband market is in the hands of incumbents, she said.
“The dynamic market force is new entrants,” Ms. Reding added.
That analysis was disputed by the European Telecommunications Network Operators’ Association, which represents the big operators.
“The report clearly shows that, overall, existing tools with the regulatory framework are sufficient to achieve continuously increasing competition on markets for the benefits of consumers,” Michael Bartholomew, director of the association, said in a statement. “The real challenge that now needs to be tackled is to foster infrastructure-based competition and encourage the deployment of high-speed access networks. Functional separation is not the right answer to encourage this risky investment.”
Asked about the recent increase in broadband penetration in Germany, Ms. Reding said it had occurred only under pressure from Brussels to encourage competition. “The German regulator was rather passive,” she said. “After I pushed him, he started to push his market.”
The commissioner said vast differences remained within the market, even in Western Europe. While French broadband penetration rates are reasonably similar in cities and rural areas, there is a big disparity between the two in Germany.
And asked when half the European Union would match United States broadband penetration rates, she replied, “With independent regulators who are not afraid to use the remedy of functional separation where other solutions don’t work, we will be there by 2010.”
The commission is battling to win agreement on a new European telecommunications regulator, a plan opposed by many big networks and several governments. Under the proposals, a European authority made up of national regulators could enforce functional separation.
Ms. Reding said that she was confident that her ideas would prevail and pointed out that opponents predicted that her initiative to cap mobile phone roaming charges, which has been enacted, would never work. “They said, when I started on roaming charges, that it was a dead duck,” she said.
The commissioner said she welcomed a proposal from the British regulator, Ofcom, which suggested several amendments to her proposal.
And she accused opponents of functional separation of special pleading. “Those who have a stronghold will do everything so that regulation doesn’t bite,” she said. “Those who understand what is in the interests of investment and consumers will want these reforms to go ahead.”
Study: Violent Crime Caused by Family Violence, not Videogames
The debate over videogames and violence shows no signs of abating, and despite having grown a little weary of the topic, I'll admit that there's a part of me that remains ever intrigued by the latest research in this area. So in that spirit, here's another scientific article to further fuel the discussion. This one comes from the March issue of Criminal Justice and Behavior and presents the results of two studies examining the relationship between videogames and real life violence.
In the first study, a group of students were randomly assigned to play either Medal of Honor: Allied Assault or Myst III: Exile. (Why these researchers can't find better and more recent games for these studies is beyond me.) A comparison group was allowed to choose which of these games to play based on a written description. After playing, the subjects participated in a laboratory task designed to measure aggression. From the article:
"Individuals who played Medal of Honor were no more aggressive after playing than were individuals who played Myst III....[A]lthough males appeared to prefer to play violent video games relative to females, there was no evidence from this study to suggest that people who prefer violent video games are more innately aggressive than those who do not..."
In the second study, several hundred students filled out questionnaires concerning their levels of exposure to family violence, past criminal behavior, aggression, and videogame playing habits. The results were then analyzed statistically to see which of these factors, if any, predicted violent behavior. From the article:
"[O]nce exposure to family violence was controlled, direct exposure to violent video games did not hold any predictive power regarding the commission of violent crimes. The results did suggest, however, that the interaction between aggressive personality and violent-video-game exposure is predictive of violent crime."
What this means is that even though playing violent videogames does not appear to predict violent crime, there is evidence that a certain percentage of highly aggressive people are drawn to violent videogames. This isn't very surprising. It makes a lot of sense that people who are already aggressive, perhaps because of a violent family environment, would be more likely to seek out violent media, whether that be movies, music, or videogames.
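The logic of "controlling" for family violence can be illustrated with a toy simulation. This is a hedged sketch on synthetic data, not the study's data or method: here crime is driven only by family violence, but game exposure is correlated with it, so a naive regression on games alone finds a spurious effect that disappears once the confounder is included.

```python
# Synthetic illustration of statistical control; all data are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
family_violence = rng.normal(size=n)
# Game exposure is correlated with family violence (the confounder)...
game_exposure = 0.8 * family_violence + rng.normal(size=n)
# ...but crime depends only on family violence, not on games.
crime = 1.0 * family_violence + rng.normal(size=n)

def ols(predictors, y):
    # Ordinary least squares with an intercept column prepended.
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols([game_exposure], crime)                      # games only
controlled = ols([game_exposure, family_violence], crime)  # games + confounder

print(f"games coefficient, naive:      {naive[1]:.2f}")
print(f"games coefficient, controlled: {controlled[1]:.2f}")
```

The naive model attributes a substantial coefficient to game exposure; adding family violence as a predictor drives that coefficient to roughly zero, which mirrors the study's finding that games lose predictive power once family violence is controlled.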
Of course, this article won't end the videogame violence debate, but it still makes for an interesting read and a welcome addition to the scientific literature on the topic.
Florida Supreme Court Sanctions Jack Thompson
The Florida Supreme Court will no longer accept anything directly from controversial Miami attorney Jack Thompson. If Thompson wants to file with the Court, he’s going to need to hire another lawyer to do it on his behalf.
That’s the ruling handed down by the Court late this morning.
As GamePolitics reported last month, the Florida Supreme Court issued a show cause order demanding that Thompson explain why the sanction shouldn’t be issued. Clearly, Thompson’s multiple responses did nothing to change the collective mind of the Justices.
The following passages are quoted from the order issued by the Court:
After submitting inappropriate and pornographic materials to this Court, Thompson was specifically warned that should he continue to submit inappropriate filings, this Court would consider imposing a sanction limiting Thompson’s ability to submit further filings…
Since that order, Thompson has filed numerous additional filings which led this Court to issue an order directing Thompson to show cause why we should not limit his filings… We now sanction Thompson…
Thompson engaged, to the point of abuse… in a relentless and frivolous pursuit for vindication of his claim that he is being victimized by The Florida Bar…
Thompson’s multiple responses are rambling, argumentative, and contemptuous… What we cannot tolerate, however, is Thompson’s continued inability to maintain a minimum standard of decorum and respect for the judicial system to which all litigants, and especially attorneys, must adhere…
A thorough review of Thompson’s filings lead to one conclusion. He has abused the processes of the Court… Accordingly… the Clerk of this Court is hereby instructed to reject for filing any future pleadings, petitions, motions, documents, or other filings submitted by John Bruce Thompson, unless signed by a member in good standing of The Florida Bar other than himself.
…Further, if Thompson submits a filing in violation of this order, he may be subjected to contempt proceedings or other appropriate sanctions.
For his part, Thompson has sent out an e-mail describing the Supreme Court’s order as both “good news” and “idiotic.”
GP: Just to be clear, this sanction imposed by the Florida Supreme Court is a separate issue from Thompson’s Bar Trial which we have been reporting on in a series of articles this week. No decision has been rendered by the referee in the Bar Trial as of yet.
UPDATE: No one has ever accused Jack Thompson of knowing when to shut up, and he has, in the hours since the Florida Supreme Court edict, filed two new motions with the Court.
Today’s order, of course, warns that he will face contempt charges should he file anything under his own name, so that step would now appear inevitable.
Britain Reverses Ban on Bloody Video Game
The same British ratings board that banned “Manhunt 2”, a video game noted for its “sustained and cumulative casual sadism,” has grudgingly approved the game for sale to anyone over 18 years of age.
In announcing the decision, nine months after its initial ruling, David Cooke, head of the British Board of Film Classification, reiterated his group’s opinion that the game “posed a real potential harm risk,” GamesIndustry.biz said. But several rounds of legal wrangling left the board “no alternative but to issue an ‘18’ certificate to the game.”
Rockstar Games issued a statement saying that the company was “pleased” that the game would finally reach British shelves. The company also vowed to market it “responsibly.” “Manhunt 2” will be issued in a less bloody form than originally submitted, though hackers have found a way to negate similar changes in some U.S. versions.
The game seems to have landed a violent blow against Britain’s current system of regulating video games, according to Darren Waters of BBC News. With its “credibility bruised and battered” after being forced to reverse its initial decision, “the board’s role as the body which classifies games is now under definite scrutiny,” he wrote.
For Rockstar, the win comes at a strange time. Its next blockbuster — “Grand Theft Auto IV” — has presold more than 66 million copies, but the company may be acquired by a rival video game maker before the game hits shelves next month. In this morning’s New York Times, Matt Richtel reports the latest in the ongoing saga.
Appeals Court Overrules Minnesota Law Keeping Kids from Renting, Buying Violent Video Games
A federal appeals court on Monday upheld an injunction against a Minnesota law that targeted children under 17 who rent or buy violent video games.
A three-judge panel of the 8th Circuit U.S. Court of Appeals agreed with a lower-court judge that Minnesota went too far when it passed its law two years ago because the state couldn't prove that such games hurt children.
The law would have hit kids under 17 with a $25 fine if they rented or bought a video game rated "M" for mature or "AO" for adults only. It also would have required stores to put up signs warning of the fines.
Game makers and retailers swiftly challenged the law, arguing it was an unconstitutional restriction of free speech. U.S. District Judge James Rosenbaum ruled in their favor in July 2006.
But the appellate opinion, written by Judge Roger L. Wollman, showed the judges weren't entirely happy about it.
"Whatever our intuitive (dare we say commonsense) feelings regarding the effect" of violent video games, precedent requires undeniable proof that such violence causes psychological dysfunction, Wollman wrote.
"The requirement of such a high level of proof may reflect a refined estrangement from reality, but apply it we must," he wrote.
Law is Little Help to Parents Fighting Use of Children's Photos on Web
An Orange County mother was shocked last fall to discover that someone had photographed her 13-year-old son in his tight-fitting swimsuit at a high school water polo meet and posted the image on an adult Web site that invited lewd comments.
Scouring the Internet, parents soon found other such photos, hundreds of them. But their horror turned to disbelief upon learning that police could do little to stop the practice.
The Orange County parents banded together to raise a ruckus that has launched a law enforcement review, prompted legislation to crack down on the practice and prodded debate pitting constitutional rights against children's privacy.
"It's disgusting because they're victimizing kids," said Joan Gould, a spokeswoman for the group, including the mother whose discovery sparked the outcry. "It's demoralizing to young kids."
California's penal code does not specifically ban such photography, which is protected by free-speech rights, because the photos themselves are not lewd and are taken at school athletic events open to the public.
"I think the most frustrating thing for all of the parents is finding out that it's legal," said Gould, a national water polo spokeswoman who helped the parents investigate the incident.
The uproar is part of a much broader issue, the marriage of Internet and digital camera technology that allows photographs to be transmitted worldwide at the push of a button.
Lena Smyth, co-founder of Mothers Against Sexual Predators, said there are numerous variations on the same exploitative theme: titillating photos of children sent from cellular phone to cellular phone; young girls' photos posted and rated on a pedophile's site; self-titled "art" Web sites that charge a monthly fee for access to children's photos.
"There are certain areas where the law has not kept up with technology," she said.
The power of the Internet to create an overnight sensation, without permission, was demonstrated last year when sports blogs and other Web sites posted photos of Allison Stokke, a high school pole vaulter, competing in a standard spandex uniform with bare midriff. A Google search of her name now generates 334,000 results.
Attorney Allan Stokke, Allison's father, said she was irritated and "there may have been crude people saying crude things." But, he added, "I don't see any legal way to stop that sort of thing."
Assemblyman Cameron Smyth, R-Valencia - whose wife is Lena Smyth of Mothers Against Sexual Predators - has proposed Assembly Bill 2104 to outlaw the posting of a minor's photo, without consent, on a Web site containing obscene matter. Violators would be jailed for one year and fined $5,000.
Calvin Massey, a professor of constitutional law at the University of California Hastings College of the Law in San Francisco, has not read AB 2104 but said its approach might well survive legal challenge because it does not restrict access or photography at school events.
Government cannot simply bar speech it finds offensive, Massey said, but "I think the depiction of minors in a context in which it's pandering to those who are interested in child pornography is an adequate justification."
But adult Web sites can be based anywhere, photographers often aren't known, and images might have been stolen or changed hands repeatedly before posting.
"The intent, I think, is worthwhile," Los Angeles Sheriff's Sgt. Wayne Bilowit said of AB 2104.
"But I don't know how it plays out in the real world."
FBI Posts Fake Hyperlinks to Snare Child Porn Suspects
The FBI has recently adopted a novel investigative technique: posting hyperlinks that purport to be illegal videos of minors having sex, and then raiding the homes of anyone willing to click on them.
Undercover FBI agents used this hyperlink-enticement technique, which directed Internet users to a clandestine government server, to stage armed raids of homes in Pennsylvania, New York, and Nevada last year. The supposed video files actually were gibberish and contained no illegal images.
A CNET News.com review of legal documents shows that courts have approved of this technique, even though it raises questions about entrapment, the problems of identifying who's using an open wireless connection--and whether anyone who clicks on an FBI link that contains no child pornography should be automatically subject to a dawn raid by federal police.
Roderick Vosburgh, a doctoral student at Temple University who also taught history at La Salle University, was raided at home in February 2007 after he allegedly clicked on the FBI's hyperlink. Federal agents knocked on the door around 7 a.m., falsely claiming they wanted to talk to Vosburgh about his car. Once he opened the door, they threw him to the ground outside his house and handcuffed him.
Vosburgh was charged with violating federal law, which criminalizes "attempts" to download child pornography, punishable by up to 10 years in prison. Last November, a jury found Vosburgh guilty on that count, and a sentencing hearing is scheduled for April 22, at which point Vosburgh could face three to four years in prison.
The implications of the FBI's hyperlink-enticement technique are sweeping. Using the same logic and legal arguments, federal agents could send unsolicited e-mail messages to millions of Americans advertising illegal narcotics or child pornography--and raid people who click on the links embedded in the spam messages. The bureau could register the "unlawfulimages.com" domain name and prosecute intentional visitors. And so on.
"The evidence was insufficient for a reasonable jury to find that Mr. Vosburgh specifically intended to download child pornography, a necessary element of any 'attempt' offense," Vosburgh's attorney, Anna Durbin of Ardmore, Penn., wrote in a court filing that is attempting to overturn the jury verdict before her client is sentenced.
In a telephone conversation on Wednesday, Durbin added: "I thought it was scary that they could do this. This whole idea that the FBI can put a honeypot out there to attract people is kind of sad. It seems to me that they've brought a lot of cases without having to stoop to this."
Durbin did not want to be interviewed more extensively about the case because it is still pending; she's waiting for U.S. District Judge Timothy Savage to rule on her motion. Unless he agrees with her and overturns the jury verdict, Vosburgh--who has no prior criminal record--will be required to register as a sex offender for 15 years and will be effectively barred from continuing his work as a college instructor after his prison sentence ends.
How the hyperlink sting operation worked
The government's hyperlink sting operation worked like this: FBI Special Agent Wade Luders disseminated links to the supposedly illicit porn on an online discussion forum called Ranchi, which Luders believed was frequented by people who traded underage images. One server allegedly associated with the Ranchi forum was rangate.da.ru, which is now offline with a message attributing the closure to "non-ethical" activity.
In October 2006, Luders posted a number of links purporting to point to videos of child pornography, and then followed up with a second, supposedly correct link 40 minutes later. All the links pointed to, according to a bureau affidavit, a "covert FBI computer in San Jose, California, and the file located therein was encrypted and non-pornographic."
Some of the links, including the supposedly correct one, included the hostname uploader.sytes.net. Sytes.net is hosted by no-ip.com, which provides dynamic domain name service to customers for $15 a year.
When anyone visited the uploader.sytes.net site, the FBI recorded the Internet Protocol address of the remote computer. There's no evidence the referring site was recorded as well, meaning the FBI couldn't tell if the visitor found the links through Ranchi or another source such as an e-mail message.
With the logs revealing those allegedly incriminating IP addresses in hand, the FBI sent administrative subpoenas to the relevant Internet service provider to learn the identity of the person whose name was on the account--and then obtained search warrants for dawn raids.
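The basic mechanics described above, a web server that records each visitor's IP address (and could, in principle, record the Referer header revealing how the visitor found the link), can be sketched in a few lines. This is purely illustrative and hypothetical; the affidavit does not disclose the FBI's actual implementation, and all names here are invented.

```python
# Hypothetical sketch of a link-logging server; not the actual FBI setup.
import http.server

class LinkLogger(http.server.BaseHTTPRequestHandler):
    hits = []  # collected (client_ip, requested_path, referrer) tuples

    def do_GET(self):
        # The Referer header would show where the visitor found the link;
        # per the article, it apparently was not logged in the real operation.
        referrer = self.headers.get("Referer", "(not recorded)")
        LinkLogger.hits.append((self.client_address[0], self.path, referrer))
        # Serve non-pornographic gibberish, as the affidavit describes.
        body = b"\x00\x01encrypted-junk"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence default console logging for the sketch
```

Pointed at by a hyperlink, such a handler yields exactly the kind of log the article describes: an IP address per click, which can then be matched to a subscriber via an ISP subpoena.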
Excerpt from FBI affidavit in Nevada case that shows visits to the hyperlink-sting site.
The search warrants authorized FBI agents to seize and remove any "computer-related" equipment, utility bills, telephone bills, any "addressed correspondence" sent through the U.S. mail, video gear, camera equipment, checkbooks, bank statements, and credit card statements.
While it might seem that merely clicking on a link wouldn't be enough to justify a search warrant, courts have ruled otherwise. On March 6, U.S. District Judge Roger Hunt in Nevada agreed with a magistrate judge that the hyperlink-sting operation constituted sufficient probable cause to justify giving the FBI its search warrant.
The defendant in that case, Travis Carter, suggested that any of the neighbors could be using his wireless network. (The public defender's office even sent out an investigator who confirmed that dozens of homes were within Wi-Fi range.)
But the magistrate judge ruled that even the possibilities of spoofing or other users of an open Wi-Fi connection "would not have negated a substantial basis for concluding that there was probable cause to believe that evidence of child pornography would be found on the premises to be searched." Translated, that means the search warrant was valid.
Entrapment: Not a defense
So far, at least, attorneys defending the hyperlink-sting cases do not appear to have raised unlawful entrapment as a defense.
"Claims of entrapment have been made in similar cases, but usually do not get very far," said Stephen Saltzburg, a professor at George Washington University's law school. "The individuals who chose to log into the FBI sites appear to have had no pressure put upon them by the government...It is doubtful that the individuals could claim the government made them do something they weren't predisposed to doing or that the government overreached."
The outcome may be different, Saltzburg said, if the FBI had tried to encourage people to click on the link by including misleading statements suggesting the videos were legal or approved.
In the case of Vosburgh, the college instructor who lived in Media, Penn., his attorney has been left to argue that "no reasonable jury could have found beyond a reasonable doubt that Mr. Vosburgh himself attempted to download child pornography."
Vosburgh faced four charges: clicking on an illegal hyperlink; knowingly destroying a hard drive and a thumb drive by physically damaging them when the FBI agents were outside his home; obstructing an FBI investigation by destroying the devices; and possessing a hard drive with two grainy thumbnail images of naked female minors (the youths weren't having sex, but their genitalia were visible).
The judge threw out the third count and the jury found him not guilty of the second. But Vosburgh was convicted of the first and last counts, which included clicking on the FBI's illicit hyperlink.
In a legal brief filed on March 6, his attorney argued that the two thumbnails were in a hidden "thumbs.db" file automatically created by the Windows operating system. The brief said that there was no evidence that Vosburgh ever viewed the full-size images--which were not found on his hard drive--and the thumbnails could have been created by receiving an e-mail message, copying files, or innocently visiting a Web page.
From the FBI's perspective, clicking on the illicit hyperlink and having a thumbs.db file with illicit images are both serious crimes. Federal prosecutors wrote: "The jury found that defendant knew exactly what he was trying to obtain when he downloaded the hyperlinks on Agent Luder's Ranchi post. At trial, defendant suggested unrealistic, unlikely explanations as to how his computer was linked to the post. The jury saw through the smokes (sic) and mirrors, as should the court."
And, as for the two thumbnail images, prosecutors argued (note that under federal child pornography law, the definition of "sexually explicit conduct" does not require that sex acts take place):
The first image depicted a pre-pubescent girl, fully naked, standing on one leg while the other leg was fully extended leaning on a desk, exposing her genitalia... The other image depicted four pre-pubescent fully naked girls sitting on a couch, with their legs spread apart, exposing their genitalia. Viewing this image, the jury could reasonably conclude that the four girls were posed in unnatural positions and the focal point of this picture was on their genitalia.... And, based on all this evidence, the jury found that the images were of minors engaged in sexually explicit conduct, and certainly did not require a crystal clear resolution that defendant now claims was necessary, yet lacking.
Prosecutors also highlighted the fact that Vosburgh visited the "loli-chan" site, which has in the past featured a teenage Webcam girl holding up provocative signs (but without any nudity).
Civil libertarians warn that anyone who clicks on a hyperlink advertising something illegal--perhaps found while Web browsing or received through e-mail--could face the same fate.
When asked what would stop the FBI from expanding its hyperlink sting operation, Harvey Silverglate, a longtime criminal defense lawyer in Cambridge, Mass. and author of a forthcoming book on the Justice Department, replied: "Because the courts have been so narrow in their definition of 'entrapment,' and so expansive in their definition of 'probable cause,' there is nothing to stop the Feds from acting as you posit."
Porn-Friendly .xxx Domain Backer Loses Suit Against Federal Agencies
The company behind the proposed .xxx top-level domain, which was rejected after the Bush administration intervened, has been trying to dig up embarrassing government documents through a federal lawsuit.
Make that "was trying." A federal judge on March 12 granted summary judgment to the Bush administration in the Freedom of Information Act lawsuit brought by the ICM Registry.
By way of background, ICM Registry had proposed the porn-friendly .xxx domain in 2004 to the Internet Corporation for Assigned Names and Numbers, four years after ICANN rejected the idea the first time. In June 2005, ICANN approved .xxx--but the Bush administration objected two months later, and ICANN's board subsequently reversed itself by a 9-5 vote.
ICM Registry's Stuart Lawley, an indefatigable entrepreneur who made his fortune by founding a U.K. Internet service provider, didn't give up. He filed a FOIA request to learn how conservative groups pressured the Bush administration, and he released the first round of documents in May 2006. But the State Department and Commerce Department withheld others--claiming they were part of an internal "deliberative process"--and those are the documents at issue in the current lawsuit.
Robert Corn-Revere, an attorney at Davis Wright Tremaine who is representing ICM Registry, said a lawsuit against ICANN for denying the .xxx top-level domain is now possible.
"ICM Registry is planning to examine and pursue all of its legal options," Corn-Revere said Tuesday. "We were waiting to see what the outcome of this FOIA litigation was."
ICM had argued that if there's reason to suspect government misconduct such as improper influence by Focus on the Family et al., the documents should be turned over straightaway (this is known as the misconduct exemption).
U.S. District Judge James Robertson in Washington, D.C., sided in part with ICM, saying that argument would be valid if the administration "opposed .xxx for nefarious purposes" but that none had been demonstrated. Robertson, however, didn't actually read the withheld FOIA'd documents for himself, which is something that ICM could raise, if it chooses to appeal.
The key part of Robertson's ruling is:
Whatever the boundaries of the misconduct exception, they cannot be as expansive as ICM declares them to be...Absent some showing that consideration of domain name and Internet policy is outside these departments' and agencies' domains--and none has been made--or that they opposed .xxx for nefarious purposes, their action is not misconduct within the meaning of the exception to the deliberative process privilege.
If the government "leaned on" ICANN or any other decision maker that it did not directly control, (its) policy choice to do so is discoverable under FOIA. That choice (if it was made) was not "political abuse," however, and so the deliberations that underlay it are properly exempt from disclosure.
Descriptions of some of the documents that were not released certainly make their contents sound intriguing. One is a State Department staffer's response to a Wall Street Journal article about alternatives to the domain name system; others deal with meetings between the State Department and a delegation from Japan.
On the Commerce Department side, the documents--again, these have not been released in full--include:
- Document EP59: This is an e-mail containing a Commerce employee's opinions regarding the effects of a .xxx domain on children's access to pornography.
- Document EP61: This reflects the opinions of Commerce Department employees on the effects of .xxx on children's access to pornography and on what Web sites would be in a .xxx domain.
- Documents EP90-92: These are part of an e-mail chain, and the redactions relate to the opinions of Commerce employees on how to present Commerce's role in making changes to the authoritative root zone file to the public.
- Documents EP125-126: These are e-mails, and the redactions include the opinions of a Ms. Atwell on the roles of ICANN and Commerce in the approval of the .xxx domain, and her opinions regarding control of children's access to pornography. (By Atwell, this presumably means Meredith Atwell Baker, then the deputy assistant secretary for communications and information in the National Telecommunications and Information Administration.)
- Documents EP46, EP98: These are e-mails from which questions and opinions of White House employee Helen Domenici have been redacted. These include the status of .xxx approval, as well as opinions regarding approval. (Domenici is listed as Assistant Associate Director for Telecommunications and Information Technology in the White House's Office of Science and Technology Policy.)
- Documents 000001-000007: These are all drafts of a document entitled "USG/DOC Options Regarding GAC Consideration of the Proposed .xxx Domain." (Because it's not a final policy and contains options not acted on, the judge ruled, it can't be obtained through FOIA.)
- Documents 000008-000010: These are all drafts of a document entitled "USG Opinions for Including .xxx in the Authoritative Root Zone file."
- Document 000016: This is a draft entitled "USG Procedural Options Regarding the Creation of .xxx."
- Document 000019: This is a draft entitled "The Department of Commerce's Role in Additions to the Internet Domain Name System Authoritative Root Zone File."
To be sure, the Commerce and State Departments did release a good number of documents in redacted and non-redacted form. Corn-Revere said: "I expect we will be releasing the other documents we have."
But those additional files could have provided a valuable glimpse into how much pressure conservative groups applied--and into what the U.S. government thinks of ICANN and its own role in approving new domain name suffixes. Unfortunately, given the limitations of FOIA and a judge who was unwilling to go along with ICM's arguments, we may never know the whole story.
Put Young Children on DNA List, Urge Police
Mark Townsend and Anushka Asthana
• 'We must target potential offenders'
• Teachers' fury over 'dangerous' plan
Primary school children should be eligible for the DNA database if they exhibit behaviour indicating they may become criminals in later life, according to Britain's most senior police forensics expert.
Gary Pugh, director of forensic sciences at Scotland Yard and the new DNA spokesman for the Association of Chief Police Officers (Acpo), said a debate was needed on how far Britain should go in identifying potential offenders, given that some experts believe it is possible to identify future offending traits in children as young as five.
'If we have a primary means of identifying people before they offend, then in the long-term the benefits of targeting younger people are extremely large,' said Pugh. 'You could argue the younger the better. Criminologists say some people will grow out of crime; others won't. We have to find who are possibly going to be the biggest threat to society.'
Pugh admitted that the deeply controversial suggestion raised issues of parental consent, potential stigmatisation and the role of teachers in identifying future offenders, but said society needed an open, mature discussion on how best to tackle crime before it took place. There are currently 4.5 million genetic samples on the UK database - the largest in Europe - but police believe more are required to reduce crime further. 'The number of unsolved crimes says we are not sampling enough of the right people,' Pugh told The Observer. However, he said the notion of universal sampling - everyone being forced to give their genetic samples to the database - is currently prohibited by cost and logistics.
Civil liberty groups condemned his comments last night by likening them to an excerpt from a 'science fiction novel'. One teaching union warned that it was a step towards a 'police state'.
Pugh's call for the government to consider options such as placing primary school children who have not been arrested on the database is supported by elements of criminological theory. A well-established pattern of offending involves relatively trivial offences escalating to more serious crimes. Senior Scotland Yard criminologists are understood to be confident that techniques are able to identify future offenders.
A recent report from the think-tank Institute for Public Policy Research (IPPR) called for children to be targeted between the ages of five and 12 with cognitive behavioural therapy, parenting programmes and intensive support. Prevention should start young, it said, because prolific offenders typically began offending between the ages of 10 and 13. Julia Margo, author of the report, entitled 'Make me a Criminal', said: 'You can carry out a risk factor analysis where you look at the characteristics of an individual child aged five to seven and identify risk factors that make it more likely that they would become an offender.' However, she said that placing young children on a database risked stigmatising them by identifying them in a 'negative' way.
Shami Chakrabarti, director of the civil rights group Liberty, denounced any plan to target youngsters. 'Whichever bright spark at Acpo thought this one up should go back to the business of policing or the pastime of science fiction novels,' she said. 'The British public is highly respectful of the police and open even to eccentric debate, but playing politics with our innocent kids is a step too far.'
Chris Davis, of the National Primary Headteachers' Association, said most teachers and parents would find the suggestion an 'anathema' and potentially very dangerous. 'It could be seen as a step towards a police state,' he said. 'It is condemning them at a very young age to something they have not yet done. They may have the potential to do something, but we all have the potential to do things. To label children at that stage and put them on a register is going too far.'
Davis admitted that most teachers could identify children who 'had the potential to have a more challenging adult life', but said it was the job of teachers to support them.
Pugh, though, believes that measures to identify criminals early would save the economy huge sums - violent crime alone costs the UK £13bn a year - and significantly reduce the number of offences committed. However, he said the British public needed to move away from regarding anyone on the DNA database as a criminal and accepted it was an emotional issue.
'Fingerprints, somehow, are far less contentious,' he said. 'We have children giving their fingerprints when they are borrowing books from a library.'
Last week it emerged that the number of 10 to 18-year-olds placed on the DNA database after being arrested will have reached around 1.5 million this time next year. Since 2004 police have had the power to take DNA samples from anyone over the age of 10 who is arrested, regardless of whether they are later charged, convicted, or found to be innocent.
Concern over the issue of civil liberties will be further amplified by news yesterday that commuters using Oyster smart cards could have their movements around cities secretly monitored under new counter-terrorism powers being sought by the security services.
Meet the new Diebold, same as the old Diebold
Interesting Email from Sequoia
A copy of an email I received has been passed around on various mailing lists. Several people, including reporters, have asked me to confirm its authenticity. Since everyone seems to have read it already, I might as well publish it here. Yes, it is genuine.
Sender: Smith, Ed [address redacted]@sequoiavote.com
To: firstname.lastname@example.org, email@example.com
Subject: Sequoia Advantage voting machines from New Jersey
Date: Fri, Mar 14, 2008 at 6:16 PM
Dear Professors Felten and Appel:
As you have likely read in the news media, certain New Jersey election officials have stated that they plan to send to you one or more Sequoia Advantage voting machines for analysis. I want to make you aware that if the County does so, it violates their established Sequoia licensing Agreement for use of the voting system. Sequoia has also retained counsel to stop any infringement of our intellectual properties, including any non-compliant analysis. We will also take appropriate steps to protect against any publication of Sequoia software, its behavior, reports regarding same or any other infringement of our intellectual property.
Very truly yours,
Sequoia Voting Systems
[contact information and boilerplate redacted]
Plan for Voting Machine Probe Dropped after Lawsuit Threat
Diane C. Walsh
Union County has backed off a plan to let a Princeton University computer scientist examine voting machines where errors occurred in the presidential primary tallies, after the manufacturer of the machines threatened to sue, officials said today.
A Sequoia executive, Edwin Smith, put Union County Clerk Joanne Rajoppi on notice that an independent analysis would violate the licensing agreement between his firm and the county. In a terse two-page letter Smith also argued the voting machine software is a Sequoia trade secret and cannot be handed over to any third party.
Last week Rajoppi persuaded the statewide clerk's association to have an independent study of the machines done by Edward Felten, a professor of computer science and public affairs at Princeton University. The Constitutional Officers Association of New Jersey called for the independent review to ensure the integrity of the election process.
Sequoia maintains the errors, which were documented in at least five counties, occurred due to mistakes by poll workers. The firm, which is based in Colorado, examined machines in Middlesex County and concluded that poll workers had pushed the wrong buttons on the control panels, resulting in errors in the numbers of ballots cast.
But officials found it odd that such an error never occurred before and the clerk's association wanted further testing.
On the advice of the county's attorneys, however, Rajoppi said today she must forgo all plans for independent analysis.
That upset Penny Venetis, a Rutgers University law professor representing a group of activists trying to have electronic voting machines scrapped.
"We shouldn't have a corporation dictating how elections are run in the state," Venetis said. "If an elected official believes there was an anomaly and the matter has to be investigated, then the official should be able to consult with computer experts without interference."
The Union County clerk said she intends to write to the state Attorney General's Office again in hopes of convincing the state to call for an independent study. The attorney general oversees the election process.
Evidence of New Jersey Election Discrepancies
Press reports on the recent New Jersey voting discrepancies have been a bit vague about the exact nature of the evidence that showed up on election day. What has the county clerks, and many citizens, so concerned? Today I want to show you some of the evidence.
The evidence is a “summary tape” printed by a Sequoia AVC Advantage voting machine in Hillside, New Jersey when the polls closed at the end of the presidential primary election. The tape is timestamped 8:02 PM, February 5, 2008.
The summary tape is printed by poll workers as part of the ordinary procedure for closing the polls. It is signed by several poll workers and sent to the county clerk along with other records of the election.
Let me show you closeups of two sections of the tape. (Here’s the full tape, in TIF format.)
You can see the vote totals on this machine for each candidate. On the Democratic side, the tally is Obama 182, Clinton 179. On the Republican side it’s Giuliani 1, Romney 13, McCain 40, Paul 3, Huckabee 4.
Above is the “Option Switch Totals” section, which shows the number of times each party’s ballot was activated: 362 Democratic and 60 Republican.
This doesn’t add up. The machine says the Republican ballot was activated 60 times; but it shows a total of 61 votes cast for Republican candidates. It says the Democratic ballot was activated 362 times; but it shows a total of 361 votes for Democratic candidates. (New Jersey has a closed primary, so voters can cast ballots only in their own registered party.)
What’s alarming here is not the size of the discrepancy but its nature. This is a single voting machine, disagreeing with itself about how many Republicans voted on it. Imagine your pocket calculator couldn’t make up its mind whether 1+13+40+3+4 was 60 or 61. You’d be pretty alarmed, and you wouldn’t trust your calculator until you were very sure it was fixed. Or you’d get a new calculator.
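The cross-check the author performs is simple arithmetic: for each party, the candidate totals printed on the tape should equal the number of times that party's ballot was activated. A minimal sketch, using the figures from the Hillside tape quoted above (the function and variable names are mine, for illustration):

```python
# Cross-check a voting machine summary tape: candidate vote totals
# should equal the number of times each party's ballot was activated.
republican_votes = {"Giuliani": 1, "Romney": 13, "McCain": 40, "Paul": 3, "Huckabee": 4}
democratic_votes = {"Obama": 182, "Clinton": 179}

# "Option Switch Totals" section of the same tape: ballot activations per party.
option_switch_totals = {"Republican": 60, "Democratic": 362}

def check(party, votes, activations):
    """Compare summed candidate votes against ballot activations for one party."""
    total = sum(votes.values())
    if total == activations:
        return f"{party}: OK ({total} votes, {activations} activations)"
    return f"{party}: MISMATCH ({total} votes vs {activations} activations)"

print(check("Republican", republican_votes, option_switch_totals["Republican"]))
print(check("Democratic", democratic_votes, option_switch_totals["Democratic"]))
```

Run against the Hillside figures, both parties come up as mismatches (61 vs 60 Republican, 361 vs 362 Democratic), which is exactly the internal inconsistency described above.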
This wasn’t an isolated instance, either. In Union County alone, at least eight other AVC Advantage machines exhibited similar problems, as did dozens more machines in other counties.
Sequoia, the vendor, is trying to prevent any independent investigation of what happened.
The New Iron Curtain
Why I can never go to America again
Since 2004, all visitors to the USA, even from countries that are normally considered among its closest friends, are fingerprinted on entry to the country.
Think about that for a moment. Fingerprinting. Why do that?
Well, you might want to create an infallible way of checking that a person who leaves the country is the same as the person who entered it under that name. I could understand that. But that's not the reason, because nobody rechecks the prints when we leave. And they certainly don't destroy the record after we've gone.
Or you might want to compare the prints against a database of people who have, for instance, been deported from the country or tried to enter it illegally. Again -- if that were the purpose, I'd have no problem with it. In that case, what you would do would be to get the visitor to place their finger on a scanner, then run the print against the established database. No need to store the visitor's record at all. But no, that doesn't happen either: the prints are collected and kept, apparently forever, by the KGB -- sorry, I mean DHS.
So what exactly are they doing with those prints?
Initially, the story was that they would be used solely for immigration control. These records were, we were reassured, incompatible with law enforcement databases. That reassurance lasted all of two years. And now visitors' fingerprints are indeed compared against the FBI's database. Way to make us feel welcome.
The only conclusion that seems to make sense is that, from the day I next set foot on US soil, my prints will routinely be scanned against every crime scene in America.
What's so frightening about that? If I don't do anything bad I have nothing to fear, right? And even if someone did make a mistake, surely I'd be safe if I wasn't even in the country at the time.
Historically, fingerprints worked well in crime detection because they were compared against those of a relatively small population, people who were already shortlisted as potential suspects. Generally, detectives would check records from people who'd previously been charged and/or convicted of similar crimes in the same general area. That's not a bad way of making a shortlist.
But now, with the integration of databases, that shortlist has grown, and at the same time it's getting ever easier to get onto it. The police generally fingerprint everyone they arrest, even if they are then released without charge -- and as far as I can tell, the records once collected are never, ever destroyed. Worse, if CSI and its relatives are to be believed, prints from crime scenes are now routinely compared against those taken from non-criminals, such as military personnel. I don't know if anyone's done a study into the connection between fingerprint identification and the ever-growing proportion of ex-service-people convicted of crimes, but I think it'd be an interesting subject.
If you're a foreigner, merely stepping off the plane is grounds for something that I can only see as a kind of low-grade arrest. The DHS's database is already larger than the FBI's, and the DHS itself is surprisingly coy about what it does with all that information. All I can find are vague platitudes about "making America safer". How, exactly, they do this is of course secret.
But the sordid truth about fingerprints is that they are not nearly as infallible as they're made out to be. The (very few) systematic trials of fingerprint identification have shown a frighteningly high level of false-positive identifications (where a fingerprint expert pronounced a match wrongly). And yet the vast majority of people -- even those in law enforcement who really should know better -- still believe that fingerprint identification is the gold standard of positive ID.
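The base-rate problem behind this argument can be made concrete with a back-of-the-envelope calculation. The database size and false-match rate below are illustrative assumptions of mine, not figures from the article; the point is only that even a tiny per-comparison error rate, multiplied across an enormous database, yields a steady stream of innocent "matches":

```python
# Back-of-the-envelope: expected false matches from one database-wide
# fingerprint search. Both numbers are illustrative assumptions.
database_size = 100_000_000   # prints on file (assumed)
false_match_rate = 1e-6       # chance an unrelated print is declared a match (assumed)

# Each search compares the crime-scene print against every record,
# so the expected number of innocent people flagged per search is:
expected_false_matches = database_size * false_match_rate
print(expected_false_matches)  # about 100 innocent "matches" per search
```

Under these assumptions, every crime-scene search would implicate roughly a hundred innocent people, which is why a "match" against a huge database of mostly law-abiding travelers means far less than a match against a shortlist of prior suspects.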
Worse: the US government has made it clear in recent years that being outside the country is no defence against its law-enforcement agencies. Last December, a senior lawyer for the US government told the Court of Appeal in London that the US Supreme Court had sanctioned kidnapping foreign nationals if they were wanted for crimes in the USA. No more messing about with judicial procedures and extradition agreements: if they want me, they can and will come and snatch me off any street in the world.
So what does that leave me with? A significant percentage chance, per year, of being "infallibly" identified as having been at the scene of some crime, at which point there's nothing standing between me and officially-sanctioned abduction. And, of course, absolutely no guarantee of access to a lawyer or court at any point of the process.
No thanks. My only defence, such as it is, is to keep my name the hell out of that database.
The USA is, of course, perfectly right to apply reasonable measures to defend its borders and its people. And I'm still, for the moment, at liberty not to go there.
What makes me sad is that, for the first thirty years of my life, I was accustomed to seeing travel grow easier. I revelled in the way the world was opening up to me. There was so much to do. In America there are dozens of sights I'd still love to see, friends I'd love to visit, places to walk or climb or just gape. I took it for granted that I'd always be able to visit the Grand Canyon, or San Francisco, or Seattle, Shiloh, Boston, Harper's Ferry, Jamestown, Yellowstone, Philadelphia, the Alamo on my next visits.
But now, as far as I'm concerned, those places might as well not exist any more. They're lost to me, probably forever. (Ironically, however, I'm now free to visit Hungary or Poland more or less at will.)
Technology has been perverted. Instead of making life easier, now it's used to erect new barriers and create new hazards. And that, I think, is a damn' shame.
Wiretapping's True Danger
History says we should worry less about privacy and more about political spying.
As the battle over reforms to the Foreign Intelligence Surveillance Act rages in Congress, civil libertarians warn that legislation sought by the White House could enable spying on "ordinary Americans." Others, like Sen. Orrin Hatch (R-Utah), counter that only those with an "irrational fear of government" believe that "our country's intelligence analysts are more concerned with random innocent Americans than foreign terrorists overseas."
But focusing on the privacy of the average Joe in this way obscures the deeper threat that warrantless wiretaps pose to a democratic society. Without meaningful oversight, presidents and intelligence agencies can -- and repeatedly have -- abused their surveillance authority to spy on political enemies and dissenters.
The original FISA law was passed in 1978 after a thorough congressional investigation headed by Sen. Frank Church (D-Idaho) revealed that for decades, intelligence analysts -- and the presidents they served -- had spied on the letters and phone conversations of union chiefs, civil rights leaders, journalists, antiwar activists, lobbyists, members of Congress, Supreme Court justices -- even Eleanor Roosevelt and the Rev. Martin Luther King Jr. The Church Committee reports painstakingly documented how the information obtained was often "collected and disseminated in order to serve the purely political interests of an intelligence agency or the administration, and to influence social policy and political action."
Political abuse of electronic surveillance goes back at least as far as the Teapot Dome scandal that roiled the Warren G. Harding administration in the early 1920s. When Atty. Gen. Harry Daugherty stood accused of shielding corrupt Cabinet officials, his friend FBI Director William Burns went after Sen. Burton Wheeler, the fiery Montana progressive who helped spearhead the investigation of the scandal. FBI agents tapped Wheeler's phone, read his mail and broke into his office. Wheeler was indicted on trumped-up charges by a Montana grand jury, and though he was ultimately cleared, the FBI became more adept in later years at exploiting private information to blackmail or ruin troublesome public figures. (As New York Gov. Eliot Spitzer can attest, a single wiretap is all it takes to torpedo a political career.)
In 1945, Harry Truman had the FBI wiretap Thomas Corcoran, a member of Franklin D. Roosevelt's "brain trust" whom Truman despised and whose influence he resented. Following the death of Chief Justice Harlan Stone the next year, the taps picked up Corcoran's conversations about succession with Justice William O. Douglas. Six weeks later, having reviewed the FBI's transcripts, Truman passed over Douglas and the other sitting justices to select Secretary of the Treasury (and poker buddy) Fred Vinson for the court's top spot.
"Foreign intelligence" was often used as a pretext for gathering political intelligence. John F. Kennedy's attorney general, brother Bobby, authorized wiretaps on lobbyists, Agriculture Department officials and even a congressman's secretary in hopes of discovering whether the Dominican Republic was paying bribes to influence U.S. sugar policy. The nine-week investigation didn't turn up evidence of money changing hands, but it did turn up plenty of useful information about the wrangling over the sugar quota in Congress -- information that an FBI memo concluded "contributed heavily to the administration's success" in passing its own preferred legislation.
In the FISA debate, Bush administration officials oppose any explicit rules against "reverse targeting" Americans in conversations with noncitizens, though they say they'd never do it.
But Lyndon Johnson found the tactic useful when he wanted to know what promises then-candidate Richard Nixon might be making to our allies in South Vietnam through confidant Anna Chenault. FBI officials worried that directly tapping Chenault would put the bureau "in a most untenable and embarrassing position," so they recorded her conversations with her Vietnamese contacts.
Johnson famously heard recordings of King's conversations and personal liaisons with various women. Less well known is that he received wiretap reports on King's strategy conferences with other civil rights leaders, hoping to use the information to block their efforts to seat several Mississippi delegates at the 1964 Democratic National Convention. Johnson even complained that it was taking him "hours each night" to read the reports.
Few presidents were quite as brazen as Nixon, whom the Church Committee found had "authorized a program of wiretaps which produced for the White House purely political or personal information unrelated to national security." They didn't need to be, perhaps. Through programs such as the National Security Agency's Operation Shamrock (1947 to 1975), which swept up international telegrams en masse, the government already had a vast store of data, and presidents could easily run "name checks" on opponents using these existing databases.
It's probably true that ordinary citizens uninvolved in political activism have little reason to fear being spied on, just as most Americans seldom need to invoke their 1st Amendment right to freedom of speech. But we understand that the 1st Amendment serves a dual role: It protects the private right to speak your mind, but it serves an even more important structural function, ensuring open debate about matters of public importance. You might not care about that first function if you don't plan to say anything controversial. But anyone who lives in a democracy, who is subject to its laws and affected by its policies, ought to care about the second.
Harvard University legal scholar William Stuntz has argued that the framers of the Constitution viewed the 4th Amendment as a mechanism for protecting political dissent. In England, agents of the crown had ransacked the homes of pamphleteers critical of the king -- something the founders resolved that the American system would not countenance.
In that light, the security-versus-privacy framing of the contemporary FISA debate seems oddly incomplete. Your personal phone calls and e-mails may be of limited interest to the spymasters of Langley and Ft. Meade. But if you think an executive branch unchecked by courts won't turn its "national security" surveillance powers to political ends -- well, it would be a first.
DHS Data Mining–It’s as Bad as You Thought
The Department of Homeland Security just sent a report to Congress about its data mining activities. This is the third such report as required under Section 806 of the Federal Agency Data Mining Reporting Act of 2007.
Under the Act, DHS was compelled to go back and report on its data mining activities in 2006, and it filed a previous report covering its data mining activities in 2007.
It seems as though Congress did not like the previous reports and thought that DHS was using a definition of data mining that was too narrow, one that might have excluded too many DHS programs.
So Congress, in House Report 109-609, gave DHS a detailed definition of what data mining means, to be added on top of the definition that DHS was already using:
.... a query or search or other analysis of 1 or more electronic databases, whereas (A) at least 1 of the databases was obtained from or remains under the control of a non federal entity, or the information was acquired initially by another department or agency of the Federal Government for purposes other than intelligence or law enforcement; (B) a department or agency of the Federal Government or non federal entity acting on behalf of the Federal Government is conducting the query, or search or other analysis to find a predictive pattern indicating terrorist or criminal activity; and (C) the search does not use a specific individual person's identifiers to acquire information concerning that individual.
DHS has apparently gone back to the drawing board and is taking another crack at the 2007 report, using the newer definition.
And guess what they found? Yep, a whole bunch of activities that DHS had not reported to Congress as being data mining turned out to be.....wait for it.....data mining! Who'da thunk it?
The not previously reported data mining includes an inbound and outbound cargo analysis program, ADVISE (Analysis, Dissemination, Visualization, Insight and Semantic Enhancement program pilot), and ICE's DARTTS program (Data Analysis and Research for Trade Transparency System). [Anybody interested in money laundering in the wake of the Spitzer scandal really wants to read this link, it is full of examples of how the Bank Secrecy Act reporting requirements actually work in the field.]
Anyway, DHS should have done a Privacy Impact Assessment of these programs to determine a) whether they infringed on people's privacy, and b) what mitigation measures could be taken to ameliorate that infringement.
Since DHS didn't include these programs in its reporting, you already guessed that it didn't do the privacy assessment, right?
Sigh. None of this actually surprised anyone here, did it?
DHS promises to go back and do the privacy assessments and to be good little boys and girls in future, but basically, this is Congress catching them red handed.
Do Americans Care About Big Brother?
Pity America's poor civil libertarians. In recent weeks, the papers have been full of stories about the warehousing of information on Americans by the National Security Agency, the interception of financial information by the CIA, the stripping of authority from a civilian intelligence oversight board by the White House, and the compilation of suspicious activity reports from banks by the Treasury Department. On Thursday, Justice Department Inspector General Glenn Fine released a report documenting continuing misuse of Patriot Act powers by the FBI. And to judge from the reaction in the country, nobody cares.
A quick tally of the record of civil liberties erosion in the United States since 9/11 suggests that the majority of Americans are ready to trade diminished privacy, and protection from search and seizure, in exchange for the promise of increased protection of their physical security. Polling consistently supports that conclusion, and Congress has largely behaved accordingly, granting increased leeway to law enforcement and the intelligence community to spy and collect data on Americans. Even when the White House, the FBI or the intelligence agencies have acted outside of laws protecting those rights — such as the Foreign Intelligence Surveillance Act — the public has by and large shrugged and, through their elected representatives, suggested changing the laws to accommodate activities that may be in breach of them.
Civil libertarians are in a state of despair. "People don't realize how damaging it is to a democratic society to allow the government to warehouse information about innocent Americans," says Mike German, national security counsel at the American Civil Liberties Union.
Or do they? In all the examples of diminished civil liberties, there are few, if any, where the motivating factor was something other than law and order or national security. There are no scandalous examples of the White House using the Patriot Act powers for political purposes or of individual agents using them for personal gain. The Justice IG report released Thursday, for example, examined some 50,000 National Security Letters issued in 2006 to see whether the FBI misused that specialized kind of warrantless subpoena. The IG found some continuing abuse of the power, but blamed it for the most part on sloppiness and bad management, not nefarious intent. In a press release accompanying the report, Fine said, "The FBI and Department of Justice have shown a commitment to addressing these problems."
There may, nonetheless, be reasons to feel wary of the civil liberties vs. security trade-off into which Americans have bought. If the misuse documented in the Justice IG report stems from incompetence, Americans may not be getting the security they bargained for in sacrificing their civil liberties. It's also possible the Justice IG may yet find among the abused Patriot Act powers examples of an FBI agent stalking his girlfriend or doing a favor for a political operative friend. Fine is still preparing a report on the illegal use of "exigent letters" in unauthorized demands for records from businesses.
For now, however, civil libertarians will have to continue to argue that the danger lies not in how the government's expanded powers are being used now, but how they might be used in the future. "The government can collect information about the average citizen without any concern for their rights, but the citizen can't find out what the government is doing, and that's inimical to government of we the people," says the ACLU's German. So far, that argument hasn't convinced the people.
Time Magazine Invents Facts to Claim that Americans Support Bush's Domestic Spying Abuses
No matter how corrupt and sloppy the establishment press becomes, they always find a way to go lower. Time Magazine has just published what purports to be a news article by Massimo Calabresi claiming that "nobody cares" about the countless abuses of spying powers by the Bush administration; that "Americans are ready to trade diminished privacy, and protection from search and seizure, in exchange for the promise of increased protection of their physical security"; and that the case against unchecked government surveillance powers "hasn't convinced the people." Not a single fact -- not one -- is cited to support these sweeping, false opinions.
Worse still -- way worse -- this "news article" decrees the Bush administration to be completely innocent, even well-motivated, even in those instances where technical, irrelevant lawbreaking has been found, as it proclaims:
In all the examples of diminished civil liberties, there are few, if any, where the motivating factor was something other than law and order or national security.
Does Calabresi or his Time editors have the slightest idea how secret, illegal spying powers have been used, towards what ends they've been employed and with what motives? No, they have absolutely no idea. Not even members of Congressional Intelligence Committees know because the Bush administration has kept all of that concealed. So Time just makes up facts to defend the Bush administration with wholly baseless statements that one would expect to come pouring out of the mouths only of Dana Perino and Bill Kristol -- the "motivating factor" for secret, illegal spying was nothing "other than law and order or national security."
This article literally has more factual errors -- pure, retraction-level falsehoods -- than it has paragraphs. It makes Joe Klein look like a knowledgeable and conscientious surveillance expert. It's one of the most falsehood-plagued articles I've seen in quite some time. Let's count the demonstrably false assertions, one by one:
(1) Time claims that "nobody cares" about the Government's increased spying powers and that "polling consistently supports that conclusion." They don't cite a single poll because that assertion is blatantly false.
Just this weekend, a new poll released by Scripps Howard News Service and Ohio University proves that exactly the opposite is true. That poll shows that the percentage of Americans who believe the Federal Government is "very secretive" has doubled in the last two years alone (to 44%) and that "nearly nine in 10 say it's important to know presidential and congressional candidates' positions on open government when deciding who to vote for."
The same poll also found that 77% of Americans believe that "the federal government opened mail and monitored phone calls of people in the U.S. without first getting permission from a federal judge," and 64% believe "that the federal government has opened mail or monitored telephone conversations involving members of the news media." Only a small minority (20%) believe that the Federal Government is "Very Open" or "Somewhat Open." Exactly as was true for The Politico's very untimely article last week falsely claiming that Americans are increasingly supporting the Iraq War again -- on the very day that a new USA Today poll showed that Americans overwhelmingly favor unconditional timetables for withdrawal -- Time today asserts a falsehood that is squarely negated by a poll released the day before.
The proposition that "polls consistently" find that Americans don't mind incursions into their civil liberties is a rank falsehood. From a December, 2005 CNN poll, days after the NSA scandal was first disclosed:
Nearly two-thirds said they are not willing to sacrifice civil liberties to prevent terrorism, as compared to 49 percent saying so in 2002. More importantly, ever since it was revealed that the Bush administration has been spying on Americans without the warrants required by law, polls have consistently shown that huge numbers of Americans -- usually majorities -- oppose warrantless spying, exactly the opposite of what Time just claimed.
Much of the polling on warrantless eavesdropping occurred throughout 2006 when the NSA scandal was being debated. Here's what a Quinnipiac poll concluded:
By a 76-19 percent margin, American voters say the government should continue monitoring phone calls or e-mail between suspected terrorists in other countries and people in the U.S., according to a Quinnipiac University national poll released today. But voters say 55-42 percent that the government should get court orders for this surveillance.
Voters in "purple states," 12 states in which there was a popular vote margin of 5 percentage points or less in the 2004 Presidential election, plus Missouri, considered the most accurate barometer of Presidential voting, want wiretap warrants 57 - 39 percent.
Red states, where President George W. Bush's margin was more than 5 percent in 2004, disagree 51 - 46 percent with the President that the government does not need warrants. Blue state voters who backed John Kerry by more than 5 percent want warrants 57 - 40 percent, the independent Quinnipiac (KWIN-uh-pe-ack) University poll finds.
A total of 57 percent of voters are "extremely" or "quite" worried that phone and e-mail taps without warrants could be misused to violate people's privacy. But 54 percent believe these taps have prevented some acts of terror.
"Don't turn off the wiretaps, most Americans say, but the White House ought to tell a judge first. Even red state voters, who backed President Bush in 2004, want to see a court okay for wiretaps," said Maurice Carroll, Director of the Quinnipiac University Polling Institute.
From the beginning, pluralities in the vast majority of states -- 37 out of 50 -- believed the President "clearly" broke the law with his NSA spying. A CBS poll found that Americans believe (51-43%) that "the President does not have the legal authority to authorize wiretaps without a warrant to fight terrorism." And back when Russ Feingold introduced his resolution to censure the President for breaking the law in spying on Americans, a plurality of Americans supported censure of Bush despite the fact that Feingold was virtually alone among political figures in advocating it. And most Americans opposed immunity for telecoms accused of breaking the law in how they spied on Americans:
Opposition to immunity is widespread, cutting across ideology and geography. Majorities of liberals, moderates, and conservatives agree that courts should decide the outcomes of these legal actions (liberals: 64% let courts decide, 26% give immunity; moderates: 58% let courts decide, 34% give immunity; conservatives: 50% let courts decide, 38% give immunity).
As is so often true, the facts are exactly the opposite of what Time, in defending the Bush administration, tells its readers. Can one find polls in which pluralities of Americans support warrantless eavesdropping and other secret spying programs? If one looks hard enough for polls emphasizing "spying on terrorists," perhaps one can, but Time's assertion that "polling consistently supports the conclusion" that Americans want to give up civil liberties for security is patently false.
(2) This is Time's next claim:
Even when the White House, the FBI or the intelligence agencies have acted outside of laws protecting those rights -- such as the Foreign Intelligence Surveillance Act -- the public has by and large shrugged and, through their elected representatives, suggested changing the laws to accommodate activities that may be in breach of them.
Have Calabresi and his editors been on vacation for the last four months? During that time, there has been a protracted, bitter debate in Congress over the President's demands for permanent, warrantless eavesdropping powers and amnesty for telecoms which broke the law in spying on Americans. It provoked filibusters and all sorts of obstructionism in the Senate, and House Democrats -- including virtually every conservative "Blue Dog" -- just chose warrantless eavesdropping and telecom amnesty as the issue on which to defy, for the first time ever, the President's national security orders.
Additionally, while it is true that the GOP-led Congress largely endorsed every one of the President's policies, including his lawbreaking, the American voting public threw the Republicans out of power in 2006. When Democrats, once in power, began copying that behavior by endorsing even the President's illegal conduct, their approval ratings plummeted. Just last week, they refused to give legal sanction to the President's illegal spying; demanded that the lawsuits arising from that spying proceed; and even passed a bill requiring a full-scale investigation into what the President did when spying on Americans for all those years. These events were bizarrely ignored by Time because they negate the narrative they want to push.
(3) Time's defense of the Bush administration -- that "law and order or national security" has motivated even the illegal spying -- is perhaps most indefensible of all. The administration has blocked every Congressional and judicial attempt to investigate how it has used these spying powers. Thus, nobody has any idea what has motivated the spying or what the level of abuse is.
As Julian Sanchez wrote in a superb Op-Ed in the Los Angeles Times this weekend, the Federal Government abused its warrantless spying power for decades -- to spy on political opponents and other dissidents -- but nobody had any idea that was going on until the Church Committee conducted a full-fledged investigation. As Sanchez wrote:
If you think an executive branch unchecked by courts won't turn its "national security" surveillance powers to political ends -- well, it would be a first.
We have had no investigation into how the Bush administration has used these spying powers. There has been no Church Committee, no intensive media investigation, no judicial process. The only "investigations" into any of these surveillance activities have come from the executive branch itself. All we have are slothful, government-worshiping reporters like Calabresi and Time editors who sit back content in their own ignorance, having no idea how the Bush administration used its spying powers, citing their own total ignorance as proof that the Government did nothing wrong -- they did everything for our own Good, for our Protection.
Time's vouching for the Good Motives of the Bush administration is completely false for a separate reason. Even with as little as we know about what they've done, there most certainly are examples of politically-motivated spying, even though Calabresi and his editors are apparently unaware of them. From Democracy Now in 2006:
Earlier this week, the Servicemembers Legal Defense Network released documents showing that the Pentagon conducted surveillance on a more extensive level than first reported late last year. De-classified documents show that the agency spied on "Don't Ask, Don't Tell" protests and anti-war protests at several universities around the country. They also show that the government monitored student e-mails and planted undercover agents at at least one protest.
But the Pentagon has not released all information on its surveillance activities. The American Civil Liberties Union recently filed a federal lawsuit to force the agency to turn over additional records. The lawsuit charges that the Pentagon is refusing to comply with Freedom of Information Act requests seeking records on the ACLU, the American Friends Service Committee, Greenpeace, Veterans for Peace and United for Peace and Justice, as well as 26 local groups and activists.
Even NBC reported previously:
A year ago, at a Quaker Meeting House in Lake Worth, Fla., a small group of activists met to plan a protest of military recruiting at local high schools. What they didn't know was that their meeting had come to the attention of the U.S. military.
A secret 400-page Defense Department document obtained by NBC News lists the Lake Worth meeting as a "threat" and one of more than 1,500 "suspicious incidents" across the country over a recent 10-month period. . . .
The Defense Department document is the first inside look at how the U.S. military has stepped up intelligence collection inside this country since 9/11, which now includes the monitoring of peaceful anti-war and counter-military recruitment groups.
Are Time reporters and editors just blissfully ignorant of these incidents or do they conceal them because they negate their clean, crisp storyline?
(4) The whole Time article is based upon one of the most pervasive journalistic fallacies: namely, that the choices the establishment press makes as to what to cover and what to ignore reflect what "Americans" generally care about. Thus, Calabresi begins the article by listing a whole series of recent revelations about the Bush administration's ever-increasing Surveillance State powers and abuses and concludes: "to judge from the reaction in the country, nobody cares."
But the only ones who "don't care" are establishment media outlets like Time, not the "ordinary Americans" on whose behalf they always fantasize that they speak. It's the media that has ignored those stories.
Here is a Nexis count of how much media coverage certain stories have received over the last 30 days, including the Surveillance State stories which Calabresi cites as proof that Americans don't care about their constitutional liberties:
* "Spitzer and prostitutes" -- 2,323 results
* "Spitzer and Kristen" -- 1,087 results
* "Obama and Rezko" -- 1,263 results
* "Obama and Jeremiah Wright" -- 466 results
* "Wall Street Journal and data mining" -- 9 results
* "FBI and National security letters" -- 149 results
* "Intelligence Oversight Board" -- 21 results
This is what establishment journalists like Calabresi always do. Their industry obsesses over the most vapid, inconsequential chatter. They ignore the stories that actually matter. And then they claim that Americans only care about vapid gossip and not substantive issues -- and point to their own shallow coverage decisions as "proof" of what Americans care about. That thought process was vividly evident in their obsession with the Edwards hair "story," when they all chattered about it endlessly, promoted it in headlines, and then, when criticized for that, claimed that it was obviously something Americans were interested in, pointing to their own media fixation as proof that Americans cared.
The Time Magazines of the world ignore stories about Bush's abuses of spying powers. Therefore, Americans don't care about such abuses. That's the self-referential, self-loving rationale on which this entire article is based. And the whole article is filled with demonstrable falsehoods, all in service of arguing that the Bush administration has done nothing wrong, and even if they did, Americans don't mind at all.
UPDATE: Yet another serious factual error in Calabresi's article that I neglected to mention:
There are no scandalous examples of the White House using the Patriot Act powers for political purposes or of individual agents using them for personal gain.
Has Time ever heard of the U.S. Attorneys scandal, which just resulted in the filing of a Congressional lawsuit to compel recalcitrant Bush aides to comply with subpoenas? From Harper's Scott Horton on Saturday:
This was largely part of an effort to disguise the obvious fact that the dismissals were the implementation of a political plan which had been formulated in the White House, largely under the guidance of Karl Rove. They were also designed to disguise the fact that an elaborate scheme had been concocted to circumvent the process through which candidates are reviewed and confirmed by the Senate using a secret amendment to the USA PATRIOT Act.
It's not surprising that this scandal would be whitewashed from the pages of Time, in light of what its Managing Editor, Rick Stengel, decreed last year while on The Chris Matthews Show:
Mr. STENGEL: I am so uninterested in the Democrats wanting Karl Rove, because it is so bad for them. Because it shows business as usual, tit for tat, vengeance. That's not what voters want to see.
Ms. BORGER: Mm-hmm.
MATTHEWS: So instead of like an issue like the war where you can say it's bigger than all of us, its more important than politics, this is politics.
Mr. STENGEL: Yes, and it's much less. It's small bore politics.
The principal theme of Time Magazine appears to be that corruption and even blatant lawbreaking by the Bush administration is a total non-story, something that nobody cares about and therefore shouldn't be investigated or reported (Joe Klein's first reaction in Time following disclosure of the NSA scandal was to defend the lawbreaking and sternly warn Nancy Pelosi and Democrats generally that they had better not object to the warrantless spying program or else they would be (justifiably) out of power forever).
Identically, Calabresi's declaration that the FBI's unquestionably illegal use of NSL powers under the Patriot Act was harmless and benign because the Bush DOJ said so is equally gullible and dishonest. As Patrick Meighan pointed out in comments:
In other words, we know that the Justice Department has not intentionally abused its unchecked investigative powers because the Justice Department looked at the Justice Department and decided that the Justice Department did not intentionally abuse its unchecked investigative powers.
In 2008, that's what's supposed to pass for checks and balances.
It is not surprising that this is the view of Bush followers, but it's also the predominant view of our ornery watchdog journalists as well. The Founders envisioned that the media would be the watchdog over government deceit and corruption, but nobody is more aggressive in dismissing concerns of government lawbreaking and deceit than the Time Magazines of our country. That's their primary function.
MI5 Seeks Powers to Trawl Records in New Terror Hunt
Counter-terrorism experts call it a 'force multiplier': an attack combining slaughter and electronic chaos. Now Britain's security services want total access to commuters' travel records to help them meet the threat
Millions of commuters could have their private movements around cities secretly monitored under new counter-terrorism powers being sought by the security services.
Records of journeys made by people using smart cards that allow 17 million Britons to travel by underground, bus and train with a single swipe at the ticket barrier are among a welter of private information held by the state to which MI5 and police counter-terrorism officers want access in order to help identify patterns of suspicious behaviour.
The request by the security services, described by shadow Home Secretary David Davis last night as 'extraordinary', forms part of a fierce Whitehall debate over how much access the state should have to people's private lives in its efforts to combat terrorism.
It comes as the Cabinet Office finalises Gordon Brown's new national security strategy, expected to identify a string of new threats to Britain - ranging from future 'water wars' between countries left drought-ridden by climate change to cyber-attacks using computer hacking technology to disrupt vital elements of national infrastructure.
The fear of cyber-warfare has climbed Whitehall's agenda since last year's attack on the Baltic nation of Estonia, in which Russian hackers swamped state servers with millions of electronic messages until they collapsed. The Estonian defence and foreign ministries and major banks were paralysed, while even its emergency services call system was temporarily knocked out: the attack was seen as a warning that battles once fought by invading armies or aerial bombardment could soon be replaced by virtual, but equally deadly, wars in cyberspace.
While such new threats may grab headlines, the critical question for the new security agenda is how far Britain is prepared to go in tackling them. What are the limits of what we want our security services to know? And could they do more to identify suspects before they strike?
One solution being debated in Whitehall is an unprecedented unlocking of data held by public bodies, such as the Oyster card records maintained by Transport for London and smart cards soon to be introduced in other cities in the UK, for use in the war against terror. The Office of the Information Commissioner, the watchdog governing data privacy, confirmed last night that it had discussed the issue with government but declined to give details, citing issues of national security.
Currently the security services can demand the Oyster records of specific individuals under investigation to establish where they have been, but cannot trawl the whole database. But supporters of calls for more sharing of data argue that apparently trivial snippets - like the journeys an individual makes around the capital - could become important pieces of the jigsaw when fitted into a pattern of other publicly held information on an individual's movements, habits, education and other personal details. That could lead, they argue, to the unmasking of otherwise undetected suspects.
Critics, however, fear a shift towards US-style 'data mining', a controversial technique using powerful computers to sift and scan millions of pieces of data, seeking patterns of behaviour which match the known profiles of terrorist suspects. They argue that it is unfair for millions of innocent people to have their privacy invaded on the off-chance of finding a handful of bad apples.
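In the abstract, the pattern-matching the article describes can be sketched in a few lines. Everything below -- the record fields, the "profile," the card IDs, and the threshold -- is a hypothetical toy invented purely for illustration, not how any real surveillance system works:

```python
# Toy sketch of profile-based data mining over travel records.
# All records and the "suspicious profile" are invented for illustration.

# Hypothetical travel records: (card_id, station, hour_of_day)
records = [
    ("card1", "Station A", 2), ("card1", "Station B", 3),
    ("card2", "Station C", 9), ("card2", "Station D", 18),
    ("card1", "Station B", 2),
]

def matches_profile(card_records, threshold=2):
    """Toy 'profile': repeated late-night journeys (before 5am)."""
    night_trips = sum(1 for _, _, hour in card_records if hour < 5)
    return night_trips >= threshold

# Group records by card, then flag every card matching the profile
by_card = {}
for card, station, hour in records:
    by_card.setdefault(card, []).append((card, station, hour))

flagged = [card for card, recs in by_card.items() if matches_profile(recs)]
print(flagged)  # card1 has three pre-5am trips; card2 has none
```

The point of the sketch is how mechanical the process is: the entire population's records are scanned, and anyone whose behaviour happens to fit the template is flagged, which is exactly what worries the critics quoted below.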
'It's looking for a needle in a haystack, and we all make up the haystack,' said former Labour minister Michael Meacher, who has a close interest in data sharing. 'Whether all our details have to be reviewed because there is one needle among us - I don't think the case is made.'
Jago Russell, policy officer at the campaign group Liberty, said technological advances had made 'mass computerised fishing expeditions' easier to undertake, but they offered no easy answers. 'The problem is what do you do once you identify somebody who has a profile that suggests suspicions,' he said. 'Once the security services have identified somebody who fits a pattern, it creates an inevitable pressure to impose restrictions.'
Individuals wrongly identified as suspicious might lose high-security jobs, or have their immigration status brought into doubt, he said. Ministers are also understood to share concerns over civil liberties, following public opposition to ID cards, and the debate is so sensitive that it may not even form part of Brown's published strategy.
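The "needle in a haystack" objection can be made concrete with simple base-rate arithmetic. Apart from the 17 million smart-card users mentioned in the article, every number below (suspect count, detection rate, false-positive rate) is an assumption invented for illustration:

```python
# Toy base-rate calculation: with very few genuine suspects in a huge
# population, even an accurate screen flags mostly innocent people.
# All figures except the population are illustrative assumptions.

population = 17_000_000      # smart-card users cited in the article
true_suspects = 100          # assumed
sensitivity = 0.99           # assumed: fraction of real suspects flagged
false_positive_rate = 0.001  # assumed: fraction of innocents flagged

flagged_guilty = true_suspects * sensitivity
flagged_innocent = (population - true_suspects) * false_positive_rate
precision = flagged_guilty / (flagged_guilty + flagged_innocent)

print(round(flagged_innocent))  # ~17,000 innocent people flagged
print(round(precision, 4))      # well under 1% of flagged people are suspects
```

Under these assumed numbers, roughly 17,000 innocent people would be flagged for every hundred or so genuine suspects -- which is the arithmetic behind Meacher's and Russell's objections.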
But if there is no consensus yet on the defence, there is an emerging agreement on the mode of attack. The security strategy will argue that in the coming decades Britain faces threats of a new and different order. And its critics argue the government is far from ready.
The cyber-assault on Estonia confirmed that the West now faces a relatively cheap, low-risk means of warfare that can be conducted from anywhere in the world, with the power to plunge developed nations temporarily into the stone age, disabling everything from payroll systems that ensure millions of employees get paid to the sewage treatment processes that make our water safe to drink or the air traffic control systems keeping planes stacked safely above Heathrow.
And it is one of the few weapons which is most effective against more sophisticated western societies, precisely because of their reliance on computers. 'As we become more advanced, we become more vulnerable,' says Alex Neill, head of the Asia Security programme at the defence think-tank RUSI, who is an expert on cyber-attack.
The nightmare scenario now emerging is its use by terrorists as a so-called 'force multiplier' - combining a cyber-attack to paralyse the emergency services with a simultaneous atrocity such as the London Tube bombings.
Victims would literally have nowhere to turn for help, raising the death toll and sowing immeasurable panic. 'Instead of using three or four aircraft as in 9/11, you could do one major event and then screw up the communications network behind the emergency services, or attack the Underground control network so you have one bomb but you lock up the whole network,' says Davis. 'You take the ramifications of the attack further. The other thing to bear in mind is that we are ultimately vulnerable because London is a financial centre.'
In other words, cyber-warfare does not have to kill to bring a state to its knees: hackers could, for example, wipe electronic records detailing our bank accounts, turning millionaires into apparent paupers overnight.
So how easy would it be? Estonia suffered a relatively crude form of attack known as 'denial of service', while paralysing a secure British server would be likely to require more sophisticated 'spy' software which embeds itself quietly in a computer network and scans for secret passwords or useful information - activating itself later to wreak havoc.
Neill said that would require specialist knowledge to target the weakest link in any system: its human user. 'You will get an email, say, that looks like it's from a trusted colleague, but in fact that email has been cloned. There will be an attachment that looks relevant to your work: it's an interesting document, but embedded in it invisibly is "malware" rogue software which implants itself in the operating systems. From that point, the computer is compromised and can be used as a platform to exploit other networks.'
Only governments and highly sophisticated criminal organisations have such a capability now, he argues, but there are strong signs that al-Qaeda is acquiring it: 'It is a hallmark of al-Qaeda anyway that they do simultaneous bombings to try to herd victims into another area of attack.'
The West, of course, may not simply be the victim of cyber-wars: the United States is widely believed to be developing an attack capability, with suspicions that Baghdad's infrastructure was electronically disrupted during the 2003 invasion.
So given its ability to cause as much damage as a traditional bomb, should cyber-attack be treated as an act of war? And what rights under international law does a country have to respond, with military force if necessary? Next month Nato will tackle such questions in a strategy detailing how it would handle a cyber-attack on an alliance member. Suleyman Anil, Nato's leading expert on cyber-attack, hinted at its contents when he told an e-security conference in London last week that cyber-attacks should be taken as seriously as a missile strike - and warned that a determined attack on western infrastructure would be 'practically impossible to stop'.
Tensions are likely to increase in a globalised economy, where no country can afford to shut its borders to foreign labour - an issue graphically highlighted for Gordon Brown weeks into his premiership by the alleged terrorist attack on Glasgow airport, when it emerged that the suspects included overseas doctors who entered Britain to work in the NHS.
A review led by Homeland Security Minister Admiral Sir Alan West into issues raised by the Glasgow attack has been grappling with one key question: could more be done to identify rogue elements who are apparently well integrated with their local communities?
Which is where, some within the intelligence community insist, access to personal data already held by public bodies - from the Oyster register to public sector employment records - could come in. The debate is not over yet.
International Cyber-Cop Unit Girds for Uphill Battles
A group of international cyber cops is ramping up plans to fight online crime across borders.
The unit, known as the Strategic Alliance Cyber Crime Working Group, met this month in London and is made up of high-level online law enforcement representatives from the FBI, Australia, Canada, New Zealand, and the United Kingdom. One of the main goals of the group, which was founded in 2006, is to fight cyber crime in a common way by sharing intelligence, swapping tools and best practices, and strengthening and synchronizing their respective laws.
And it has its work cut out for it.
The Government Accountability Office last year said there is concern about threats that nation-states and terrorists pose to our national security through attacks on US computer-reliant critical infrastructures and theft of our sensitive information.
For example, according to the US-China Economic and Security Review Commission report, Chinese military strategists write openly about exploiting the vulnerabilities created by the U.S. military’s reliance on advanced technologies and the extensive infrastructure used to conduct operations.
Also, according to FBI testimony, terrorist organizations have used cybercrime to raise money to fund their activities. Despite the reported loss of money and information and known threats from adversaries, there remains a lack of understanding about the precise magnitude of cybercrime and its impact because cybercrime is not always detected or reported.
The group hopes to address some of those problems. At the London meeting, participating countries outlined ways to share forensic tools, possibilities for joint training, and strategies for a public awareness campaign to help reduce cyber crime. According to the FBI, the group is one outgrowth of the larger Strategic Alliance Group—a formal partnership between these nations dedicated to tackling larger global crime issues, particularly organized crime.
The group so far has:
• Collectively developed a comprehensive overview of the transnational cyber threat—including current and emerging trends, vulnerabilities, and strategic initiatives for the working group to pursue (note: the report is available only to law enforcement);
• Set up a special area on Law Enforcement Online, the FBI’s secure Internet portal, to share information and intelligence;
• Launched a series of information bulletins on emerging threats and trends (for example, it drafted a bulletin recently describing how peer-to-peer, or P2P, file sharing programs can inadvertently leak vast amounts of sensitive national security, financial, medical, and other information);
• Begun exploring an exchange of cyber experts to serve on joint international task forces and to learn each other’s investigative techniques firsthand; and
• Shared training curriculums and provided targeted training to international cyber professionals.
The GAO noted cybercrime laws vary widely across the international community. For example, Australia enacted its Cybercrime Act of 2001 to address this type of crime in a manner similar to the US Computer Fraud and Abuse Act. In addition, Japan enacted the Unauthorized Computer Access Law of 1999 to cover certain basic areas similar to those addressed by the U.S. federal cybercrime legislation.
Countries such as Nigeria, with minimal or less sophisticated cybercrime laws, have been noted sources of Internet fraud and other cybercrime. In response, they have looked to the examples set by industrialized nations to create or enhance their own cybercrime legal frameworks. A proposed cybercrime bill, the Computer Security and Critical Information Infrastructure Protection Bill, is currently being debated before Nigeria’s General Assembly.

Because political or natural boundaries are no obstacle to cybercrime, international agreements are essential to fighting it. In November 2001, for example, the United States and 29 other countries signed the Council of Europe’s Convention on Cybercrime as a multilateral instrument to address the problems posed by criminal activity on computer networks. Nations supporting this convention agree to have criminal laws within their own nation to address cybercrime, such as hacking, spreading viruses or worms, and similar unauthorized access to, interference with, or damage to computer systems. It also enables international cooperation in combating crimes such as child sexual exploitation, organized crime, and terrorism through provisions to obtain and share electronic evidence. The U.S. Senate ratified this convention in August 2006. As the 16th of 43 countries to support the agreement, the United States agrees to cooperate in international cybercrime investigations.
The governments of European countries such as Denmark, France, and Romania have ratified the convention. Other countries including Germany, Italy, and the United Kingdom have signed the convention although it has not been ratified by their governments. Non-European countries including Canada, Japan, and South Africa have also signed but not yet ratified the convention, the GAO report said.
In the US alone, the GAO said the annual loss due to computer crime was estimated to be $67.2 billion for US organizations, according to a 2005 FBI survey. The estimated losses associated with particular crimes include $49.3 billion in 2006 for identity theft and $1 billion annually due to phishing. These projected losses are based on direct and indirect costs that may include actual money stolen, estimated cost of intellectual property stolen, and recovery cost of repairing or replacing damaged networks and equipment.
Meanwhile the Strategic Alliance Cyber Crime Working Group will meet again in May, to bring together legal and legislative experts from the five countries to talk about common challenges, differing approaches, and potential ways to streamline investigations and harmonize laws on everything from data retention standards to privacy requirements, the FBI said.
Estonia Calls for EU Law to Combat Cyber Attacks
Estonia has called on the European Union to make cyber attacks a criminal offence to stop Internet users from freezing public and private Web sites for political revenge.
Estonian President Toomas Hendrik Ilves said he believed the Russian government was behind an online attack on Estonia over its decision to move a Red Army monument from a square in the capital Tallinn. Russia has denied any involvement.
The decision triggered two nights of rioting by mainly Russian-speaking protesters, who argued that the Soviet-era memorial was a symbol of sacrifices made during World War Two.
The rioting coincided with repeated requests to Web sites, forcing them to crash or freeze. Network specialists said at the time at least some of the computers used could be traced to the Russian government or government agencies.
"Russian officials boasted about having done it (cyber attacks) afterwards -- one in a recent interview a month and a half ago saying we can do much more damage if we wanted to," he told Reuters in an interview.
The European Commission has sole right to initiate EU law and its Information Society and Media Commissioner Viviane Reding agreed action was needed.
"What happened in Estonia should be a wake-up call for Europe. Cyber attacks on one member state concern the whole of Europe. They must therefore receive a firm European response," Reding told Reuters from Budapest.
Reding said that last November she proposed setting up a new European telecoms market authority.
NATO also has opened a cyber defence "centre of excellence" in Estonia to study solutions to combating online attacks.
Mock cyber attacks on Estonia's new online voting system have given the country a better idea of how to handle a real attack when it came, Ilves said.
"Other (EU) member states helped in fending off the attacks by siphoning off some of the attacks," Ilves said.
Serious RFID Vulnerability Discovered
A group of digital security researchers at a Dutch university has discovered a major security flaw in a popular RFID tag; the discovery has serious commercial and national security implications; as important as the discovery itself is how the researchers handled the situation
RFID technology is gaining new adopters, and some governmental organizations are now developing policies to push for even faster adoption of the technology (see HSDW story). This story is not going to help that trend: A week ago, researchers and students of the Digital Security group of the Radboud University Nijmegen discovered a serious security flaw in a widely used type of contactless smartcard, also called an RFID tag. It concerns the Mifare Classic RFID card produced by NXP (formerly Philips Semiconductors). Earlier, German researchers Karsten Nohl and Henryk Plötz pointed out security weaknesses of these cards. Worldwide, around one billion of these cards have been sold. This type of card is used for the Dutch "ov-chipkaart" (the RFID card for public transport throughout the Netherlands) and for public transport systems in other countries (for instance, the subways in London and Hong Kong). Mifare cards are also widely used as company cards to control access to buildings and facilities. All this means that the flaw has a broad impact. Because some cards can be cloned, it is in principle possible to access buildings and facilities with a stolen identity. This has been demonstrated on an actual system. In many situations where these cards are used there will be additional security measures; it is advisable to strengthen these where possible.
The Digital Security group found weaknesses in the authentication mechanism of the Mifare Classic. In particular:
1. The working of the CRYPTO1 encryption algorithm has been reconstructed in detail
2. There is a relatively easy method to retrieve cryptographic keys, which does not rely on expensive equipment
Combining these ingredients, the group succeeded in mounting an actual attack, in which a Mifare Classic access control card was successfully cloned. In situations where there are no additional security measures, this would allow unauthorized access by people with bad intentions.
The Mifare Classic is a contactless smartcard developed in the mid-1990s. It is a memory card which offers some memory protection. The card is not programmable. The cryptographic operations it can perform are implemented in hardware, using a so-called linear feedback shift register (LFSR) and a "filter function." The encryption algorithm this implements is a proprietary algorithm, CRYPTO1, which is a trade secret of NXP. The security of the card relies in part on the secrecy of the CRYPTO1 algorithm, an approach known as "security by obscurity."
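The general construction described above can be sketched in miniature. The toy stream cipher below uses an LFSR whose selected state bits pass through a small nonlinear filter function; the taps, filter, and constants are invented for illustration and bear no relation to the real, secret CRYPTO1 design.

```python
# Toy LFSR-with-filter stream cipher, illustrating the structure
# described above. This is NOT CRYPTO1: taps and filter are invented.

def lfsr_keystream(state, taps, filter_taps, nbits):
    """Produce nbits of keystream from a 48-bit LFSR whose selected
    state bits pass through a small nonlinear filter function."""
    out = []
    for _ in range(nbits):
        # Nonlinear filter: combine three chosen state bits.
        a, b, c = ((state >> i) & 1 for i in filter_taps)
        out.append(a ^ (b & c))
        # Linear feedback: XOR of the tapped bits becomes the new bit.
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << 48) - 1)  # keep 48 bits
    return out

ks = lfsr_keystream(0x123456789ABC, taps=(47, 42, 29, 0),
                    filter_taps=(3, 17, 31), nbits=16)
```

Because the update is deterministic, anyone who knows the algorithm and the initial state can regenerate the identical keystream, which is why secrecy of the design mattered so much here.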
Mifare Classic cards are typically used for authentication. Here the goal is that two parties prove who they are. This is done by demonstrating that they know some common secret information, a so-called shared secret (cryptographic) key. Both parties, in this case the Mifare card and the card reader, carry out certain operations and then check each other's results to be sure of whom they are dealing with. Authentication is needed to control access to facilities and buildings, and Mifare cards are commonly used for this purpose. Successful authentication is also a prerequisite to reading or writing part of the memory of the Mifare Classic. The card's memory is divided into sectors, each protected by two cryptographic keys. Proper key management is a subject in its own right. Roughly speaking, there are two possibilities:
1. All cards and all card readers used for a given application have the same keys for authentication. This is common when cards are used for access control
2. Each card has its own cryptographic keys. To check the keys of a card, the card reader should then first determine which card it is talking to and then look up or calculate the associated key(s). This is called key diversification. It is claimed that this approach is used for the Dutch public transport card.
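The second option, key diversification, can be sketched as follows. This assumes a generic HMAC-based derivation; the actual scheme used by any deployed system is not described in this article, so the master key and derivation below are purely illustrative.

```python
import hashlib
import hmac

MASTER_KEY = b"issuer-master-key"  # hypothetical key held by the operator

def diversified_key(card_uid: bytes) -> bytes:
    """Derive a per-card key from a master key and the card's UID,
    truncated to 6 bytes (48 bits, the Mifare Classic key size).
    Illustrative sketch only, not any deployed scheme."""
    return hmac.new(MASTER_KEY, card_uid, hashlib.sha256).digest()[:6]

# Each card gets its own key; a reader recomputes it from the UID.
key_a = diversified_key(bytes.fromhex("04a1b2c3"))
key_b = diversified_key(bytes.fromhex("04d4e5f6"))
```

A reader first asks the card for its UID, recomputes the key, and only then runs authentication; compromising one card's key therefore does not directly expose the keys of other cards.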
Now, the Digital Security group found weaknesses in the authentication mechanism of the Mifare Classic. In particular:
1. The working of the CRYPTO1 encryption algorithm has been reverse engineered, and the group developed its own implementation of the algorithm
2. The group found a relatively easy method to retrieve cryptographic keys, which does not rely on expensive equipment
To reverse engineer the CRYPTO1 encryption algorithm, the group used flawed authentication attempts. If one does not precisely follow the rules of the prescribed protocol, one can obtain some information about the way it works. Combining such information, it was possible to reconstruct the algorithm. Once the algorithm is known, one can find the keys in use by a so-called brute-force attack, that is, by simply trying all possible keys. In this case the keys are 48 bits long. Trying all the keys then requires around nine hours on advanced equipment, according to the recent TNO report 34643, "Security Analysis of the Dutch OV-chipkaart," published February 26, 2008.
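The scale of that brute-force search is easy to check. The keys-per-second rate below is an assumption chosen only so the arithmetic matches the roughly nine hours quoted from the TNO report; the toy search over a much smaller key space shows the principle.

```python
# Back-of-the-envelope for a 48-bit brute force. The search rate is an
# assumption picked to match the ~9 hours cited in the TNO report.
total_keys = 2 ** 48
rate = 9e9                                    # assumed keys per second
hours = total_keys / rate / 3600
print(f"{hours:.1f} hours to try every key")  # about 8.7 hours

def brute_force(accepts, keybits):
    """Try every key in order until the oracle accepts one
    (toy-sized key space; the real search is over 2**48 keys)."""
    for k in range(2 ** keybits):
        if accepts(k):
            return k
    return None

secret = 0xBEEF
found = brute_force(lambda k: k == secret, keybits=16)
```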
Here too, however, certain flaws in the authentication protocol could be exploited, as the group discovered. This led members of the Digital Security group to the second point: there is a way to retrieve the key relatively easily without carrying out a lengthy brute-force attack. This can be done by first carrying out many failed authentication attempts, which do provide some information. By storing the results in a big table, one can look for a match and retrieve the key. The table only has to be constructed once, and can be prepared in advance by repeatedly running the CRYPTO1 algorithm on a fixed input. The group's proof-of-concept demonstration of this attack still required many authentication attempts once the table had been constructed. Recording these attempts took several hours, but could be carried out with a hidden antenna to eavesdrop on a card reader. It seems that the complexity can be further reduced, possibly dramatically so, making the attack much simpler.
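The table idea is a classic time-memory trade-off, sketched below on a toy 16-bit cipher. The toy cipher is only a stand-in for running CRYPTO1 on a fixed input; a real table over 48-bit keys would be enormously larger.

```python
# Time-memory trade-off sketch: precompute output -> key once, then
# recover any key by a single lookup instead of an exhaustive search.

def toy_cipher(key, fixed_input=0x5A5A):
    """Stand-in for running CRYPTO1 on a fixed input (illustrative).
    The odd multiplier makes this bijective over 16-bit keys, so
    every output maps back to exactly one key."""
    x = (key ^ fixed_input) & 0xFFFF
    return (x * 0x9E37 + 1) & 0xFFFF

# One-time precomputation over the whole (toy) key space.
table = {toy_cipher(k): k for k in range(2 ** 16)}

# Online phase: observe the cipher's output for the fixed input and
# read the key straight out of the table.
secret_key = 0x1234
recovered = table[toy_cipher(secret_key)]
```

The expensive work (building the table) happens once, offline; each subsequent key recovery is then a cheap dictionary lookup.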
Once the secret cryptographic key is retrieved, there will be possibilities for abuse. How severe these possibilities are will depend on the situation. If all cards share the same key, then the system will be extremely vulnerable. This may be the case if cards are used for access control to buildings and facilities, both in the private and public sector; there is, however, no information on how common this is. For such a setting the group demonstrated an actual attack, in which the card of, say, an employee can be cloned by bumping into that person with a portable card reader. The person whose identity is being stolen may then be completely unaware that anything has happened. In a situation in which diversified keys are used, abuse will be more difficult, but not impossible. No actual attacks have been demonstrated for such a scenario.
At the technical level there are currently no known countermeasures. Shielding cards when they are not in use, for example in a metal container, reduces the risk of an attacker secretly reading out a card. When the card is being used, however, it is still possible to eavesdrop on the communication with a hidden antenna near the access point. Strengthening of traditional access control measures is therefore advisable. Access to sensitive facilities will (or should) be protected by several protection mechanisms anyway, of which the RFID tag is only one.
The Dutch group's hacking of the RFID card is not the first such attempt. In December 2007, Karsten Nohl and Henryk Plötz announced at a hackers' conference in Berlin that they had reconstructed CRYPTO1. The Dutch group has been in touch with them, and the group's work builds on their results. Nohl and Plötz kept some information about CRYPTO1 to themselves. To reverse engineer CRYPTO1, they carried out a physical attack in which they studied the layout of the hardware implementing the algorithm on an actual Mifare Classic chip. Their approach is completely different from the Dutch group's, as the latter only exploited weaknesses of the protocol and did not look at the hardware implementation.
The Dutch researchers say they faced a dilemma: when discovering a security flaw, there is a question of how to handle the information. Immediate publication of the details can encourage attacks and do serious damage. Keeping the flaw secret for a long period may mean that necessary steps to counter the vulnerability are not taken. It is common practice in the security community to try to strike a balance between these concerns and reveal flaws after some delay. This is the approach the group has taken. On Friday, 7 March, the government was informed, because national security issues might be at stake. On 8 March, experts of the Dutch Signals Security Bureau (NBV) of the General Intelligence and Security Service (AIVD) visited Nijmegen to assess the situation; they concluded that the approach the Digital Security group demonstrated was an effective attack. On 9 March, NXP was informed, and on Monday, 10 March, Trans Link Systems (the company developing the Dutch public transport card). The group spoke to representatives of both companies about the technical details, and is collaborating with them to analyze the impact and think of possible countermeasures. On 12 March, minister Ter Horst informed the Dutch Parliament of the problem.
How to Hack RFID-Enabled Credit Cards for $8
A number of credit card companies now issue credit cards with embedded RFIDs (radio frequency ID tags), with promises of enhanced security and speedy transactions.
But on today's episode of Boing Boing tv, hacker and inventor Pablos Holman shows Xeni how you can use about $8 worth of gear bought on eBay to read personal data from those credit cards -- cardholder name, credit card number, and whatever else your bank embeds in this manner.
Fears over data leaks from RFID-enabled cards aren't new, and some argue they're overblown -- but this demo shows just how cheap and easy the "sniffing" can be.
This episode is part of our ongoing series of interviews with some of the thinkers, hackers, and tinkerers at the O'Reilly Emerging Technology conference this year.
State Agency Moves to Plug USB Flash Drive Security Gap
Security officials are issuing USB flash drives to workers in the state of Washington's Division of Child Support as part of a new security procedure established to eliminate the use of nonapproved thumb drives by workers collecting and transporting confidential data.
The state has so far distributed 150 of 200 SanDisk Corp. Cruzer Enterprise thumb drives to unit supervisors in the division who manage collections teams in 10 field offices, said officials (see also "Review: 7 secure USB drives").
Brian Main, the division's data security officer, said the new drives promise to help officials keep better track of mobile data by integrating them with Web-based management software that can centrally monitor, configure and prevent unauthorized access to the miniature storage devices.
"We do periodic risk analysis of our systems, and one of the things that came up is the use of thumb drives -- they were everywhere," said Main. "We had a hard time telling which were privately owned and which were owned by the state." He also said that officials had difficulty keeping track of what data was stored on the workers' thumb drives.
Main said the division plans to manage and back up the new drives using SanDisk's Central Management & Control server software, which will soon be installed at the division's headquarters in Olympia. The software, which relies on a Web connection to directly communicate with agents on the tiny flash drives, can also remotely monitor and flush any lost drives, he said.
Each field office will run a copy of the software to handle localized management needs, he said.
Officials in the division's training operations will get Cruzer Enterprise devices with 4GB of memory to store large presentations and screenshots. Enforcement personnel will get devices that store 1GB, Main said.
Main said the division first looked at Verbatim America LLC's thumb drives in its effort to improve security but ultimately turned to the SanDisk technology because of its support for Microsoft Corp.'s Windows Vista operating system.
Cruzer Enterprise provides 256-bit AES encryption and requires users to create a password upon activation. The device automatically deletes all of its content after 10 attempts to access it with incorrect passwords. Main said the self-encrypting capability removes the "human component" from managing confidential data, a key feature for the agency.
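The lockout behavior Main describes can be sketched as a simple state machine. The class and method names below are invented for illustration; this is not SanDisk's actual firmware interface.

```python
# Sketch of the "10 wrong passwords and wipe" behavior described above.
class SecureDrive:
    MAX_ATTEMPTS = 10

    def __init__(self, password, contents):
        self._password = password
        self._contents = contents
        self._failures = 0

    @property
    def wiped(self):
        return self._contents is None

    def unlock(self, attempt):
        if self.wiped:
            raise RuntimeError("device contents were deleted")
        if attempt == self._password:
            self._failures = 0            # success resets the counter
            return self._contents
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._contents = None         # irreversible self-wipe
        return None

drive = SecureDrive("correct horse", b"case files")
for _ in range(SecureDrive.MAX_ATTEMPTS):
    drive.unlock("guess")                 # ten bad attempts: data gone
```

After the loop, the drive reports itself wiped and any further unlock attempt fails outright, which is the point: a lost drive cannot be brute-forced at leisure.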
The Division of Child Support collects about $700 million annually in child-support payments from noncustodial parents. The agency, part of the state's Department of Social and Health Services, manages 350,000 active child-support cases annually, noted Main.
Sensitive data transported by off-site workers includes tax documents, employer records, criminal histories and federal passport data of some agency clients, Main said. At the least, he noted, the drives include the names, dates of birth and Social Security numbers of children serviced by the agency.
The state began rolling out the Cruzer drives late last year after recalling the thumb drives used by workers. Most of those had been purchased independently by the employees, causing myriad problems for security personnel, Main said. The new policy requires workers to use the drives supplied by the agency. Main said he eventually plans to destroy all existing thumb drives collected as part of the security policy change.
Most companies are too enamored of the convenience, portability and low cost of USB flash drives to consider their threat to security, said Larry Ponemon, chairman of Ponemon Institute LLC, a Traverse City, Mich.-based research firm.
"I think a lot of organizations are asleep at the switch. They don't see this as a huge problem, and it obviously has the potential to be the mother of all data-protection issues," said Ponemon. "A lot of organizations believe if you have a good [security] policy and you educate people and ask them to be good, that's sufficient. The reality is, thumb drives create a lot of uncertainty because they contain enormous an amount of information."
A December 2007 survey of 691 IT security practitioners by Ponemon Institute asked respondents if they believed most employees would report a lost laptop or memory stick. While 78% said that employees would likely notify IT about a lost laptop, only 25% expected that workers would report a lost USB flash drive.
"The general perception is no one will report a lost USB memory stick because they're so cheap -- and the embarrassment factor. It's hard to even know all the different instances where information [on them] is lost or stolen," remarked Ponemon.
The agency is in talks with ControlGuard to deploy the security provider's Endpoint Access Manager Server and Endpoint Agents across its network. Access Management Server sends security policy information from a central location to agents installed at specific data points to enforce protection and monitor activities. Main said the technology would allow his office to restrict authentication and control data output access on PCs, hard drives and printers.
Second Mass Hack Exposed
Hot on the heels of a recent hack in which 10,000 sites were compromised, researchers have disclosed a new large-scale attack.
Researchers at McAfee estimated that the attack has been active for roughly one week, and in that time frame has managed to place itself on roughly 200,000 web pages.
Rather than attempt to exploit browser vulnerabilities, the attack attempts to trick a user into manually launching its malicious payload.
"This contrasts [Thursday’s] attack in that the vast majority of those were active server pages (.ASP)," explained McAfee researcher Craig Schmugar on a company blog posting.
"The ASP attacks are different than the phpBB ones in that the payload and method are quite different. Various exploits are used in the ASP attacks, where the phpBB ones rely on social engineering."
The infected pages bring up what appears to be a pornographic web site. Upon loading the page, a 'fake codec' social engineering attack is attempted. The user is told that in order to view the movie on the page, a special video codec must be installed.
The user then downloads a trojan program that installs a malware package on the user's system, then delivers a fraudulent error message telling the user that the supposed codec could not be installed.
Advanced Software Identifies Complex Cyber Network Attacks
By their very nature networks are highly interdependent and each machine’s overall susceptibility to attack depends on the vulnerabilities of the other machines in the network; new software allows IT managers to address this problem
A chain is only as strong as its weakest link, and a computer network is only as secure as the least-secure computer attached to it. Researchers at George Mason University’s Center for Secure Information Systems have developed new software that can reduce the impact of cyber attacks by identifying the possible vulnerability paths through an organization’s networks. By their very nature networks are highly interdependent and each machine’s overall susceptibility to attack depends on the vulnerabilities of the other machines in the network. Attackers can thus take advantage of multiple vulnerabilities in unexpected ways, allowing them incrementally to penetrate a network and compromise critical systems. In order to protect an organization’s networks, it is necessary to understand not only individual system vulnerabilities, but also their interdependencies. “Currently, network administrators must rely on labor-intensive processes for tracking network configurations and vulnerabilities, which requires a great deal of expertise and is error prone because of the complexity, volume and frequent changes in security data and network configurations,” says Sushil Jajodia, university professor and director of the Center for Secure Information Systems. “This new software is an automated tool that can analyze and visualize vulnerabilities and attack paths, encouraging ‘what-if analysis’.”
The software developed at Mason, CAULDRON, allows for the transformation of raw security data into roadmaps that allow users to proactively prepare for attacks, manage vulnerability risks and have real-time situational awareness. CAULDRON provides informed risk analysis, analyzes vulnerability dependencies and shows all possible attack paths into a network. In this way, it accounts for sophisticated attack strategies that may penetrate an organization’s layered defenses. CAULDRON’s intelligent analysis engine reasons through attack dependencies, producing a map of all vulnerability paths that are then organized as an attack graph that conveys the impact of combined vulnerabilities on overall security. To manage attack graph complexity, CAULDRON includes hierarchical graph visualizations with high-level overviews and detail drilldown, allowing users to navigate into a selected part of the big picture to get more information. “One example of this software in use is at the Federal Aviation Administration. They recently installed CAULDRON in their Cyber Security Incident Response Center and it is helping them prioritize security problems, reveal unseen attack paths and protect across large numbers of attack paths,” says Jajodia. “While currently being used by the FAA and defense community, the software is applicable in almost any industry or organization with a network and resources they want to keep protected, such as banking or education.”
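The kind of reasoning the article describes can be sketched in a few lines: represent "compromising host A enables exploiting a flaw on host B" as graph edges, then enumerate every path from an attacker's entry point to a critical asset. The topology below is hypothetical, and this sketch is of course not CAULDRON itself or its data formats.

```python
# Minimal sketch of attack-graph analysis in the spirit described above.
from collections import defaultdict

# Edge (a, b): compromising host a enables exploiting a flaw on host b.
exploits = [
    ("internet", "webserver"),
    ("webserver", "appserver"),
    ("appserver", "database"),
    ("webserver", "database"),   # a second, shorter path to the target
]

graph = defaultdict(list)
for src, dst in exploits:
    graph[src].append(dst)

def attack_paths(graph, start, target, path=()):
    """Enumerate every acyclic vulnerability path from start to target
    by depth-first search."""
    path = path + (start,)
    if start == target:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        if nxt not in path:              # avoid revisiting hosts
            found.extend(attack_paths(graph, nxt, target, path))
    return found

paths = attack_paths(graph, "internet", "database")
```

Even this toy network has two distinct routes to the database, which is exactly the interdependency that per-host vulnerability scans miss and an attack graph makes visible.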
Stanford Researchers Developing 3-D Camera With 12,616 Lenses
The camera you own has one main lens and produces a flat, two-dimensional photograph, whether you hold it in your hand or view it on your computer screen. On the other hand, a camera with two lenses (or two cameras placed apart from each other) can take more interesting 3-D photos.
But what if your digital camera saw the world through thousands of tiny lenses, each a miniature camera unto itself? You'd get a 2-D photo, but you'd also get something potentially more valuable: an electronic "depth map" containing the distance from the camera to every object in the picture, a kind of super 3-D.
Stanford electronics researchers, led by electrical engineering Professor Abbas El Gamal, are developing such a camera, built around their "multi-aperture image sensor." They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than the pixels in standard digital cameras. They've grouped the pixels in arrays of 256 pixels each, and they're preparing to place a tiny lens atop each array.
"It's like having a lot of cameras on a single chip," said Keith Fife, a graduate student working with El Gamal and another electrical engineering professor, H.-S. Philip Wong. In fact, if their prototype 3-megapixel chip had all its micro lenses in place, they would add up to 12,616 "cameras."
Point such a camera at someone's face, and it would, in addition to taking a photo, precisely record the distances to the subject's eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes.
But there are a number of other possibilities for a depth-information camera: biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings.
The technology is expected to produce a photo in which almost everything, near or far, is in focus. But it would be possible to selectively defocus parts of the photo after the fact, using editing software on a computer.
Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities. "People are coming up with many things they might do with this," Fife said. The three researchers published a paper on their work in the February edition of the IEEE ISSCC Digest of Technical Papers.
Their multi-aperture camera would look and feel like an ordinary camera, or even a smaller cell phone camera. The cell phone aspect is important, Fife said, given that "the majority of the cameras in the world are now on phones."
Here's how it works:
The main lens (also known as the objective lens) of an ordinary digital camera focuses its image directly on the camera's image sensor, which records the photo. The objective lens of the multi-aperture camera, on the other hand, focuses its image about 40 microns (a micron is a millionth of a meter) above the image sensor arrays. As a result, any point in the photo is captured by at least four of the chip's mini-cameras, producing overlapping views, each from a slightly different perspective, just as the left eye of a human sees things differently than the right eye.
The outcome is a detailed depth map, invisible in the photograph itself but electronically stored along with it. It's a virtual model of the scene, ready for manipulation by computation. "You can choose to do things with that image that you weren't able to do with the regular 2-D image," Fife said. "You can say, 'I want to see only the objects at this distance,' and suddenly they'll appear for you. And you can wipe away everything else."
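The geometry behind that depth map is ordinary stereo triangulation: two mini-cameras a small distance apart see the same point at slightly different pixel positions, and that disparity gives distance. The focal length, baseline, and disparities below are made-up illustrative numbers, not parameters of the Stanford sensor.

```python
# Depth from disparity between two overlapping views. A point imaged at
# pixel positions differing by disparity d, by two lenses separated by
# baseline b, with focal length f (in pixels), lies at depth z = f*b/d.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the distance to a scene point from its disparity."""
    if disparity_px <= 0:
        return float("inf")   # zero disparity: effectively at infinity
    return focal_px * baseline_m / disparity_px

# Nearby objects shift more between neighboring views than distant ones:
near = depth_from_disparity(focal_px=500, baseline_m=0.01, disparity_px=10)
far = depth_from_disparity(focal_px=500, baseline_m=0.01, disparity_px=2)
# near = 0.5 m, far = 2.5 m
```

With at least four overlapping views per point rather than two, the same triangulation can be done redundantly, which improves robustness against noise and dead pixels.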
Or the sensor could be deployed naked, with no objective lens at all. By placing the sensor very close to an object, each micro lens would take its own photo without the need for an objective lens. It has been suggested that a very small probe could be placed against the brain of a laboratory mouse, for example, to detect the location of neural activity.
Other researchers are headed toward similar depth-map goals from different approaches. Some use intelligent software to inspect ordinary 2-D photos for the edges, shadows or focus differences that might infer the distances of objects. Others have tried cameras with multiple lenses, or prisms mounted in front of a single camera lens. One approach employs lasers; another attempts to stitch together photos taken from different angles, while yet another involves video shot from a moving camera.
But El Gamal, Fife and Wong believe their multi-aperture sensor has some key advantages. It's small and doesn't require lasers, bulky camera gear, multiple photos or complex calibration. And it has excellent color quality. Each of the 256 pixels in a specific array detects the same color. In an ordinary digital camera, red pixels may be arranged next to green pixels, leading to undesirable "crosstalk" between the pixels that degrades color quality.
The sensor also can take advantage of smaller pixels in a way that an ordinary digital camera cannot, El Gamal said, because camera lenses are nearing the optical limit of the smallest spot they can resolve. Using a pixel smaller than that spot will not produce a better photo. But with the multi-aperture sensor, smaller pixels produce even more depth information, he said.
The technology also may aid the quest for the huge photos possible with a gigapixel camera—that's 140 times as many pixels as today's typical 7-megapixel cameras. The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip.
The second benefit involves chip architecture. With a billion pixels on one chip, some of them are sure to go bad, leaving dead spots, El Gamal said. But the overlapping views provided by the multi-aperture sensor provide backups when pixels fail.
The researchers are now working out the manufacturing details of fabricating the micro-optics onto a camera chip.
The finished product may cost less than existing digital cameras, the researchers say, because the quality of a camera's main lens will no longer be of paramount importance. "We believe that you can reduce the complexity of the main lens by shifting the complexity to the semiconductor," Fife said.
Understanding Anonymity and the Need for Biometrics
Mark A. Shiffrin and Avi Silberschatz
Every time we leave our homes, we enter a world dominated by strangers and anonymity. Although facial or voice recognition may help us authenticate a few of those we encounter, what about the many people we don't know? In particular, how do we authenticate ourselves to each other when we need to know who we are dealing with?
Confusing privacy with anonymity has delayed implementation of robust, virtually tamper-proof biometric authentication to replace paper-based forms of ID that neither assure privacy nor reliably prove identity. The debate over Real ID and sensitivity to creation of any form of national ID reveal a fear that anything that identifies us to others will intrude on privacy. This has led to a preoccupation with forms of ID rather than the fundamental question of how we can reliably identify ourselves to each other. This is a crucial issue: We live in a society where we are often unknown to the people we encounter, including people who need to know exactly who they are dealing with.
While anonymity implies privacy, it does not confer it. We delude ourselves into thinking we have privacy if the person next to us doesn't know our name. If we use cash and avoid technological conveniences such as credit cards and windshield-mounted RFID devices to pay highway tolls, we may think we are going about life anonymously. We are allowing ourselves to believe that our public acts, how we communicate to others by word or deed in public space, are now somehow private.
In the tight-knit communities in which people used to live, people presumed that neighbors always knew whenever someone ventured outside of his or her front door, because everyone knew each other and could see public conduct. In the global virtual neighborhood, we now live among strangers. We may have anonymity as we encounter people who are not familiar with us, but it is only an illusion that public acts are now private.
Outside our homes, we have always lived in a public space where our open acts are no longer private. Anonymity has not changed that, but has provided an illusion of privacy and security. A credit card, rather than a shopkeeper, might record our purchases. Or, the RFID chip in our EZ pass might recognize that we cross a bridge at a given moment, instead of a toll taker. But these are records of public acts in which we openly engage in a public space with no reasonable expectation of confidentiality.
In public space, we engage in open acts where we have no expectation of privacy, as well as private acts that cannot take place within our homes and therefore require authenticating identity to carve a sphere of privacy. Such private acts might involve receiving medical treatment or conducting financial transactions. Individuals have a strong interest in maintaining control of treatment records that we rightly consider confidential, and knowing that finances cannot be misappropriated or snooped without consent.
The false privacy of anonymity allows others to steal what remains private to us in public space. Personal identity is unique and should remain in our control. Our lives outside our homes include not only open acts, but also those private transactions that have to take place in space we cannot control.
The lack of reliable authentication becomes a threat to control of our own identity and confidential information, because it enables others to take advantage of living among strangers to assume a false identity undetected. Strangers can falsely assume our identities when they steal identifying information like social security or credit card numbers. They can also threaten our personal, economic and national security when they garb themselves in legitimacy by forging ID or misusing someone else's ID with or without that person's collusion.
Biometric authentication has a role in maintaining and defending our control of our own identity and personal data. This emerging technology makes it virtually impossible to assume someone else's unique identity. It is a way of providing the same kind of security in the virtual neighborhood that we once had in rooted neighborhoods, where the uniqueness of individual identity was assured by neighbors authenticating each other through facial recognition.
We have to expect that people will see us when we are in public and that our open public acts will be just that. But we have to worry that, in an anonymous world without authenticated identity, privacy will be violated when others can assume our identifying characteristics and take control of transactions and interactions outside the home that are indeed personal and unique to us. This is a threat to the sphere of privacy we take with us outside our homes, including not only our interest in maintaining control of our names and reputations, but also of transactions and records that are highly confidential to us. Authenticated identity can address this threat, as well as the threat posed to society by strangers exploiting the vulnerability of anonymity to assume false identity.
Growth of Facial Recognition Biometrics, I
More and more private and government organizations are turning to facial-recognition biometrics (think DMVs), but privacy concerns are slowing broader adoption
After a driver sits for a photo at the Illinois Secretary of State office to renew a license, officials use facial-recognition technology to give the resulting image a close look. First, state officials verify that the face matches the images portrayed on previous licenses issued under the driver’s name. The second, more extensive run-through determines if the same face appears on other Illinois driver’s licenses with different names. Washington Technology's Alice Lipowicz writes that since starting the program in 1999, the state has uncovered more than 5,000 cases of multiple identity fraud, according to Beth Langen, policy and program division administrator at the Illinois Secretary of State office. The state pays Digimarc Corp. about 25 cents per license for the service, she said. “We are very pleased. It is a fraud for which we have no other tool” to combat, Langen said.
In twenty states, home to about 40 percent of the nation’s drivers, license renewals now undergo such facial-recognition database checks. It is just one indication that after years of ups and downs, facial-recognition technology in government agencies is gaining momentum on several fronts. Facial-image-matching applications have been available for more than a decade but are just beginning to attain widespread use in government. The technology adjusts captured facial images for lighting, extracts data from each image -- such as the length of a nose or a jawline -- and uses an algorithm to compare the data from one image to other images. Facial recognition got off to a bad start when tested at the Super Bowl in Tampa, Florida, in 2001. Surveillance images of faces from the crowd generated so many false positives that the test was deemed a failure. Experts concede there are still high error rates when facial recognition is applied to images taken under less-than-ideal conditions. That type of application also spurs the greatest concern about privacy and civil rights violations.
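The matching step described above, extracting a numeric feature vector from each face and comparing vectors, can be sketched in a few lines. This is a toy illustration, not Digimarc's or any state's actual system: real matchers use far richer feature vectors and calibrated thresholds, and every name and number below is invented for the example.

```python
import math

def match_score(face_a, face_b):
    """Euclidean distance between two facial feature vectors
    (e.g., normalized nose length and jawline measurements)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(face_a, face_b)))

def is_same_person(face_a, face_b, threshold=0.5):
    # A below-threshold distance suggests the same face; the threshold
    # trades false positives against false negatives.
    return match_score(face_a, face_b) < threshold

def find_duplicates(new_face, database, threshold=0.5):
    """One-to-many check: compare a new license photo's features
    against every enrolled record, as in the Illinois fraud sweep."""
    return [name for name, face in database.items()
            if is_same_person(new_face, face, threshold)]
```

A one-to-one check at renewal is just `is_same_person` against the driver's previous photo; the one-to-many sweep for multiple-identity fraud calls `find_duplicates` across the whole license database.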
Now, however, facial recognition is considered reliable in environments in which the lighting, facial expression, angle of the head, and distance of the subject from the camera can be controlled, and interference from hats, sunglasses, and such can be minimized. The most recent test results announced in March 2007 by the National Institute of Standards and Technology (NIST) showed error rates of 1 percent or less, a huge improvement compared with previous tests. Spending for 2008 on contracts related to facial recognition is estimated at $400 million, said Peter Cheesman, a spokesman at International Biometric Group (IBG), a New York-based consulting firm. That includes $254 million for civilian agencies, $68 million for law enforcement, and about $75 million for surveillance and access control, he said. State driver’s license bureaus are in the forefront. The twenty or so state motor vehicle departments that have facial-recognition systems or are in the process of implementing them typically perform one-to-one and one-to-many matches within their states.
Growth in such applications is continuing, driven by concerns about identity theft and fraud. Along with Colorado, Illinois, Iowa, Kentucky, Wisconsin, Washington, and many others, Oregon is the latest state to install facial recognition. “Doing facial matching in state motor vehicle departments is acceptable, logical and inexpensive. More states will move toward it,” said Raj Nanavati, partner at IBG.
Voice Biometrics Gaining a Foothold
Philips and PerSay combine encryption software with technology that manages users' "voiceprints" and speech verification; both potential customers and privacy advocates say they like it
With the help of encryption, voice biometrics technology has taken a big step forward in strengthening its privacy and security measures, according to the Information and Privacy Commissioner of Ontario. The major advancements have come from Europe, where Netherlands-based electronics giant Philips has taken its biometric encryption technology and applied it to Israel-based PerSay’s "voiceprint" and speech-verification products. According to Ontario privacy commissioner Ann Cavoukian, the combination of these technologies has ushered in a new layer of privacy and security. “In the past, voice biometrics has basically been conducted in the clear and it hasn’t been encrypted,” Cavoukian said. “So when your voiceprint is sent across the network and back to the server, the information could be vulnerable. Now, you can replace that with a highly protected system that will give you the benefits of voice biometrics, but with enhanced privacy and security.”
IT World Canada's Rafael Ruffolo writes that biometric encryption is a process that securely binds a PIN or a cryptographic key to a biometric -- a physical characteristic such as a fingerprint, retina, palm print, or voice. Cavoukian referred to biometric encryption as a positive-sum technology and encouraged any organization on the fence about voice biometrics to consider adopting it with this new encryption system. “Based on these developments, I’d encourage anyone that is considering voice biometrics to look at the Philips/PerSay model and explore the encryption technology,” Cavoukian said. “I could see why people would hold back until there was a viable encryption system. But now I’m truly delighted because nothing could be more superior than biometric encryption.”
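Philips and PerSay have not published the details of their scheme, but one well-known way in the research literature to "securely bind" a key to a noisy biometric is a fuzzy commitment: XOR an error-correcting encoding of the key with the biometric bits and store only that helper data plus a hash of the key, so neither the key nor the raw biometric is stored in the clear. The sketch below uses a trivial repetition code purely for illustration; it is an assumption about the general technique, not the Philips/PerSay implementation.

```python
import hashlib

def bits_xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(biometric_bits, key_bits, rep=3):
    # Encode the key with a simple repetition code so small
    # biometric variations can be corrected at verification time.
    codeword = [b for b in key_bits for _ in range(rep)]
    helper = bits_xor(codeword, biometric_bits)
    # Store only the helper data and a hash of the key; on their own,
    # they reveal neither the raw biometric nor the key.
    return helper, hashlib.sha256(bytes(key_bits)).hexdigest()

def verify(biometric_bits, helper, key_hash, rep=3):
    noisy = bits_xor(helper, biometric_bits)
    # Majority-vote decode each group of `rep` bits to recover the key.
    key = [int(sum(noisy[i:i + rep]) * 2 > rep)
           for i in range(0, len(noisy), rep)]
    return hashlib.sha256(bytes(key)).hexdigest() == key_hash
```

A slightly noisy re-reading of the same voiceprint still decodes to the same key, while a different speaker's bits do not; real systems use much stronger error-correcting codes than repetition.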
One application for the technology involves remote voice authentication. In standard remote authentication systems, a customer’s voiceprint is collected at a terminal and sent to a processing server, which compares the voiceprint with a stored template before sending the result back to the terminal. With biometric encryption, the process is reversed: the biometrically encrypted template is sent to the terminal, rather than the voiceprint being sent out to the server. As a result, no audio is ever sent over the network. Michiel van der Veen, general manager at Philips priv-ID Biometrics, said that creating better privacy technologies will help speed the penetration of biometric solutions in the commercial market. And because of how convenient the technology can be, he said, biometrics will play an increasingly large role in the average consumer’s life. “If you start thinking about using sensitive biometric information in all kinds of applications, it means that your biometric identity is exposed in all kinds of commercial solutions and can suddenly become available to a whole lot of people,” van der Veen said. “The current solutions already respect privacy and adhere to strict guidelines. But when you add privacy solutions like we are offering today, then you basically make privacy inherent. No matter who is using the solution, you will be able to guarantee the data and the voiceprint are not misused for other purposes.”
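The altered message flow can be sketched as follows. Every class and function name here is invented for the example; only the ordering follows the description above, with the template traveling to the terminal and the live audio never leaving the device.

```python
class Server:
    """Holds biometrically encrypted templates; never sees live audio."""

    def __init__(self):
        self.templates = {}  # user_id -> encrypted template

    def enroll(self, user_id, encrypted_template):
        self.templates[user_id] = encrypted_template

    def get_template(self, user_id):
        return self.templates.get(user_id)


class Terminal:
    """Performs matching locally against the downloaded template."""

    def authenticate(self, server, user_id, live_voiceprint, match_fn):
        template = server.get_template(user_id)
        if template is None:
            return False
        # Matching runs on the terminal; no audio crosses the network.
        return match_fn(template, live_voiceprint)
```

In a real deployment, `match_fn` would be the vendor's encrypted-domain comparison; the point of the design is that even a compromised network link yields only the encrypted template, never a usable voiceprint.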
Cavoukian said that the most remarkable aspect of combining biometric encryption and voice recognition was the technical challenges Philips and PerSay were able to overcome. “What often happens is we see degradation in performance or a loss of accuracy because encrypting the voiceprint gives you far less information to work with,” she said. “The beauty of this is that not only were they successful in applying biometric encryption to voice, but we also noticed that there was no degradation of the voice technology either.” One of the biggest markets for voice biometrics is the financial sector, where banks are offering more and more of their services via the telephone. Cavoukian said increased privacy measures for these voice-authenticated systems would be a perfect fit. “This encryption technology would be ideally suited for sensitive tasks such as banking, checking your market account, or trading over the phone,” she said. “This is really just the beginning of the many possibilities I see for biometric technology.”
Identifying Manipulated Images
New tools that analyze the lighting in images help spot tampering.
Photo-editing software gets more sophisticated all the time, allowing users to alter pictures in ways both fun and fraudulent. Last month, for example, a photo of Tibetan antelope roaming alongside a high-speed train was revealed to be a fake, according to the Wall Street Journal, after having been published by China's state-run news agency. Researchers are working on a variety of digital forensics tools, including those that analyze the lighting in an image, in hopes of making it easier to catch such manipulations.
Tools that analyze lighting are particularly useful because "lighting is hard to fake" without leaving a trace, says Micah Kimo Johnson, a researcher in the brain- and cognitive-sciences department at MIT, whose work includes designing tools for digital forensics. As a result, even frauds that look good to the naked eye are likely to contain inconsistencies that can be picked up by software.
Many fraudulent images are created by combining parts of two or more photographs into a single image. When the parts are combined, the combination can sometimes be spotted by variations in the lighting conditions within the image. An observant person might notice such variations, Johnson says; however, "people are pretty insensitive to lighting." Software tools are useful, he says, because they can help quantify lighting irregularities--they can give solid information during evaluations of images submitted as evidence in court, for example--and because they can analyze more complicated lighting conditions than the human eye can. Johnson notes that in many indoor environments, there are dozens of light sources, including lightbulbs and windows. Each light source contributes to the complexity of the overall lighting in the image.
Johnson's tool, which requires an expert user, works by modeling the lighting in the image based on clues garnered from various surfaces within the image. (It works best for images that contain surfaces of a fairly uniform color.) The user indicates the surface he wants to consider, and the program returns a set of coefficients to a complex equation that represents the surrounding lighting environment as a whole. That set of numbers can then be compared with results from other surfaces in the image. If the results fall outside a certain variance, the user can flag the image as possibly manipulated.
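The final comparison step Johnson describes, checking whether the lighting coefficients estimated from different surfaces agree, can be sketched as below. Fitting the coefficients from image surfaces is the hard part and is omitted here; the function names and the tolerance value are illustrative, not taken from the actual tool.

```python
import math

def lighting_distance(coeffs_a, coeffs_b):
    """Distance between two lighting-environment coefficient vectors
    estimated from different surfaces in the same image."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(coeffs_a, coeffs_b)))

def flag_if_inconsistent(surface_coeffs, tolerance=0.2):
    # Compare each surface's estimated lighting against the first one;
    # in a genuine single-exposure photo, all surfaces should agree
    # to within the tolerance. A large disagreement flags a possible
    # composite of parts photographed under different lighting.
    baseline = surface_coeffs[0]
    return any(lighting_distance(baseline, c) > tolerance
               for c in surface_coeffs[1:])
```

The threshold plays the role of the "certain variance" mentioned above: tighten it and more composites are caught but more genuine images are flagged; loosen it and the reverse holds.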
Hany Farid, a professor of computer science at Dartmouth College, who collaborated with Johnson in designing the tool and is a leader in the field of digital forensics, says that "for tampering, there's no silver button." Different manipulations will be spotted by different tools, he points out. As a result, Farid says, there's a need for a variety of tools that can help experts detect manipulated images and can give a solid rationale for why those images have been flagged.
Neal Krawetz, who owns a computer consulting firm called Hacker Factor, presented his own image-analysis tools last month at the Black Hat 2008 conference in Washington, DC. Among his tools was one that looks for the light direction in an image. The tool focuses on an individual pixel and finds the lightest of the surrounding pixels. It assumes that light is coming from that direction, and it processes the image according to that assumption, color-coding it based on light sources. While the results are noisy, Krawetz says, they can be used to spot disparities in lighting. He says that his tool, which has not been peer-reviewed, is meant as an aid for average people who want to consider whether an image has been manipulated--for example, people curious about content that they find online.
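The "brightest neighbor" heuristic can be sketched in a few lines. As Krawetz notes, the raw result is noisy, and this toy version omits his color-coding and any smoothing; it simply records, for each interior pixel, the direction of its brightest neighbor.

```python
def light_direction_map(image):
    """For each interior pixel, point at the brightest of its eight
    neighbors -- a rough per-pixel light-direction estimate.
    `image` is a 2-D list of grayscale intensities."""
    h, w = len(image), len(image[0])
    directions = [[None] * w for _ in range(h)]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Assume the light comes from the brightest neighbor.
            directions[y][x] = max(
                offsets, key=lambda d: image[y + d[0]][x + d[1]])
    return directions
```

Regions pasted in from another photo tend to show a cluster of direction estimates that disagrees with the rest of the image, which is the disparity the tool is meant to surface.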
Cynthia Baron, associate director of digital media programs at Northeastern University and author of a book on digital forensics, is familiar with both Krawetz's and Farid's work. She says that digital forensics is a new enough field of research that even the best tools are still some distance away from being helpful to a general user. In the meantime, she says, "it helps to be on the alert." Baron notes that, while sophisticated users could make fraudulent images that would evade detection by the available tools, many manipulations aren't very sophisticated. "It's amazing to me, some of the things that make their way onto the Web and that people believe are real," she says. "Many of the things that software can point out, you can see with the naked eye, but you don't notice it."
Johnson says that he sees a need for tools that a news agency, for example, could use to quickly perform a dozen basic checks on an image to look for fraud. While it might not catch all tampering, he says, such a tool would be an important step, and it could work "like an initial spam filter." As part of developing that type of tool, he says, work needs to be done on creating better interfaces for existing tools that would make them accessible to a general audience.
Pleasing Google's Tech-Savvy Staff
Information Officer Finds Security in Gadget Freedom of Choice
How do you run the information-technology department at a company whose employees are considered among the world's most tech-savvy?
Douglas Merrill, Google Inc.'s chief information officer, is charged with answering that question. His job is to give Google workers the technology they need, and to keep them safe -- without imposing too many restrictions on how they do their job. So the 37-year-old has taken an unorthodox approach.
Unlike many IT departments that try to control the technology their workers use, Mr. Merrill's group lets Google employees download software on their own, choose between several types of computers and operating systems, and use internal software built by the company's engineers. Lately, he has also spent time evangelizing to outside clients about Google's own enterprise-software products -- such as Google Apps, an enterprise version of Google's Web-based services including email, word processing and a calendar.
Mr. Merrill, who has surfer-length hair and follows a T-shirt dress code, studied social and political organization at the University of Tulsa in Tulsa, Okla., and then went on to earn master's and doctorate degrees in psychology from Princeton University. His education in IT came largely from jobs as an information scientist at RAND Corp., senior manager at Price Waterhouse and senior vice president at Charles Schwab & Co. He joined Google in late 2003.
We sat down with Mr. Merrill to talk about Google's approach to IT. Excerpts:
The Wall Street Journal: What's the structure of the IT organization at Google?
Mr. Merrill: We're a decentralized technology organization, in that almost everyone at Google is some type of technologist. At most organizations, technology is done by one organization, and is very locked-down and very standardized. You don't have the freedom to do anything. Google's model is choice. We let employees choose from a bunch of different machines and different operating systems, and [my support group] supports all of them. It's a little bit less cost-efficient -- but on the other hand, I get slightly more productivity from my [Google's] employees.
WSJ: How do you support all of those different options effectively?
Mr. Merrill: We offer a lot more self-service. For example, let's say you want a new application to do something. You could take your laptop to a tech stop [areas in Google offices where workers can get technical support], but you can also go to an internal Web site where you download it and install the software. We allow all users to download software for themselves.
WSJ: Isn't that a security risk?
Mr. Merrill: The traditional security model is to try to tightly lock down endpoints [like computers and smartphones themselves], and it makes people sleep better at night, but it doesn't actually give them security. We put security into the infrastructure. We have antivirus and antispyware running on people's machines, but we also have those things on our mail server. We have programs in our infrastructure to watch for strange behavior. This means I don't have to worry about the endpoint as much. The traditional security model didn't really work. We had to find a new one.
WSJ: You depend in large part on open-source software or software that's built internally. What are some examples? What are the benefits?
Mr. Merrill: We do buy software where it makes sense to -- for example, we have a general ledger [accounting software] from Oracle; Oracle did a good job. Where it makes more sense to buy, we'll buy; where it makes more sense to build our own, we'll build. An example: Our [customer-relationship management] software is tightly integrated with our ad system, so we had to build our own.
We also believe there should be competition -- for instance, in operating systems, because different operating systems do different things well. We run search off of Linux. We run the Summer of Code where we pay college students to work on open-source projects that they think are useful.
WSJ: What's driving the "consumerization" of tech in the enterprise, where companies are borrowing tech ideas from the consumer Internet?
Mr. Merrill: Fifteen years ago, enterprise technology was higher-quality than consumer technology. That's not true anymore. It used to be that you used enterprise technology because you wanted uptime, security and speed. None of those things are as good in enterprise software anymore [as they are in some consumer software]. The biggest thing to ask is, "When consumer software is useful, how can I use it to get costs out of my environment?"
Google Apps is hosted on my infrastructure, and [the Premier Edition] costs roughly $50 a seat. You can go from an average of 50 megabytes of [email] storage to 10 gigabytes and more. There's better response time, you can reach email from anywhere in the world, and it's more financially effective.
WSJ: When you make that pitch to other CIOs, what are they most skeptical about?
Mr. Merrill: When I talk to Fortune 100 CIOs, they want to understand, "What is your security model? Is it really as reliable? What's the catch?"
The answer is, I had to build this massive infrastructure to run Google, so adding all the enterprise data isn't a big deal. I already had to build security standards because search logs are really private. Very few [Google employees] have access to consumer data, [and those who do] have to go through background checks. We have a rich relationship with the security community -- so when people find problems, they tell us. We have more than 150 security engineers who do nothing but security. We don't have a security priesthood: Every engineer is trained. We use automated tools that check every engineer's code.
We're able to invest in information security in a way that most people aren't. We did it because of search. In some sense, Google Apps is just a byproduct.
U.S. Adapts Cold-War Idea to Fight Terrorists
Eric Schmitt and Thom Shanker
In the days immediately after the attacks of Sept. 11, 2001, members of President Bush’s war cabinet declared that it would be impossible to deter the most fervent extremists from carrying out even more deadly terrorist missions with biological, chemical or nuclear weapons.
Since then, however, administration, military and intelligence officials assigned to counterterrorism have begun to change their view. After piecing together a more nuanced portrait of terrorist organizations, they say there is reason to believe that a combination of efforts could in fact establish something akin to the posture of deterrence, the strategy that helped protect the United States from a Soviet nuclear attack during the cold war.
Interviews with more than two dozen senior officials involved in the effort provided the outlines of previously unreported missions to mute Al Qaeda’s message, turn the jihadi movement’s own weaknesses against it and illuminate Al Qaeda’s errors whenever possible.
A primary focus has become cyberspace, which is the global safe haven of terrorist networks. To counter efforts by terrorists to plot attacks, raise money and recruit new members on the Internet, the government has mounted a secret campaign to plant bogus e-mail messages and Web site postings, with the intent to sow confusion, dissent and distrust among militant organizations, officials confirm.
At the same time, American diplomats are quietly working behind the scenes with Middle Eastern partners to amplify the speeches and writings of prominent Islamic clerics who are renouncing terrorist violence.
At the local level, the authorities are experimenting with new ways to keep potential terrorists off guard.
In New York City, as many as 100 police officers in squad cars from every precinct converge twice daily at randomly selected times and at randomly selected sites, like Times Square or the financial district, to rehearse their response to a terrorist attack. City police officials say the operations are believed to be a crucial tactic to keep extremists guessing as to when and where a large police presence may materialize at any hour. “What we’ve developed since 9/11, in six or seven years, is a better understanding of the support that is necessary for terrorists, the network which provides that support, whether it’s financial or material or expertise,” said Michael E. Leiter, acting director of the National Counterterrorism Center.
“We’ve now begun to develop more sophisticated thoughts about deterrence looking at each one of those individually,” Mr. Leiter said in an interview. “Terrorists don’t operate in a vacuum.”
In some ways, government officials acknowledge, the effort represents a second-best solution. Their preferred way to combat terrorism remains to capture or kill extremists, and the new emphasis on deterrence in some ways amounts to attaching a new label to old tools.
“There is one key question that no one can answer: How much disruption does it take to give you the effect of deterrence?” said Michael Levi, a fellow at the Council on Foreign Relations and the author of a new book, “On Nuclear Terrorism.”
The New Deterrence
The emerging belief that terrorists may be subject to a new form of deterrence is reflected in two of the nation’s central strategy documents.
The 2002 National Security Strategy, signed by the president one year after the Sept. 11 attacks, stated flatly that “traditional concepts of deterrence will not work against a terrorist enemy whose avowed tactics are wanton destruction and the targeting of innocents.”
Four years later, however, the National Strategy for Combating Terrorism concluded: “A new deterrence calculus combines the need to deter terrorists and supporters from contemplating a W.M.D. attack and, failing that, to dissuade them from actually conducting an attack.”
For obvious reasons, it is harder to deter terrorists than it was to deter a Soviet attack.
Terrorists present no obvious targets for American retaliation the way Soviet cities, factories, military bases and silos did under the cold-war deterrence doctrine. And it is far harder to pinpoint the location of a terrorist group’s leaders than it was to identify the Kremlin offices of the Politburo bosses, making it all but impossible to deter attacks by credibly threatening retaliation.
But over the six and a half years since the Sept. 11 attacks, many terrorist leaders, including Osama bin Laden and his deputy, Ayman al-Zawahri, have successfully evaded capture, and American officials say they now recognize that threats to kill terrorist leaders may never be enough to keep America safe.
So American officials have spent the last several years trying to identify other types of “territory” that extremists hold dear, and they say they believe that one important aspect may be the terrorists’ reputation and credibility with Muslims.
Under this theory, if the seeds of doubt can be planted in the mind of Al Qaeda’s strategic leadership that an attack would be viewed as a shameful murder of innocents — or, even more effectively, that it would be an embarrassing failure — then the order may never be given.
Senior officials acknowledge that it is difficult to prove what role these new tactics and strategies have played in thwarting plots or deterring Al Qaeda from attacking. Senior officials say there have been several successes using the new approaches, but many involve highly classified technical programs, including the cyberoperations, that they declined to detail.
They did point to some older and now publicized examples that suggest that their efforts are moving in the right direction.
George J. Tenet, the former director of the Central Intelligence Agency, wrote in his autobiography that the authorities were concerned that Qaeda operatives had made plans in 2003 to attack the New York City subway using cyanide devices.
Mr. Zawahri reportedly called off the plot because he feared that it “was not sufficiently inspiring to serve Al Qaeda’s ambitions,” and would be viewed as a pale, even humiliating, follow-up to the 9/11 attacks.
And in 2002, Iyman Faris, a naturalized American citizen from Kashmir, began casing the Brooklyn Bridge to plan an attack and communicated with Qaeda leaders in Pakistan via coded messages about using a blowtorch to sever the suspension cables.
But by early 2003, Mr. Faris sent a message to his confederates saying that “the weather is too hot.” American officials said that meant Mr. Faris feared that the plot was unlikely to succeed — apparently because of increased security.
“We made a very visible presence there and that may have contributed to it,” said Paul J. Browne, the New York City Police Department’s chief spokesman. “Deterrence is part and parcel of our entire effort.”
Terrorists hold little or no terrain, except on the Web. “Al Qaeda and other terrorists’ center of gravity lies in the information domain, and it is there that we must engage it,” said Dell L. Dailey, the State Department’s counterterrorism chief.
Some of the government’s most secretive counterterrorism efforts involve disrupting terrorists’ cyberoperations. In Iraq, Afghanistan and Pakistan, specially trained teams have recovered computer hard drives used by terrorists and are turning the terrorists’ tools against them.
“If you can learn something about whatever is on those hard drives, whatever that information might be, you could instill doubt on their part by just countermessaging whatever it is they said they wanted to do or planned to do,” said Brig. Gen. Mark O. Schissler, director of cyberoperations for the Air Force and a former deputy director of the antiterrorism office for the Joint Chiefs of Staff.
Since terrorists feel safe using the Internet to spread ideology and gather recruits, General Schissler added, “you may be able to interfere with some of that, interrupt some of that.”
“You can also post messages to the opposite of that,” he added.
Other American efforts are aimed at discrediting Qaeda operations, including the decision to release seized videotapes showing members of Al Qaeda in Mesopotamia, a largely Iraqi group with some foreign leaders, training children to kidnap and kill, as well as a lengthy letter said to have been written by another terrorist leader that describes the organization as weak and plagued by poor morale.
Even as security and intelligence forces seek to disrupt terrorist operations, counterterrorism specialists are examining ways to dissuade insurgents from even considering an attack with unconventional weapons. They are looking at aspects of the militants’ culture, families or religion, to undermine the rhetoric of terrorist leaders.
For example, the government is seeking ways to amplify the voices of respected religious leaders who warn that suicide bombers will not enjoy the heavenly delights promised by terrorist literature, and that their families will be dishonored by such attacks. Those efforts are aimed at undermining a terrorist’s will.
“I’ve got to figure out what does dissuade you,” said Lt. Gen. John F. Sattler, the Joint Chiefs’ director of strategic plans and policy. “What is your center of gravity that we can go at? The goal you set won’t be achieved, or you will be discredited and lose face with the rest of the Muslim world or radical extremism that you signed up for.”
Efforts are also under way to persuade Muslims not to support terrorists. It is a delicate campaign that American officials are trying to promote and amplify — but without leaving telltale American fingerprints that could undermine the effort in the Muslim world. Senior Bush administration officials point to several promising developments.
Saudi Arabia’s top cleric, Grand Mufti Sheik Abdul Aziz al-Asheik, gave a speech last October warning Saudis not to join unauthorized jihadist activities, a statement directed mainly at those considering going to Iraq to fight the American-led forces.
And Abdul-Aziz el-Sherif, a top leader of the armed Egyptian movement Islamic Jihad and a longtime associate of Mr. Zawahri, the second-ranking Qaeda official, has just completed a book that renounces violent jihad on legal and religious grounds.
Such dissents are serving to widen rifts between Qaeda leaders and some former loyal backers, Western and Middle Eastern diplomats say.
“Many terrorists value the perception of popular or theological legitimacy for their actions,” said Stephen J. Hadley, Mr. Bush’s national security adviser. “By encouraging debate about the moral legitimacy of using weapons of mass destruction, we can try to affect the strategic calculus of the terrorists.”
As the top Pentagon policy maker for special operations, Michael G. Vickers creates strategies for combating terrorism with specialized military forces, as well as for countering the proliferation of nuclear, biological or chemical weapons.
Much of his planning is old school: how should the military’s most elite combat teams capture and kill terrorists? But with each passing day, more of his time is spent in the new world of terrorist deterrence theory, trying to figure out how to prevent attacks by persuading terrorist support networks — those who enable terrorists to operate — to refuse any kind of assistance to stateless agents of extremism.
“Obviously, hard-core terrorists will be the hardest to deter,” Mr. Vickers said. “But if we can deter the support network — recruiters, financial supporters, local security providers and states who provide sanctuary — then we can start achieving a deterrent effect on the whole terrorist network and constrain terrorists’ ability to operate.
“We have not deterred terrorists from their intention to do us great harm,” Mr. Vickers said, “but by constraining their means and taking away various tools, we approach the overall deterrent effect we want.”
Much effort is being spent on perfecting technical systems that can identify the source of unconventional weapons or their components regardless of where they are found — and letting nations around the world know the United States has this ability.
President Bush has declared that the United States will hold “fully accountable” any nation that shares nuclear weapons with another state or terrorists.
Rear Adm. William P. Loeffler, deputy director of the Center for Combating Weapons of Mass Destruction at the military’s Strategic Command, said Mr. Bush’s declaration meant that those who might supply arms or components to terrorists were just as accountable as those who ordered and carried out an attack.
It is, the admiral said, a system of “attribution as deterrence.”
Wikileaks Releases Early Atomic Bomb Diagram
Wikileaks has released a diagram of the first atomic weapon, as used in the Trinity test and subsequently exploded over the Japanese city of Nagasaki, together with an extremely interesting scientific analysis. Wikileaks has not been able to fault the document or find reference to it elsewhere. Given the high quality of other Wikileaks submissions, the document may be what it purports to be, or it may be a sophisticated intelligence agency fraud, designed to mislead the atomic weapons development programs of countries like Iran. The neutron initiator is particularly novel. "When polonium is crushed onto beryllium by explosion, reaction occurs between polonium alpha emissions and beryllium leading to Carbon-12 & 1 neutron. This, in practice, would lead to a predictable neutron flux, sufficient to set off device."
The ICBM Turns 50
A cheerful note on a grim anniversary: They still haven't been fired.
To the voluminous list of ironies that attended the Cold War doctrine of mutually assured destruction, we can add one more. On its 50th birthday, the intercontinental ballistic missile, that once-commanding symbol of the apocalypse, has become a national security underdog, a defense system whose future is uncertain, whose ranks are dwindling and whose utility in the 21st century is in serious question. That might gladden aging peaceniks whose Volvos sported "Nuclear weapons: May they rust in peace" bumper stickers during the Reagan era, but these days hawks and doves are equally likely to regard the ICBM with suspicion.
Consider the numbers. From a 1969 peak of 1,054, the Air Force now fields 450 missiles. Within the last three years the United States has retired 100 ICBMs, including the entire run of Peacekeepers, which began life as the controversial "MX" missile in the '70s. Mighty Vandenberg Air Force Base, where the first nuclear-tipped Atlas rocket facilities were built in 1958, lives on as a spaceport and missile testing facility, but today 22 square miles of mostly undeveloped coastal land in Santa Barbara County look more like a lost opportunity in real estate than an urgent military asset. The last Titan II rocket (decommissioned from missile duty in 1987) took off from Vandenberg in 2003, carrying a payload for the Defense Meteorological Satellite Program; the three-stage Minuteman (1962- ) is now the only land-based ICBM in the U.S. arsenal. Much of the action in America's ongoing wars is conducted by unmanned aerial vehicles, and the Air Force is engaged in various great debates about next-generation weapons, including the very interesting question of whether piloted fighters and bombers have any future. How can the ICBM help but seem like the last Hula Hoop in the age of the RipStik?
During a recent visit to Vandenberg to help mark the semi-centennial of nuclear-tipped missiles, Maj. Gen. Thomas F. Deppe made a compelling case for the ICBM. Wearing boots and digital camouflage and speaking without notes or coffee in a windowless office, the burly vice commander of Air Force Space Command at Colorado's Peterson Air Force Base acknowledged the waning of the fleet but pointed out that the ICBM remains a vital deterrent, at least to clearly delineated state-to-state war: "The beauty of the ICBM is that it tremendously complicates matters for any adversary attacking this country."
Is that true, though? After all, the nuclear umbrella doesn't seem to have complicated the first foreign assault on U.S. soil of the 21st century. But Deppe, who began his Air Force career as an enlisted instrumentation technician in 1967 and has worked in missiles for most of his adult life, points not to the attacks that occurred on Sept. 11, 2001, but to the many that didn't occur in the 50 years before that. "The lesson of the Cold War is that strategic deterrence works," Deppe said. "There are a number of nations, and unfortunately that number is on the rise, that are developing nuclear capability, that have ballistic missile capability that can reach this country. The question of deterrence, and how much is enough, goes back to my earliest years in the Air Force. And really, it's impossible to measure how much is enough. You'll know if you don't have enough, but you'll never know if you have too much. Is 450 the right number? Apparently it is, because we're deterring aggressors. But is 449 not enough?"
Don't expect to find out any time soon. The Air Force is completing a $7-billion upgrade of its Minuteman assets, a "nosecone to nozzle" spiffing up that will keep the missile in place until about 2030. What will come after that? Strategic Command has been considering the possibility of conventional ICBMs for years. In planning for an eventual Minuteman replacement, the Air Force is looking for smarter, more accurate delivery systems, but it is not ignoring the continuing value of being able to deliver nasty surprises from outer space. "The ICBM remains the single most prompt weapon we have," Deppe noted. "It can reach out and touch somebody anywhere in the world in 45 minutes."
Which lends one cheerful note to this grim anniversary: In all these years, the things still haven't been used. Unlike carrier fleets or rapid-deployment forces, the ICBM was not about power projection or foreign intervention but about persuading a lethal adversary not to attack the U.S. The strangest possible outcome of mutually assured destruction was the one that came to pass: Two political and economic systems competed without coming to blows, and the better system prevailed. That's no less astounding now than it was in the '90s -- or for that matter the '50s, when those missileers first went underground with their little keys, awaiting orders that never came.
Far Out! Peace Symbol Turns 50
A new book — out in April — traces the origin and history of the peace sign
Baby Boomers may recall it through a swirl of tear gas, scrawled on walls, on signs in marches and silent sit-ins, or on the helmet covers of weary Vietnam soldiers.
The peace sign, which turns 50 in April, was introduced in a calmer Britain in 1958 to promote nuclear disarmament, and spread fast as times got tense.
Since its inception, it has been revered as a sign of our better angels and cursed as the "footprint of the American chicken."
The symbol that helped define a generation is less evident now, but it is far from forgotten. After what it went through, how could it be?
National Geographic Books is out with "Peace: The Biography of a Symbol," by Ken Kolsbun and Michael Sweeney, which traces the simple symbol from its scratched-out origins based on the semaphore flag positions for N and D (nuclear disarmament) to the influence it had, and retains, in social movements.
While the book details how the symbol came to be and how it spread, it focuses more on the backdrop of the peace movement generally, from its antecedents in the McCarthyism of the 1950s to nuclear proliferation, Vietnam, Kent State and the 1968 Chicago Democratic Convention to its later promotions of other causes.
It has become "a rallying cry for almost any group working for social change," the authors write.
The book is enhanced by numerous photos, some chillingly familiar, some simply nostalgic.
Who can forget the frantic teenager kneeling over the fallen student at Kent State University? Or the student sticking a flower in the barrel of a National Guard rifle? Or the whaling ship bearing down on a Greenpeace raft? Or Woodstock?
The symbol itself was created by a British pacifist textile designer, Gerald Holtom, who initially considered using a cross but got an icy reception from some of the churches he sought as allies.
So on a wet, chilly Good Friday — April 4, 1958 — the symbol as we know it made its debut in London's Trafalgar Square where thousands gathered to support a "ban the bomb" movement and to make a long march to Aldermaston, where atomic weapons research was being done.
While Holtom designed the symbol, the U.S. Patent and Trademark Office ruled in 1970 that it is in the public domain. It was quickly commercialized, showing up, among other places, on packages of Lucky Strike cigarettes, but also on a 1999 postage stamp after a public vote to pick 15 commemoratives to honor the 1960s.
Kolsbun is a jack of many trades that include longtime and enthusiastic peace activism, a propensity that shows through. Sweeney is a professor of journalism at Utah State University.
If you recall the mood and times of the '60s and '70s, the book will take you back. Depending on your level of enthusiasm then, you might imagine a whiff of tear gas. Or recall the better times of the 1967 Summer of Love, which a lot of GIs remember another way.
Holtom clung to his pacifist beliefs to the end, which came on Sept. 18, 1985, at age 71. His will requested that his grave marker be carved with two of his peace symbols, inverted.
For reasons unclear, the authors write, they aren't inverted. They're exactly the way he made them.
Maybe that's why.
Arthur C. Clarke, 90, Science Fiction Writer, Dies
Arthur C. Clarke, a writer whose seamless blend of scientific expertise and poetic imagination helped usher in the space age, died early Wednesday in Colombo, Sri Lanka, where he had lived since 1956. He was 90.
Rohan de Silva, an aide, confirmed the death and said Mr. Clarke had been experiencing breathing problems, The Associated Press reported. He had suffered from post-polio syndrome for the last two decades.
The author of almost 100 books, Mr. Clarke was an ardent promoter of the idea that humanity’s destiny lay beyond the confines of Earth. It was a vision served most vividly by “2001: A Space Odyssey,” the classic 1968 science-fiction film he created with the director Stanley Kubrick and the novel of the same title that he wrote as part of the project.
His work was also prophetic: his detailed forecast of telecommunications satellites in 1945 came more than a decade before the first orbital rocket flight.
Other early advocates of a space program argued that it would pay for itself by jump-starting new technology. Mr. Clarke set his sights higher. Borrowing a phrase from William James, he suggested that exploring the solar system could serve as the “moral equivalent of war,” giving an outlet to energies that might otherwise lead to nuclear holocaust.
Mr. Clarke’s influence on public attitudes toward space was acknowledged by American astronauts and Russian cosmonauts, by scientists like the astronomer Carl Sagan and by movie and television producers. Gene Roddenberry credited Mr. Clarke’s writings with giving him courage to pursue his “Star Trek” project in the face of indifference, even ridicule, from television executives.
In his later years, after settling in Ceylon (now Sri Lanka), Mr. Clarke continued to bask in worldwide acclaim as both a scientific sage and the pre-eminent science fiction writer of the 20th century. In 1998, he was knighted by Queen Elizabeth II.
Mr. Clarke played down his success in foretelling a globe-spanning network of communications satellites. “No one can predict the future,” he always maintained. But as a science fiction writer he couldn’t resist drawing up timelines for what he called “possible futures.” Far from displaying uncanny prescience, these conjectures mainly demonstrated his lifelong, and often disappointed, optimism about the peaceful uses of technology — from his calculation in 1945 that atomic-fueled rockets could be no more than 20 years away to his conviction in 1999 that “clean, safe power” from “cold fusion” would be commercially available in the first years of the new millennium.
Popularizer of Science
Mr. Clarke was well aware of the importance of his role as science spokesman to the general population: “Most technological achievements were preceded by people writing and imagining them,” he noted. “I’m sure we would not have had men on the Moon,” he added, if it had not been for H. G. Wells and Jules Verne. “I’m rather proud of the fact that I know several astronauts who became astronauts through reading my books.”
Arthur Charles Clarke was born on Dec. 16, 1917, in the seaside town of Minehead, Somerset, England. His father was a farmer; his mother a post office telegrapher. The eldest of four children, he was educated as a scholarship student at a secondary school in the nearby town of Taunton. He remembered a number of incidents in early childhood that awakened his scientific imagination: exploratory rambles along the Somerset shoreline, with its “wonderland of rock pools”; a card from a pack of cigarettes that his father showed him, with a picture of a dinosaur; the gift of a Meccano set, a British construction toy similar to American Erector Sets.
He also spent time, he said, “mapping the moon” through a telescope he constructed himself out of “a cardboard tube and a couple of lenses.” But the formative event of his childhood was his discovery, at age 13 — the year his father died — of a copy of Astounding Stories of Super-Science, then the leading American science fiction magazine. He found its mix of boyish adventure and far-out (sometimes bogus) science intoxicating.
While still in school, he joined the newly formed British Interplanetary Society, a small band of sci-fi enthusiasts who held the controversial view that space travel was not only possible but could be achieved in the not-so-distant future. In 1937, a year after he moved to London to take a civil service job, he began writing his first science fiction novel, a story of the far, far future that was later published as “Against the Fall of Night” (1953).
Mr. Clarke spent World War II as an officer in the Royal Air Force. In 1943 he was assigned to work with a team of American scientist-engineers who had developed the first radar-controlled system for landing airplanes in bad weather. That experience led to Mr. Clarke’s only non-science fiction novel, “Glide Path” (1963). More important, it led in 1945 to a technical paper, published in the British journal Wireless World, establishing the feasibility of artificial satellites as relay stations for Earth-based communications.
The meat of the paper was a series of diagrams and equations showing that “space stations” parked in a circular orbit roughly 22,240 miles above the equator would exactly match the Earth’s rotation period of 24 hours. In such an orbit, a satellite would remain above the same spot on the ground, providing a “stationary” target for transmitted signals, which could then be retransmitted to wide swaths of territory below. This so-called geostationary orbit has been officially designated the Clarke Orbit by the International Astronomical Union.
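The altitude Clarke computed falls straight out of Kepler's third law: set a circular orbit's period equal to one rotation of the Earth and solve for the radius. The sketch below (not from the obituary; the constants are standard textbook values, and it uses the sidereal day of about 23 h 56 m rather than the article's rounded 24 hours) recovers his figure:

```python
import math

# Kepler's third law for a circular orbit: T^2 = 4*pi^2 * r^3 / (G*M).
# Solving for r gives the geostationary radius; subtracting Earth's
# equatorial radius gives the altitude Clarke cited.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # Earth's mass, kg
R_EARTH = 6.378e6      # Earth's equatorial radius, m
T_SIDEREAL = 86164.1   # one sidereal day in seconds (~23 h 56 m 4 s)

r = (G * M_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
altitude_miles = altitude_km / 1.609344   # ~35,786 km, ~22,236 miles

print(f"{altitude_km:.0f} km (~{altitude_miles:.0f} miles)")
```

The result lands within a few miles of the "roughly 22,240 miles" in the paragraph above, which is why a satellite parked there appears fixed over one spot on the equator.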
Decades later, Mr. Clarke called his Wireless World paper “the most important thing I ever wrote.” In a wry piece entitled, “A Short Pre-History of Comsats, Or: How I Lost a Billion Dollars in My Spare Time,” he claimed that a lawyer had dissuaded him from applying for a patent. The lawyer, he said, thought the notion of relaying signals from space was too far-fetched to be taken seriously.
But Mr. Clarke also acknowledged that nothing in his paper — from the notion of artificial satellites to the mathematics of the geostationary orbit — was new. His chief contribution was to clarify and publicize an idea whose time had almost come: it was a feat of consciousness-raising of the kind he would continue to excel at throughout his career.
A Fiction Career Is Born
The year 1945 also saw the start of Mr. Clarke’s career as a fiction writer. He sold a short story called “Rescue Party” to the same magazine — now re-titled Astounding Science Fiction — that had captured his imagination 15 years earlier.
For the next two years Mr. Clarke attended King’s College, London, on the British equivalent of a G.I. Bill scholarship, graduating in 1948 with first-class honors in physics and mathematics. But he continued to write and sell stories, and after a stint as assistant editor at the scientific journal Physics Abstracts, he decided he could support himself as a free-lance writer. Success came quickly. His primer on space flight, “The Exploration of Space,” became an American Book-of-the-Month Club selection.
Over the next two decades he wrote a series of nonfiction bestsellers as well as his best-known novels, including “Childhood’s End” (1953) and “2001: A Space Odyssey” (1968). For a scientifically trained writer whose optimism about technology seemed boundless, Mr. Clarke delighted in confronting his characters with obstacles they could not overcome without help from forces beyond their comprehension.
In “Childhood’s End,” a race of aliens who happen to look like devils imposes peace on an Earth torn by Cold War tensions. But the aliens’ real mission is to prepare humanity for the next stage of evolution. In an ending that is both heartbreakingly poignant and literally earth-shattering, Mr. Clarke suggests that mankind can escape its suicidal tendencies only by ceasing to be human.
“There was nothing left of Earth,” he wrote. “It had nourished them, through the fierce moments of their inconceivable metamorphosis, as the food stored in a grain of wheat feeds the infant plant while it climbs towards the Sun.”
The Cold War also forms the backdrop for “2001.” Its genesis was a short story called “The Sentinel,” first published in a science fiction magazine in 1951. It tells of an alien artifact found on the Moon, a little crystalline pyramid that explorers from Earth destroy while trying to open. One explorer realizes that the artifact was a kind of fail-safe beacon; in silencing it, human beings have signaled their existence to its far-off creators.
Enter Stanley Kubrick
In the spring of 1964, Stanley Kubrick, fresh from his triumph with “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb,” met Mr. Clarke in New York, and the two agreed to make the “proverbial really good science fiction movie” based on “The Sentinel.” This led to a four-year collaboration; Mr. Clarke wrote the novel and Mr. Kubrick produced and directed the film; they are jointly credited with the screenplay.
Many reviewers were puzzled by the film, especially the final scene in which an astronaut who has been transformed by aliens returns to orbit the Earth as a “Star-Child.” In the book he demonstrates his new-found powers by detonating from space the entire arsenal of Soviet and United States nuclear weapons. Like much of the plot, this denouement is not clear in the film, from which Mr. Kubrick cut most of the expository material.
As a fiction writer, Mr. Clarke was often criticized for failing to create fully realized characters. HAL, the mutinous computer in “2001,” is probably his most “human” creation: a self-satisfied know-it-all with a touching but misguided faith in his own infallibility.
If Mr. Clarke’s heroes are less than memorable, it’s also true that there are no out-and-out villains in his work; his characters are generally too busy struggling to make sense of an implacable universe to engage in petty schemes of dominance or revenge.
Mr. Clarke’s own relationship with machines was somewhat ambivalent. Although he held a driver’s license as a young man, he never drove a car. Yet he stayed in touch with the rest of the world from his home in Sri Lanka through an ever-expanding collection of up-to-date computers and communications accessories. And until his health declined, he was an expert scuba diver in the waters around Sri Lanka.
He first became interested in diving in the early 1950s, when he realized that he could find underwater, he said, something very close to the weightlessness of outer space. He settled permanently in Colombo, the capital of what was then Ceylon, in 1956. With a partner, he established a guided diving service for tourists and wrote vividly about his diving experiences in a number of books, beginning with “The Coast of Coral” (1956).
Of his scores of books, some, like “Childhood’s End,” have been in print continuously. His works have been translated into some 40 languages, and worldwide sales have been estimated at more than $25 million.
In 1962 he suffered a severe attack of polio. His apparently complete recovery was marked by a return to top form at his favorite sport, table tennis. But in 1984 he developed post-polio syndrome, a progressive condition characterized by muscle weakness and extreme fatigue. He spent the last years of his life in a wheelchair.
Clarke’s Three Laws
Among his legacies are Clarke’s Three Laws, provocative observations on science, science fiction and society that were published in his “Profiles of the Future” (1962):
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
“The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”
“Any sufficiently advanced technology is indistinguishable from magic.”
Along with Verne and Wells, Mr. Clarke said his greatest influences as a writer were Lord Dunsany, a British fantasist noted for his lyrical, if sometimes overblown, prose; Olaf Stapledon, a British philosopher who wrote vast speculative narratives that projected human evolution to the farthest reaches of space and time; and Herman Melville’s “Moby-Dick.”
While sharing his passions for space and the sea with a worldwide readership, Mr. Clarke kept his emotional life private. He was briefly married in 1953 to an American diving enthusiast named Marilyn Mayfield; they separated after a few months and were divorced in 1964, having had no children.
One of his closest relationships was with Leslie Ekanayake, a fellow diver in Sri Lanka, who died in a motorcycle accident in 1977. Mr. Clarke shared his home in Colombo with his friend’s brother, Hector, his partner in the diving business; Hector’s wife, Valerie; and their three daughters.
Mr. Clarke reveled in his fame. One whole room in his house — which he referred to as the Ego Chamber — was filled with photos and other memorabilia of his career, including pictures of him with Yuri Gagarin, the first man in space, and Neil Armstrong, the first man to walk on the moon.
Mr. Clarke’s reputation as a prophet of the space age rests on more than a few accurate predictions. His visions helped bring about the future he longed to see. His contributions to the space program were lauded by Charles Kohlhase, who planned NASA’s Cassini mission to Saturn and who said of Mr. Clarke, “When you dream what is possible, and add a knowledge of physics, you make it happen.”
At the time of his death he was working on another novel, “The Last Theorem,” Agence France-Presse reported. “‘The Last Theorem’ has taken a lot longer than I expected,” the agency quoted him as saying. “That could well be my last novel, but then I’ve said that before.”
Until next week,
Current Week In Review
Recent WiRs -
March 15th, March 8th, March 1st, February 23rd
Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, questions and comments in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.
"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public." - Hugo Black