Peer-To-Peer News - The Week In Review - December 26th, '09

Since 2002

"It's just ridiculous. I bought it from a Korean guy on the street for five bucks. Then I uploaded it. I didn't make any money." – Gilberto Sanchez


"Consumer consumption of home entertainment might be at an all-time high -- a point alluded to three months back when the Digital Entertainment Group released those third-quarter numbers. According to DEG's figures, overall consumer home video transactions were up a solid 6.6% from the previous year." – Thomas K. Arnold




December 26th, 2009




The Five Legal Cases that Defined the Year in Music
Ben Sheffner

Almost a decade after the major labels launched their legal assault on Napster, courts are still writing the rules of the road for the music business's digital future.

Companies can't set out to build a business based on their users' infringement of copyright, courts had already ruled. But the precise meaning of that dictate remains in doubt. What steps must sites take to combat infringement? What are the proper penalties for those who infringe? This year, courts inched toward resolution of these questions, giving labels, publishers and artists a bit more certainty as they decide whom to work with and whom to sue.

Below are 2009's top five cases that will shape the future of the music business.

UMG RECORDINGS V. VEOH NETWORKS

In September, a federal judge in Los Angeles ruled decisively against Universal Music Group in the label's copyright suit against video-sharing site Veoh.com. UMG had argued to the court that Veoh was liable for copyright infringement by encouraging users to upload videos, which Veoh translated into the proper format, organized and categorized, then ultimately streamed to millions of Web surfers -- all without paying copyright owners. But the court held that Veoh qualified for a "safe harbor" under the 1998 Digital Millennium Copyright Act, because the site followed a policy of promptly taking down videos upon notification from UMG and kicking "repeat infringers" off the site.

In the pre-Internet world, the burden was always on the distributor to obtain proper licenses before exploiting a copyrighted work. But the ruling in the Veoh suit dealt a significant blow to copyright owners' efforts to maintain total control. Under the court's interpretation of the DMCA, a Web-based company can enlist its users to upload unlicensed works, and it's up to the copyright owner to issue takedown notices -- sometimes multiple times. If upheld on appeal, the decision represents a major shift in power from copyright owners toward online companies that rely on user-generated content.

CAPITOL RECORDS V. THOMAS-RASSET; SONY BMG MUSIC ENTERTAINMENT V. TENENBAUM

Of the more than 17,000 individuals the major labels targeted for downloading and "sharing" songs through peer-to-peer networks, only Jammie Thomas-Rasset and Joel Tenenbaum fought all the way to trial. They both lost badly. A Minneapolis jury socked Thomas-Rasset with a whopping $1.9 million verdict for infringing 24 songs, and a Boston jury ordered Tenenbaum to pay $675,000 after he admitted to infringing 30 works.

The labels announced in late 2008 that they would stop initiating new suits against individual file sharers, so more such trials seem unlikely. But the enormous size of these verdicts could have a lasting impact on all copyright owners who litigate or even threaten lawsuits. The awards are under serious attack as unconstitutionally excessive, and in one or both cases, the court could take the unprecedented step of ruling that the Constitution limits copyright statutory damages. Since such a determination would deprive copyright owners of a powerful tactic, it would likely make the enforcement of their rights more complicated and more expensive. Depending on the outcome of post-trial motions and appeals, the labels' victories against Thomas-Rasset and Tenenbaum could prove Pyrrhic.

SWEDEN V. THE PIRATE BAY

It wasn't your average legal proceeding -- it was part trial, part spectacle. And the case against the operators of the Pirate Bay, the world's most popular access point to the BitTorrent file-sharing network, was odd to U.S. legal observers for another reason: It combined a criminal case brought by the government of Sweden with a civil copyright action pressed by major record labels, movie studios and game publishers. But the result echoed that of similar fights in the United States against piracy facilitators like Napster, Grokster, Aimster, TorrentSpy and Usenet.com: a verdict for the plaintiffs and harsh punishment -- a year in prison and an award of $3.5 million in damages -- for the four individual defendants.

But as with the earlier victories, the practical import of the case is harder to pin down. Yes, it's another clear statement that facilitation of piracy is illegal. But the Pirate Bay's servers have already migrated several times to other countries, users can easily migrate to other similar sites, and appeals will drag on for years. The case is a stark reminder that even big legal victories don't necessarily translate into big reductions in copyright infringement. And there are lots of other Pirate Bay wannabes ready to step into the now-convicted defendants' shoes.

BRIDGEPORT MUSIC V. UMG RECORDINGS

If anyone still doubts that recording artists must obtain proper licenses before incorporating samples of others' works into songs, the U.S. Court of Appeals for the Sixth Circuit cleared up that confusion November 4. That's when the court issued a decision upholding a jury verdict of $88,980 against Universal for sampling George Clinton's lyric "Bow wow wow, yippie yo, yippie yea" and the word "dog" from "Atomic Dog" in a 1998 song called "D.O.G. in Me" by R&B group Public Announcement.

Universal had contended that the sampling of the famous musical phrase was a fair use for which a license or payment wasn't required. But the jury didn't buy that argument, and the court of appeals held that the jury's verdict was "not unreasonable." The Sixth Circuit's ruling -- not to mention more than 500 similar sampling lawsuits filed by publisher Bridgeport -- sends a clear message to artists and labels: If you want to sample, first get a license. And don't expect the fair use defense to protect you.

ARISTA RECORDS V. USENET.COM

In 2005, copyright owners achieved one of their most significant legal victories, when the Supreme Court held in MGM v. Grokster that peer-to-peer infringement facilitators could be held liable for "inducing" their users to infringe. But the Grokster decision didn't wipe out piracy, and its strong endorsement of the inducement doctrine hasn't resulted in a slew of subsequent court victories for labels and studios. Nonetheless, a federal court's June 30 decision in Arista Records v. Usenet.com was another setback for sites that seek to build a business based on users' copyright infringement.

Among the factors the court cited as supporting liability were the overwhelming use of Usenet.com's service for infringement, the fact that the site advertised the availability of infringing works and the technical assistance it provided to users seeking pirated material. The court also noted that Usenet.com could have, but refused to, employ filters to block downloads of infringing material. Though Usenet may be a relatively small and obscure corner of the Internet, the ruling could still pressure other questionably legal online services to take concrete steps to combat user piracy. And the court's opinion will be cited for years to come by copyright owners seeking to shut down more visible, and harmful, piracy-facilitating sites.
http://www.washingtonpost.com/wp-dyn...121803800.html





U.S. Court Rules Against isoHunt For Inducing Copyright Infringement
Michael Geist

A U.S. federal court in California has issued a summary judgment against Canadian-based isoHunt (and its owner Gary Fung), ruling that the site violates U.S. copyright law by inducing copyright infringement. The judge ruled that the isoHunt case is little different from other U.S. cases such as Napster and Grokster, therefore concluding that there is no need to proceed to a full trial, and granted Columbia Pictures' request for summary judgment.

The court is dismissive of claims that differences in BitTorrent technology raise legal issues distinct from those addressed in the other peer-to-peer cases. Moreover, the decision includes an extensive discussion of Fung's comments regarding P2P file sharing, which proved damaging to his case. The court also notably concludes that inducement liability and the safe harbours found under the DMCA are incompatible - if you are found to have induced infringement, you cannot qualify for the safe harbour. IsoHunt will presumably appeal the U.S. decision, but it marks a resounding victory for the movie studios that launched the case.
http://www.michaelgeist.ca/content/view/4643/125/





Big Music: Damn the Numbers, Give Us Antipiracy Laws Anyway

If P2P use is declining or holding steady without new "antipiracy" laws, are those laws still needed? Music trade groups say yes.
Nate Anderson

The UK has just started to consider a new Digital Economy bill that could eventually usher in sanctions for illegal P2P use. From a rightsholder perspective, this makes it an inconvenient time for studies showing that P2P use is actually dropping, so the music industry commissioned a new study of its own which shows that other techniques for infringing copyright are picking up the slack. Would you believe that newsgroup usage is soaring?

In summer 2009, a survey found that UK broadband users who also happened to be "music fans" were using P2P less than ever—only 17 percent swapped copyrighted files with regularity, down from 22 percent back in 2007. And rather than switching to other piratical modes of acquisition, many of these users were just going to legal streaming services like Spotify, YouTube, and MySpace instead.

While good news from one perspective, such a finding could put a damper on the idea (found in the new Digital Economy bill) that the Secretary of State should be granted fast-track authority to impose all sorts of sanctions on Internet-using copyright infringers. BPI, the UK's major label music trade group, has now countered with a new study of its own, one which is big on percentages but light on real numbers.

The study was performed by Harris Interactive in November 2009, and even BPI's writeup of the results has to admit that "peer-to-peer use remains level" at 23 percent. But, says the trade group, "non-P2P methods to acquire music illegally have grown significantly in last six months, and are expected to keep growing," and it has the shocking stats to prove it.

In the last six months, the survey shows big increases in the use of overseas unlicensed MP3 pay sites (47 percent surge!), newsgroups (42 percent increase!), MP3 search engines (28 percent upswing!) and forum, blog and board links to cyberlockers (18 percent explosion!).

Must! Act! Now!

But percentage increases don't mean much on their own, and it's clear that the non-P2P usage stats are relatively low. BPI stresses that P2P remains by far the largest threat to its business, though just about any data can justify a call to pass the Digital Economy bill.

For instance, though the industry's own evidence shows P2P use holding steady, and outside surveys show it declining, BPI boss Geoff Taylor stresses the still-too-high P2P usage level in his call to action. "It’s disappointing that levels of illegal P2P use remain high despite this and the publicity surrounding imminent measures to address the problem," he said. "It's vital that those measures come into force as quickly as possible."

Shift the discussion to non-P2P methods of infringement and the argument changes dramatically. It's not about high levels, but growth rates. "The growth in other, non-P2P methods of downloading music illegally is a concern, and highlights the importance of including a mechanism in the Digital Economy Bill to deal with threats other than P2P," he said in the same press release.

It's easy to look at all of this another way, too, which is that P2P is a serious issue—but it's not singlehandedly destroying music. Despite numerous claims and studies over the years about P2P sucking up anywhere from 40 to 90 percent of backbone Internet bandwidth, the P2P usage rate just isn't that high.

The BPI-commissioned study puts P2P use in the UK at 23 percent (among Internet users aged 16-54). The global music industry trade group IFPI says that P2P use hovers at around 18 percent in Europe as a whole. And the Leading Questions survey from summer 2009 puts the UK P2P use rate at only 17 percent—and it even included 14- and 15-year-olds in its study. The numbers are certainly high, but are they we-demand-you-rewrite-the-laws-immediately high?

Plenty of skeptics say no, and not just the usual digital rights advocates. This summer, EU Commissioner Viviane Reding said that increased piracy was a "vote of no-confidence in existing business models and legal solutions," adding that it should be a "wake-up call for policy-makers." The UK's All Party Parliamentary Communications Group also concluded this year that "much of the problem with illegal sharing of copyrighted material has been caused by the rightsholders, and the music industry in particular, being far too slow in getting their act together and making popular legal alternatives available."

At least the music industry has recognized this problem for some time, and BPI's Geoff Taylor begins his recent statement by stressing just how far the industry has come: "There are now more than thirty-five legal digital music services in the UK, offering music fans a great choice of ways to get music legally."
http://arstechnica.com/tech-policy/n...-need-laws.ars





A Year Out, Where's RIAA's Promised ISP Help?
Greg Sandoval

A decade after the rise of Napster and a year after promising a new antipiracy strategy, the Recording Industry Association of America appears to be floundering on the piracy front.

The plan adopted last year by the RIAA, the trade group for the four largest recording companies, in place of its controversial litigation campaign seems to have gone nowhere. The RIAA said at the time that it had struck partnerships with major Internet service providers, the Web's true gatekeepers, and that they would help choke off online piracy.

It was all supposed to be a done deal. The Wall Street Journal, which broke the news about the RIAA's strategy shift, wrote on December 19, 2008, that the RIAA had "hashed out preliminary agreements with major ISPs." According to the Journal story, the ISPs were supposed to join a deterrent program designed to gradually increase pressure on accused copyright violators. As part of the so-called "graduated response," RIAA officials told me that ramifications for repeat offenders would escalate, starting with the sending of multiple letters that could take an increasingly strong tone. Eventually, as the Journal noted, "the ISP may cut off their access altogether."

Music execs had told me much the same thing and I wrote last year that AT&T and Comcast were testing their own graduated responses. But a year after the Journal's initial story, the number of ISPs that have acknowledged adopting the RIAA's graduated response program is zero. In addition, many of the big ISPs, such as AT&T and Comcast, have gone out of their way to deny that they would ever interrupt service to customers simply because they were accused of copyright violations by the film or music industries. To do that, they would need a court order.

Some ISPs, including AT&T, Comcast, and Verizon, appear to be sending greater numbers of their own warning letters--in addition to those sent by content owners--to customers suspected of file sharing. The letters typically notify customers that they have been accused of illegally sharing songs and inform them that such activity is illegal.

But here's the big question about the RIAA's graduated response plan: is it worth anything without a legitimate threat backing it up? It's difficult to believe that sending letters is enough of a deterrent.

Mitch Bainwol, the RIAA's chairman and CEO, acknowledges that his organization hasn't achieved all of the goals it laid out a year ago, but he says that the ISP strategy is well thought out, progressing, and has already seen dramatic results.

"We've seen a million notices [from ISPs to customers suspected of file sharing] go out over the past year and that is certainly meaningful," Bainwol told CNET last week. "Are we prepared to make an announcement that is broad in scope and cuts across ISPs? No. Are we engaged in significant discussions that we believe will ultimately prove productive? Hell yes."

Maybe so, but these deals were supposed to have been done or nearly done a year ago. What happened to those "hashed out preliminary agreements" that the Journal wrote about?

Missing teeth

Multiple music sources have told me over the past month that RIAA leaders were feeling pressure to drop the lawsuit campaign, but were also being lobbied by some at the labels to put some kind of deterrent in place, even if totally toothless. They didn't want the public to think there weren't any consequences to pirating music, even if the reality was exactly that.

According to those sources, the announcement about the ISP strategy last December was little more than a scarecrow.

Bainwol didn't comment on that but did say: "The substance of our pivot to ISPs is in fact accurate. The broader arrangement that cuts across the ISP community is still out there to be tied down. There clearly are discussions going on."

The reason that some at the labels wanted an end to the litigation is that for years it brought down mountains of public scorn. The lawsuits were also expensive and RIAA's members wanted costs slashed, which happened earlier this year.

The decision was made to continue to pursue the suits already in the courts, but the widescale practice of suing individuals was over.

Here's the other reason that several of the music-industry sources say the RIAA acted before any deal was done: to fire a shot across the bow of some of the ISPs that were dragging their feet. By spreading the word that the RIAA had sewn up a deal with a group of big ISPs, RIAA managers hoped they were ratcheting up the pressure to join, sources say.

They also turned to Andrew Cuomo, New York's state attorney general, to nudge the ISPs into fighting piracy in the same way he pushed them to combat child pornography, said two music industry sources. This not only rubbed some ISP execs the wrong way, but unlike with the porn problem, the law was all on the side of the ISPs.

Nothing in the Digital Millennium Copyright Act requires ISPs to adopt a graduated response or even send their own warning letters.

"I don't know that the (ISPs) are legally obliged to do it," said Jonathan Zittrain, a noted cyberlaw expert and author. "I don't know any ISP that has been sued over it...The industry has chosen not to provoke a fight."

One reason for that may be that many bandwidth providers want greater access to top entertainment content. The best example of that is Comcast's proposed acquisition of NBC Universal. To many in the film and music sectors, it appears that the interests of entertainment companies and ISPs are aligning.

"We've seen great progress and great cooperation from many of the ISPs," Bainwol said. "Getting to a public uniform understanding about how we're going to work together is obviously an extraordinarily complicated endeavor...[piracy] is a problem that developed over years and a solution is going to take time but we're achieving progress toward that goal."

Some progress

To be sure, in some ways the music industry's digital strategy has never been in better shape. It's never been easier or less expensive to acquire music legally than it is at such sources as iTunes, Amazon, and Pandora.

The music sector hasn't obtained a three-strikes policy in the United States, but it's been much more successful in forcing ISPs based overseas to boot repeat copyright offenders from their networks. And some ISPs, including Cox Communications, established antipiracy policies long ago that were similar to the RIAA's graduated response. But since the U.S. is a tougher environment when it comes to discussing service interruption, has Bainwol altered his definition of "graduated response"?

"I'm not locked into any particular definition," Bainwol said. "I think the parties that are negotiating and having discussions about what kind of program is appropriate will define how you work a graduated response program. The question here is: Are we working with the ISPs? Will there be some kind of graduated response program, where the infringer is made aware when they're caught and also when there are escalating tensions.

"We'll be flexible about how we get to a deal," Bainwol continued. "We'll let others define the poles of the position."
http://news.cnet.com/8301-31001_3-10420803-261.html





File-Sharing Bill Could Give Government Control of the Internet

Law inadvertently gives keys to Mandy
OUT-LAW.COM

The Digital Economy Bill would give the Government the power to control the internet access of UK citizens by ministerial order, bypassing Parliament and without an adequate right of appeal, according to one legal expert.

Barrister Francis Davey has examined clause 11 of the Bill and believes that it puts extraordinary powers to control the information available to UK internet users in the hands of the Secretary of State for Business, Innovation and Skills, currently Lord Mandelson.

The Department of Business, Innovation and Skills (BIS) has rejected the interpretation of the law, claiming that the EU's E-Commerce Directive prohibits the activity described by Davey.

"Clause 11 gives the Secretary of State the power by ministerial order to make any ISP [internet service provider] take technical measures against any subscriber," said Davey.

This clause has been widely read as being designed to allow the disconnection of copyright-infringing file sharers, but Davey said its application could be far wider. "Nothing it says has anything to do with copyright infringement or even wrongdoing by the subscriber," he said. "The Secretary of State could use this to order ISPs to block access to a website or to certain kinds of files."

The Digital Economy Bill has been controversial because it gives the Secretary of State powers to order ISPs to take 'technical measures' against subscribers, such as disconnecting the access of those accused of illegal file sharing.

Davey is a barrister with a background in IT and specialises in computer and internet law. His analysis could lead to even further controversy about the Bill. He said that he does not believe that the clause is a deliberate attempt to control access, but he said that if it passes into law it will inevitably be used in that way.

"My suspicion is that this is not an intentional power grab," said Davey. "I think that it is just bad drafting. Whoever was doing it, rather than think of a subtle and complicated power that had the right effect, they have just given the broad power that would allow the Secretary of State to do what was needed."

"But we know from other laws that when a power exists it finds a way of getting used," said Davey. "Look at the asset freezing powers of anti-terrorist legislation. They were used to freeze the assets of [Icelandic bank] Landsbanki. Now that upset a lot of people in Iceland, but when you went back to the legislation it actually didn't say anything about that power only being used against terrorists."

A spokesman for BIS said that it had drawn up the legislation only with filesharing in mind.

"It is clear from clause 11 and the Bill as a whole that technical measures are for the individual infringer and can't be applied at the network level," he said. "At the network level the law has to comply with the E-Commerce Directive."

The Directive, which became UK law as the E-Commerce Regulations, absolves ISPs of liability for illegal material on their networks as long as they do not know that it is illegal. It also absolves them of the duty to monitor networks for illegal activity.

"The Directive says that we can't draft legislation that imposes a general obligation to monitor networks," said the spokesman. "And this network level activity would require ISPs to monitor websites on their systems."

Davey said, though, that while the E-Commerce Regulations and Directive do prohibit the passing of laws that require ISPs to monitor activity, this is different to what concerns him.

"What you can't do is require an ISP to look at the traffic going over its network and find certain content. But if an ISP is asked to block an IP address they can do that," he said. "Blocking is not monitoring."

Davey also has concerns about the lack of restraints on the powers in the Bill. "It is slightly unusual because of its breadth and the fact that there is no right of appeal or obligation to publish the notices or to go through Parliament," he said.

A person affected by an order can appeal the basis of an order but not the order itself, he said. This means, for example, that if their access was cut off they could claim that they did not conduct illegal file-sharing but could not appeal the nature of the Secretary of State's action, i.e. the cutting off of internet access.

Ofcom will operate a Code of Practice in relation to the powers, but Davey said that Ofcom will not be able to restrain a Secretary of State's use of the powers.

"Ofcom governs the use of the power but can't inhibit it," he said. "The Code of Practice allows certain things to be put in it by Ofcom but it allows the Secretary of State to put things in as well."

The BIS spokesman disagreed, saying that the Bill says that "Ofcom has to consult" on powers, and that the first time the Secretary of State wants to use the powers secondary legislation will have to pass through Parliament. He conceded that after that Parliament did not have to be consulted on their use.

Francis Davey's analysis of the bill can be read here.
http://www.theregister.co.uk/2009/12...ntrol_the_web/





After The Pirate Bay, Web Sheriff Takes On Jamie Oliver
Ernesto

Defending the rights of The Village People, the legendary Web Sheriff threatened The Pirate Bay’s operators with legal action in an attempt to get compensation for the losses the six had suffered. The threats didn’t lead to much, unfortunately, so The Village People and the infamous Sheriff are now pointing the gun at Jamie Oliver instead.

In early 2008, the Web Sheriff, aka John Giacobbi, launched an assault on the operators of The Pirate Bay. The Sheriff threatened legal action in both Sweden and the US, but despite all the bluster, that’s the last we heard of it. Now, nearly two years later, Giacobbi is moving away from the Swedish torrent chefs, instead pointing the finger at the British cook Jamie Oliver.

Unlike the Pirate Bay operators, Oliver didn’t host any torrents on his website. No, he committed the crime of dressing in clothing similar to that worn by The Village People thirty years ago, but without their permission.

In a promo trailer for the show, Jamie’s American Road Trip, the British cook dressed up as the 70’s style icons. The clip ran on TV, accompanied by a poster campaign in the London Underground, all featuring Jamie dressed up as each of the six members of the hit disco band.

According to the Web Sheriff, Jamie Oliver’s actions are blatant trademark infringement, seriously hurting the revenues of the Indian chief, cowboy and the four other members of the band. “The Village People are still a huge, multimillion-dollar global business,” the Sheriff commented.

Much like the Pirate Bay, Channel 4, the broadcaster of Jamie’s show, is not impressed by the Sheriff’s threats. “We are confident that the promotional campaign for Jamie’s American Road Trip does not infringe any copyright/trademark rights which the Village People may have. No parties have received a formal legal claim from the Village People and, if one is received, it will be vigorously defended,” a Channel 4 spokesperson said.

If the case does indeed come to court, it could be risky for the Web Sheriff to dress up in his usual outfit, as that is more than likely trademarked by a third party who might also wish to issue similar frivolous legal threats.
http://torrentfreak.com/after-the-pi...oliver-091221/





Accused 'Wolverine' Pirate Calls Charges 'Ridiculous'
Greg Sandoval

The FBI has accused the man who allegedly was first, or among the first, to upload a pirated copy of "X-Men Origins: Wolverine" that circulated online in April. What authorities have apparently yet to do is identify the original source of the leak.

On Wednesday, after Gilberto Sanchez was charged in New York with violating federal copyright laws by posting "Wolverine" to a file-sharing site a month before the film's theatrical release, he told reporters from The New York Daily News: "It's just ridiculous. I bought it from a Korean guy on the street for five bucks. Then I uploaded it. I didn't make any money."

Sanchez, who is 47 and works as a glazier, doesn't appear to have any direct ties to 20th Century Fox, the Hollywood studio that produced "Wolverine," or the film industry. To hear Sanchez tell it, he was way downstream from the original leak and authorities should be on the lookout for one of the thousands of New York street vendors.

But Sanchez's explanation raises more questions than it answers. The first is whether the trail of the person who first leaked the movie has gone cold in the eight months since the unauthorized copy first appeared on the Web. Security experts I've spoken with, however, say long delays are common with these kinds of file-sharing cases, which sometimes require law enforcement officials to spend months compiling evidence.

The two things that almost everybody agrees on are: 1) the case illustrates once again how hard it is to protect digital content, and 2) Sanchez isn't the original source of the leak.

In April, someone posted to the Web an incomplete version of "Wolverine," which cost $100 million to make and stars actor Hugh Jackman. The indictment filed against Sanchez in Los Angeles earlier this month did not say whether he was allegedly the only person to upload it or the first, but Sanchez is the only person who's been indicted in connection with the investigation. The copy that began circulating online was missing music and many computer-generated effects but was still a popular attraction. According to Big Champagne, which tracks file sharing, the movie was viewed 4 million times before it was screened in theaters on May 1.

In the months after the leak, "Wolverine" went on to gross $375 million worldwide, so it doesn't appear the pirated copy prevented the film from turning a profit. But 20th Century Fox, which produced the movie, argues the unauthorized version was watched about 14 million times online and no matter how one slices it, the leak cost the studio big money.

More recently, the U.S. Attorney's office has begun efforts to extradite Sanchez to Los Angeles, according to Philip Weinstein, his attorney. Weinstein said he has advised his client not to comment on the case.

According to my Hollywood sources, the authorities have ruled out Sanchez as the original source of the leak.

At many top studios, security is tight. Access to working copies is restricted. Copies are tracked and the names of anyone who touches them are supposed to be recorded. That happens not only at the studios but often at the firms hired to do post-production work, such as special-effects houses.

While sources say Sanchez didn't have that kind of access, what isn't clear is whether he knows someone who did.

The government said in its indictment against Sanchez that he posts comments on the Internet under such usernames as "SkillfulGil" and "SkillyGilly." A Google search showed that those names are prevalent at some video-sharing sites as well as numerous music-themed community sites, including MySpace and Crazypellas.net.

Many of the posts from these sites are accompanied by snapshots of a person resembling the Gilberto Sanchez who was photographed by the Daily News on Wednesday.

In one 2008 post at Crazypellas.net, SkillfulGil discussed ripping and posting movies to the Web. At the same site on July 7, two months after the "Wolverine" leak, SkillfulGil wrote: "I had FBI with search warrant in my place. They took my PC. Now (they're) building a fed case on me for the same thing. Copyright Infringement...So I guess I'll (be) made an example of."

An FBI spokeswoman said Tuesday that Sanchez's residence was searched by agents last summer.

Tracing the source of the leak

If, like Sanchez says, the leaked "Wolverine" copy was first available on bootleg DVD and was sold from a street corner to any passerby, then isn't it logical to assume others uploaded the movie to the Web? Couldn't tracing the discs back to their source help lead agents to the original leak? And if there were others who uploaded the film to the Web, wouldn't the government be arresting them as well?

According to my film industry sources, one possible reason that federal officials haven't arrested anyone else is that they may be building a case.

One example of how long it can take to build a case is last year's leak of "The Love Guru."

FBI agents had to follow a long trail before filing a criminal complaint nine months after the original leak. (Ben Sheffner, a well-known pro-copyright blogger and attorney, posted a copy of the criminal complaint at his site, Copyrights & Campaigns).

In that case, agents had strong suspicions early on about who leaked the much-maligned Mike Myers film, according to court documents.

Jack Yates, an employee of Los Angeles Duplication & Broadcasting ("LADB"), was asked to make screener copies that were supposed to appear on talk shows for promotional purposes (one of the copies went to Jay Leno). Yates, however, was seen on the company's video cameras making an extra copy and taking it to his car.

In interviews with agents, Yates denied knowledge of the copy. So federal officials were forced to track down the IP address associated with the first uploading of the movie.

The trail of people who obtained copies of the film was long, but Yates was eventually undone when investigators traced the leak back to his cousin.

Last summer, the 28-year-old Yates was sentenced to six months in jail.
http://news.cnet.com/8301-31001_3-10420059-261.html





Google Convicted in French Copyright Case
Greg Keller

A Paris court ruled Friday that Google's expansion into digital books breaks France's copyright laws, and a judge slapped the Internet search leader with a 10,000-euro-a-day fine until it stops showing literary snippets.

Besides being fined the equivalent of $14,300 for each day in violation, Google was ordered to pay 300,000 euros ($430,000) in damages and interest to French publisher La Martiniere, which brought the case on behalf of a group of French publishers.

Google attorney Alexandra Neri said the company would appeal.

The decision erects another legal barrier that may prevent Google from realizing its 5-year-old goal of scanning all the world's books into a digital library accessible to anyone with an Internet connection.

A U.S. legal settlement that would give Google the digital rights to millions of books is in limbo because U.S. regulators have warned a federal judge in New York that the arrangement probably would thwart competition in the budding electronic book market and compromise copyrights, as well.

The top U.S. copyright official and the governments in Germany and France also have raised objections about that settlement overstepping its bounds. Google is trying to address the critics with a revised settlement that is still under court review.

The French case is relatively small in comparison. It didn't even seem to faze investors as Google shares gained $2.33 to $596.27 in Friday trading.

Still, the ruling served as a reminder that Google's ambitious push into other markets beyond Internet search increasingly is clashing with fears the Mountain View company is getting too powerful.

As part of the backlash, Google has been depicted as a copyright scofflaw that prospers off the content of others — a portrayal the company's management insists is totally off base.

The head of the French publisher's union applauded Friday's verdict.

"It shows Google that they are not the kings of the world and they can't do whatever they want," said Serge Eyrolles, president of France's Syndicat National de l'Edition. He said Google had scanned 100,000 French books into its database, 80 percent of which were under copyright.

Eyrolles said French publishers would still like to work with Google to digitize their books, "but only if they stop playing around with us and start respecting intellectual property rights."

Philippe Colombet, the head of Google's book-scanning project in France, said the company disagrees with the court's ruling.

"French readers now face the threat of losing access to a significant body of knowledge and falling behind the rest of Internet users," Colombet said in a conference call with reporters. "We believe that displaying a limited number of short extracts from books complies with copyright legislation both in France and the U.S. — and improves access to books."

Colombet declined to answer questions about whether Google would remove the books from its database or pay the fine. "We are going to study the judgment carefully over the coming days," he said.

The judgment will have little or no effect on Internet users outside of France. And French books that are in Google's database with publishers' consent will remain searchable, even in France. Colombet could not say how many French books Google has scanned overall, or how many French publishers are allowing Google to show their works.

Google has scanned more than 10 million books worldwide since 2004, including 2 million with the consent of about 30,000 publishers. About 9,000 of those publishers are in Europe, Colombet said. Another 2 million books in Google's library are no longer in copyright. Google has only been showing snippets from the remaining books while it tries to iron out copyright disputes.

French President Nicolas Sarkozy has made closing France's digital gap a national priority, earmarking 750 million euros (about $1 billion) of a 35 billion euro spending plan announced earlier this week for digitizing France's libraries, film and music archives and other repositories of the nation's recorded heritage.

Earlier this week a consortium of French technology companies announced a plan to create a book-scanning project they said would be better than Google's, but only in three years' time.
http://www.siliconvalley.com/news/ci_14025404





Hackers Break Amazon's Kindle DRM

The great ebook 'unswindle'
Dan Goodin

An Israeli hacker says he has broken copyright protections built in to Amazon's Kindle for PC, a feat that allows ebooks stored on the application to work with other devices.

The hack began as an open challenge in this (translated) forum for participants to come up with a way to make ebooks published in Amazon's proprietary format display on competing readers. Eight days later, a user going by the handle Labba had a working program that did just that.

The hack is the latest to show the futility of digital rights management schemes, which more often than not inconvenience paying customers more than they prevent unauthorized copying.

Once upon a time, Apple laced its iTunes-purchased offerings with similar DRM restrictions that caused major headaches when customers tried to do something as simple as transferring songs to a new PC. When reverse engineering specialist DVD Jon neutered the mechanism, that was the beginning of the end for the draconian regimen, which Apple called, ironically enough, FairPlay.

But most vendors don't bow so gracefully or quickly out of the reverse-engineering arms race. Witness, well, Apple, which regularly issues iPhone updates to thwart users who have the audacity to jailbreak the devices they own. Texas Instruments has also been known to take action against customers who reverse engineer calculators.

Amazon representatives have yet to indicate how they plan to respond. Queries put to a spokesman on Tuesday weren't immediately returned.

According to a translated writeup of the Kindle hack here, Amazon engineers went to considerable lengths to prevent their DRM from being tampered with. The Kindle for PC uses a separate session key to encrypt and decrypt each book "and they seem to have done a reasonable job on the obfuscation," the author says.

The crack comes courtesy of a piece of software titled unswindle, and it's available here. Once it is installed, proprietary Amazon ebooks can be converted into the open Mobi format. And from there, you can enjoy the content any way you like.
http://www.theregister.co.uk/2009/12...kindle_hacked/





WTO Upholds Ruling That China Unfairly Blocks US Music, Films; US Fights China Over Raw Materials
Bradley S. Klapper

The World Trade Organization's top arbitrators upheld a ruling that China is illegally restricting imports of U.S. music, films and books, and Washington pushed forward with a new case accusing China of manipulating the prices for key ingredients in steel and aluminum production.

Monday's verdict by the WTO's appellate body knocked down China's objections to an August decision that came down decisively against Beijing's policy of forcing American media producers to route their business through state-owned companies.

If China fails over the next year to bring its practices in line with international trade law, the U.S. can ask the WTO to authorize commercial sanctions against Chinese goods.

"Today America got a big win," U.S. Trade Representative Ron Kirk said in a statement. "U.S. companies and workers are at the cutting edge of these industries, and they deserve a full chance to compete under agreed WTO rules. We expect China to respond promptly to these findings and bring its measures into compliance."

The Asian country's import restrictions have been a key gripe of Western exporters, who complain that China's rapid rise as a trade juggernaut has been aided by unfair policies that boost sales of Chinese goods abroad while limiting the amount of foreign products entering the Chinese market.

The probe initiated Monday by the WTO — at the request of the U.S., Mexico and the 27-nation European Union — focuses on the other half of the equation by examining China's treatment of domestic and foreign manufacturers with regards to its vast wealth of raw materials.

Washington and Brussels claim that China unfairly favors domestic industry by setting export quotas on materials such as coke, bauxite, magnesium and silicon metal. Export quotas are contentious under trade rules because they can cause a glut on the domestic market, driving down prices for local producers, while leading to scarcity and higher prices for competitors abroad.

Beijing, however, claims that the curbs are an effort to protect the environment, and says they comply with WTO rules. For its part, China is challenging U.S. trade rules on a number of issues such as poultry, and asked the WTO at the dispute body meeting Monday for a new investigation into American import taxes on Chinese tires.

Washington delayed the tire probe for another month, but the global trade referee will likely rule in all these disputes over the course of the next year.

Analysts and observers believe these Sino-American trade fights are only the beginning as President Barack Obama's administration will likely file more cases against China. Obama made campaign pledges to take a tougher approach with U.S. trading partners in the face of soaring job losses and the longest U.S. recession since World War II.

Last week, the two countries settled a dispute initiated by the Bush administration in December over subsidies that China allegedly provides to exporters of famous Chinese merchandise. Beijing agreed out-of-court to eliminate the subsidies, according to the office of the U.S. Trade Representative, which will boost the prospects of U.S. exporters of household appliances, textiles, chemicals, medicines and food products.

The media dispute with China focused on a number of complaints raised by the trade associations representing record labels such as EMI and Sony Music Entertainment; publishers including McGraw Hill and Simon & Schuster; and, to a lesser extent, the major Hollywood studios of Warner Bros., Disney, Paramount, Universal and 20th Century Fox.

The WTO made no finding that implies it is illegal for Beijing to review foreign goods for objectionable content. But it said China cannot limit the distribution of U.S. goods to Chinese state-owned companies, and said the Asian country's burdensome restrictions were not "necessary" to protect public morals.
http://www.courant.com/entertainment...0,345364.story





Top Court Transforms Press Freedom with New Libel Defence

Updated law extends defence to new media
Kirk Makin

The Supreme Court of Canada transformed the country's libel laws Tuesday with a pair of decisions that proponents say will expand the boundaries of free speech.

The court ruled that libel lawsuits will rarely succeed against journalists who act responsibly in reporting their stories when those stories are in the public interest.

It also updated the laws for the Internet age, extending the same defence to bloggers and other new-media practitioners.

Chief Justice Beverley McLachlin said that Canada needs to keep in step with several other Western democracies that have provided greater protection to the media.

“Freewheeling debate on matters of public interest is to be encouraged and the vital role of the communications media in providing a vehicle for such debate is explicitly recognized,” Chief Justice McLachlin said in a pair of 9-0 decisions.

Although the court acknowledged that free expression does not “confer a licence to ruin reputation,” it argued society is best served by fearless commentary and investigative journalism.

The court ordered new trials for a Toronto newspaper that exposed a questionable land deal and an Ottawa newspaper that raised questions about the conduct of a police officer who helped search for survivors after the Sept. 11, 2001, attacks on New York.

The media were exultant about the rulings. “This is a historic turn for Canadian media, who have long suffered an undue burden of proof,” said Globe and Mail editor-in-chief John Stackhouse. “We should not take our responsibility any more lightly, but we should celebrate the fact that the heavier blinds of Canadian libel law have been pulled back. The acceptance of this new defence by the Supreme Court of Canada will greatly advance the cause of freedom of expression, transparency and responsible journalism in Canada.”

Chief Justice McLachlin said that context is critical. She urged trial judges not to parse controversial statements in isolation, but to consider the entire subject matter.

Other critical factors listed by the court were: the seriousness of a published allegation; the public importance and urgency of the issue; the status and reliability of a source; and whether the plaintiff's side of the story was sought and accurately reported.

In the first case – Peter Grant v. Torstar Corp. – the court said that a Toronto Star reporter worked hard to get to the bottom of allegations that Mr. Grant and his company might have used political influence in securing the location of a private golf course development.

Paul Schabas, a lawyer for the Star, said Tuesday that the ruling “is hugely important; the most important libel decision ever released by the Supreme Court. It is a victory for the right to speak responsibly about public matters – to put issues to the public and let the public debate and decide.”

The second ruling involved a former Ontario Provincial Police officer – Danno Cusson – who presented himself as a trained dog handler at ground zero after the Sept. 11, 2001, attacks.

Reports in the Ottawa Citizen characterized Mr. Cusson as a wannabe who got in the way of legitimate searchers while he was attempting to free trapped survivors.

The ruling cancels a $125,000 award a jury made to Mr. Cusson at trial.

Dean Jobb, a journalism professor at University of King's College in Halifax, said that a revamping of the libel laws was long overdue.

“The court has recognized that the definition of ‘journalist' is expanding in our online world,” Prof. Jobb said. “Bloggers and anyone else publishing information on matters of public interest can claim the defence, provided the way they gather and present the news conforms with the ethical standards of journalists.”
http://www.theglobeandmail.com/news/...rticle1408613/





stephen-conroy.com - stephen conroy: minister for fascism

auDA Takedown

On Fri 18-12-2009 auDA issued a notice giving us 3 hours to provide evidence of our eligibility to hold stephenconroy.com.au and related domain names. We asked for reasonable time to prepare and make representations on our eligibility but auDA refused to grant this, insisting we reply within the 3 hour window.

After several attempts at convincing them to give us reasonable time to reply we made a last-ditch attempt at 16:10 AEDT stating that we provide a consultancy product with 'Stephen Conroy' in its name. We hoped that this would at least enable us to stay up over the weekend, but they didn't want to know. We believe that auDA had decided to pull our registration regardless of what we did.

After we were ultimately unable to obtain appropriate advice and representation in the manifestly inadequate time, they pulled the domain, sending the site off-line. We've temporarily moved to stephen-conroy.com while we assess and respond to auDA's actions. Please update your bookmarks.

During the course of the weekend we have obtained advice from several sources and had discussions with the extremely helpful guys at EFA, who have agreed to provide their support. They also confirmed having received numerous other complaints from members of the public regarding auDA's actions.

We've received widespread support and messages of condemnation aimed at auDA for their actions, which seem to have been rightly interpreted as a manifestly political move.

We'd like to take this opportunity to thank everyone for their support and state that we WILL BE fighting what we consider to be a heavy-handed attempt to silence us. If you think this is as ridiculous as we do, feel free to contact auDA directly and register your disapproval:

Chris Disspain - auDA CEO
Email: ceo@auda.org.au
Telephone: 1300 732 929
Facsimile: 03 8341 4112
http://stephen-conroy.com/page.php?4





Erroneous DMCA Notices and Copyright Enforcement, Part Deux
Mike Freedman

A few weeks ago, I wrote about a deluge of DMCA notices and pre-settlement letters that CoralCDN experienced in late August. This article actually received a bit of press, including MediaPost, ArsTechnica, TechDirt, and, very recently, Slashdot. I'm glad that my own experience was able to shed some light on the more insidious practices that are still going on under the umbrella of copyright enforcement. More transparency is especially important at this time, given the current debate over the Anti-Counterfeiting Trade Agreement.

Given this discussion, I wanted to write a short follow-on to my previous post.

The VPA drops Nexicon

First and foremost, I was contacted by the founder of the Video Protection Alliance not long after this story broke. I was informed that the VPA has not actually developed its own technology to discover users who are actively uploading or downloading copyrighted material, but rather contracts out this role to Nexicon. (You can find a comment from Nexicon's CTO to my previous article here.) As I was told, the VPA was contracted by certain content publishers to help reduce copyright infringement of (largely adult) content. The VPA in turn contracted Nexicon to find IP addresses that are participating in BitTorrent swarms of those specified movies. Using the IP addresses given them by Nexicon, the VPA subsequently would send pre-settlement letters to the network providers of those addresses.

The VPA's founder also assured me that their main goal was to reduce infringement, as opposed to collecting pre-settlement money. (And that users had been let off with only a warning, or, in the cases where infringement might have been due to an open wireless network, informed how to secure their wireless network.) He also expressed surprise that there were false positives in the addresses given to them (beyond said open wireless), especially to the extent that appropriate verification was lacking. Given this new knowledge, he stated that the VPA dropped their use of Nexicon's technology.

BitTorrent and Proxies

Second, I should clarify my claims about BitTorrent's usefulness with an on-path proxy. While it is true that the address registered with the BitTorrent tracker is not usable, peers connecting from behind a proxy can still download content from other addresses learned from the tracker. If their requests to those addresses are optimistically unchoked, they have the opportunity to even engage in incentivized bilateral exchange. Furthermore, the use of DHT- and gossip-based discovery with other peers---the latter is termed PEX, for Peer EXchange, in BitTorrent---allows their real address to be learned by others. Thus, through these more modern discovery means, other peers may initiate connections to them, further increasing the opportunity for tit-for-tat exchanges.

Some readers also pointed out that there is good reason why BitTorrent trackers do not just accept any IP address communicated to them via an HTTP query string, but rather use the end-point IP address of the TCP connection. Namely, any HTTP query parameter can be spoofed, leading to anybody being able to add another's IP address to the tracker list. That would make the owner of the spoofed address susceptible to receiving DMCA complaints, just as we experienced with CoralCDN. From a more technical perspective, their machine would also start receiving unsolicited TCP connection requests from other BitTorrent peers, an easy DoS amplification attack.

That said, there are some additional checks that BitTorrent trackers could do. For example, if an IP query-string parameter or an X-Forwarded-For HTTP header is present, only register that claimed address if it matches the network-level IP address of the connection. Additionally, some BitTorrent tracker operators have mentioned that they have certain IP addresses whitelisted as trusted proxies; in those cases, the X-Forwarded-For address is used already. Otherwise, I don't see a good reason (plausible deniability aside) for recording an IP address that is known to be likely incorrect.
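
To make that tracker-side check concrete, here is a minimal sketch in Python of how an announce handler might decide which address to record. It is only an illustration of the logic described above, not code from any real tracker; the names (choose_peer_ip, TRUSTED_PROXIES) and the placeholder addresses are invented.

from typing import Optional

# Hypothetical whitelist of proxies whose X-Forwarded-For header we trust.
TRUSTED_PROXIES = {"203.0.113.10"}

def choose_peer_ip(conn_ip: str,
                   query_ip: Optional[str] = None,
                   x_forwarded_for: Optional[str] = None) -> str:
    """Pick the address a tracker should register for an announce.

    conn_ip:         endpoint of the TCP connection (hard to spoof)
    query_ip:        optional 'ip=' parameter from the announce URL
    x_forwarded_for: optional X-Forwarded-For HTTP header
    """
    # A whitelisted proxy may speak for its clients, so believe its header.
    if conn_ip in TRUSTED_PROXIES and x_forwarded_for:
        return x_forwarded_for.split(",")[0].strip()

    # Otherwise honor a claimed address only when it matches the
    # connection itself; a mismatched claim could be a spoofed query
    # string aimed at subscribing a victim to DMCA notices and
    # unsolicited connection attempts.
    claimed = query_ip
    if claimed is None and x_forwarded_for:
        claimed = x_forwarded_for.split(",")[0].strip()
    if claimed == conn_ip:
        return claimed
    return conn_ip

# Usage: matching claims are kept, mismatched claims fall back to conn_ip.
assert choose_peer_ip("198.51.100.7", "198.51.100.7") == "198.51.100.7"
assert choose_peer_ip("198.51.100.7", "192.0.2.99") == "198.51.100.7"
assert choose_peer_ip("203.0.113.10", None, "192.0.2.99") == "192.0.2.99"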

Best Practices for Online Technical Copyright Enforcement

Finally, my article pointed out a strategy that I clearly thought was insufficient for copyright enforcement: simply crawling a BitTorrent tracker for a list of registered IP addresses, and issuing an infringement notice to each IP address. I'll add to that two other approaches that I think are either insufficient, unethical, or illegal---or all three---yet have been bandied about as possible solutions.

* Wiretapping: It has been suggested that network providers can perform deep-packet inspection (DPI) on their customers' traffic in order to detect copyrighted content. This approach probably breaks a number of laws (either in the U.S. or elsewhere), creates a dangerous precedent and existing infrastructure for far-flung Internet surveillance, and yet is of dubious benefit given the move to encrypted communication by file-sharing software.
* Spyware: By surreptitiously installing spyware/malware on end-hosts, one could scan a user's local disk in order to detect the existence of potentially copyrighted material. This practice has even worse legal and ethical implications than network-level wiretapping, and yet politicians such as Senator Orrin Hatch (Utah) have gone as far as declaring that infringers' computers should be destroyed. And it opens users up to the real danger that their computers or information could be misused by others; witness, for example, the security weaknesses of China's Green Dam software.

So, if one starts from the position that copyrights are valid and should be enforceable---some dispute this---what would you like to see as best practices for copyright enforcement?

The approach taken by DRM is to build a technical framework that restricts users' ability to share content or to consume it in a proscribed manner. But DRM is widely disliked by end users, largely because it creates a poor user experience and interferes with expected rights (under the fair-use doctrine). DRM is also beside the point here, as copyright infringement notices are needed precisely after "unprotected" content has already flown the coop.

So I'll start with two properties that I would want all enforcement agencies to adopt when issuing DMCA take-down notices. Let's restrict this consideration to complaints about "whole" content (e.g., entire movies), as opposed to DMCA challenges over sampled or remixed content, which raise a separate legal debate.

* For any end client suspected of file-sharing, one MUST verify that the client was actually uploading or downloading content, AND that the content corresponded to a valid portion of a copyrighted file. In BitTorrent, this might mean that the client sends or receives a complete file block, and that the block hashes to the correct value specified in the .torrent file (see the sketch after this list).
* When issuing a DMCA take-down notice, the request MUST be accompanied by logged information that shows (a) the client's IP:port network address engaged in content transfer (e.g., a record of a TCP flow); (b) the actual application request/response that was acted upon (e.g., BitTorrent-level logs); and (c) that the transferred content corresponds to a valid file block (e.g., a BitTorrent hash).
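
To illustrate the first requirement, here is a minimal sketch of the hash check, assuming the enforcement agent has already captured a complete piece and knows its index. It uses the third-party bencodepy package to parse the .torrent metadata (an assumption; any bencode decoder would do), and it checks at piece granularity, since that is the unit BitTorrent hashes.

# Verify that a captured piece is a valid portion of the copyrighted file,
# by comparing its SHA-1 hash against the .torrent's piece-hash list.
import hashlib
import bencodepy  # third-party: pip install bencodepy

def piece_is_valid(torrent_path, piece_index, captured_piece):
    with open(torrent_path, "rb") as f:
        meta = bencodepy.decode(f.read())
    pieces = meta[b"info"][b"pieces"]  # concatenated 20-byte SHA-1 hashes
    expected = pieces[20 * piece_index : 20 * (piece_index + 1)]
    return hashlib.sha1(captured_piece).digest() == expected

A check like this, logged alongside the flow and application-level records in the second requirement, is what separates "we observed this address actually transferring the work" from "this address merely appeared on a tracker."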

So my question to the readers: what would you add to or remove from this list? And what other approaches do you think should be used, or incentivized, for copyright enforcement?
http://www.freedom-to-tinker.com/blo...ment-part-deux





Libel Gag on Talk of 'Medical Hurricane'

A healthcare firm is using England’s draconian libel laws in an attempt to silence a Danish academic who has expressed doubts about one of its products
Henrik Thomsen

Two years ago in a conference room in the Randolph hotel in Oxford, Henrik Thomsen gave his inside account of a medical “nightmare”.

In a presentation to about 30 colleagues, Thomsen, one of Europe’s leading radiologists, revealed how patients treated at his hospital had subsequently contracted a rare and potentially fatal disease.

Thomsen and other doctors at his Copenhagen University hospital were baffled as to why 20 kidney patients who had been given routine scans were afflicted by a disorder — nephrogenic systemic fibrosis (NSF) — in which the skin gradually swells, thickens and tightens. Some sufferers were confined to wheelchairs. At least one died. There was no known cure.

Then, in March 2006, there was a breakthrough. It was confirmed that all those who had fallen ill with NSF had been given the same drug in advance of a magnetic resonance imaging (MRI) scan.

Omniscan was used to enhance the images produced by the scan. The product was sold around the world and was manufactured by GE Healthcare, a subsidiary of General Electric, one of the world’s largest corporations.

Thomsen’s presentation lasted no more than 15 minutes, with the final slide reading: “I hope none of you meets a similar medical hurricane.”

The quietly spoken 56-year-old, director of the department of diagnostic sciences at the University of Copenhagen, is part of a small group of clinicians credited with alerting patients and regulators to the potential risks of Omniscan for renal patients.

The Medicines and Healthcare Products Regulatory Agency this weekend said there had been 20 reports in the UK of NSF after patients were given Omniscan. Five of the patients died.

Last month, European medical regulators recommended that anybody who needed an MRI scan should be given a check to ensure their kidneys were healthy if they were to be given Omniscan or two similar products. In America, the Food and Drug Administration (FDA) is reviewing its advice.

Thomsen, however, now refuses to speak anywhere in England on the possible risks of Omniscan. The reason is that he faces another kind of storm: GE Healthcare is suing him in the High Court for libel.

The company claims his presentation in Oxford, entitled Management Aspects of NSF, was highly defamatory.

GE has already racked up costs of more than £380,000 pursuing the respected academic, who has authored or co-authored nearly 400 papers and delivered countless presentations to his peers. Thomsen will have to pay the firm’s costs if he loses the case.

In recent weeks, The Sunday Times has highlighted how London’s draconian libel laws are being used to silence critics of the rich and the powerful.

In a number of cases, both claimants and defendants have little apparent connection with Britain.

Campaign groups have warned that vital scientific and medical work is being threatened because of the danger of libel actions.

Thomsen, who was in London last week meeting Carter-Ruck, his lawyers, has no doubt about the driving force behind his case. “I believe that the lawsuit is an attempt to silence me,” he said.

IN October 2003 General Electric, one of the most watched companies on Wall Street, made a successful £5.7 billion bid for Amersham, the British healthcare company that had been privatised by Margaret Thatcher.

GE has a finger in many corporate pies, from manufacturing lightbulbs to insurance sales, but this appeared a particularly good business fit. GE Medical Systems made scanning equipment and Amersham made the products used to make medical imaging more effective. The combined business — GE Healthcare — had its headquarters in Buckinghamshire and was expected to generate about $13 billion (£8 billion) annually, representing about 10% of GE’s revenue at that time.

One of its products was Omniscan, which is among a small group of products administered to patients before an MRI scan. It is a so-called “contrast agent” whose properties enhance the differences between fluids and structures in the body when they appear on scans, making diagnosis easier. To date, it has sold more than 48m doses worldwide, including 620,000 in the UK. Such products sell at about $30 a dose.

There was, however, a problem. A small number of kidney patients injected with Omniscan and other products were falling ill with NSF, a horrific disease that first attacks the skin and can then attack organs. Those with healthy kidneys were unaffected and the product has been safe for more than 99% of patients.

NSF was first identified in the United States in 1997, more than five years after the contrast agents were introduced, and doctors were initially mystified as to what caused it.

One victim, Celeste Castillo Lee, from North Carolina, who gave evidence earlier this month to an FDA hearing, described how the disease migrated through her body, causing agony. “Seventy-five per cent of us [NSF victims] are in wheelchairs,” she said. “It’s actually a torture.”

She said swelling, which started in her ankles, moved through her limbs and then attacked her insides. Her bones, she said, felt like they were in a vice.

In British hospitals doctors were also finding cases of the new and strange condition. Giles Roditi, a consultant radiologist at Glasgow Royal Infirmary, said his hospital had 16 cases of NSF in renal patients.

What was causing it? In early 2006 Thomsen had turned clinical detective to try to answer this question.

After he was told that every kidney patient at his hospital who was diagnosed with the disease had been given a drug for an MRI scan, he and his colleagues alerted the medical authorities and then embarked on a review of all known cases of NSF.

Every patient out of the 150 cases that he found had been given a contrast agent for an MRI. About 90% had been given Omniscan. It was not proof that the drug caused NSF, but it was enough evidence for Thomsen never again to give it to any kidney patient.

Omniscan is one of several competing MRI agents that contain the rare-earth metal gadolinium, which is potentially toxic. The metal is chemically protected in its various drug forms and is quickly flushed out of the body by patients with healthy kidneys.

However, regulators believe the chemical structure of Omniscan, and another similarly constructed product, make them less stable and potentially dangerous for those suffering from renal problems.

The Danish Medicines Agency was the first to sound the alarm, highlighting 25 cases linked to Omniscan in a notice in May 2006.

Many of the Danish patients filed claims with a Danish government insurance agency, which pays benefits if it decides that a drug was a likely cause of injury or death.

In one of those cases, involving a 55-year-old woman who died from a lung embolism in 2003, the insurance agency concluded the side effects of Omniscan had “caused” her immobilisation, which, in turn, “caused” her deadly embolism. GE declined to comment on this case last week.

In June 2007, the Commission on Human Medicines in the UK advised doctors not to use Omniscan in patients with severe renal problems along with two other products, Magnevist, manufactured by Bayer HealthCare, and Optimark, produced by Covidien. It advised that other products containing gadolinium should not be used for kidney patients unless essential.

GE Healthcare appears to have reacted promptly to concerns about its product, but it objects to some of Thomsen’s work and the decisions by the European regulators. While it does not deny that there is an association between its product and NSF, it felt that to classify its drug as particularly risky was unfair.

It insists that a causal link to NSF has “not been established”. It believes a reporting bias may account for the high number of NSF cases linked to its product.

GE’s arguments helped sway regulators in the US, the biggest market for Omniscan. Doctors there have been warned of an association between contrast agents and NSF, but the FDA until recently said the data was too limited to classify some of the products as more risky than others.

The FDA has also not so far recommended that doctors stop using any of these products in kidney patients; it has simply issued safety warnings.

The FDA’s staff has been revisiting the evidence. This month the FDA said it now believes that Omniscan, Optimark and Magnevist pose higher risks. An FDA advisory panel went further: it recommended that Omniscan and Optimark should not be given to patients with severe kidney disease.

This represents a victory of sorts for Thomsen and may mean less chance of kidney patients suffering from this disease in the future. But his mind is now on another looming battle with GE in the High Court.

Academics and radiologists who attended Thomsen’s presentation say it is “ludicrous” that he faces a potentially ruinous legal action and are writing letters of support for him in the action. His PowerPoint presentation is summarised in the writ and appears to be an objective analysis of the association between Omniscan and NSF.

GE Healthcare said this weekend it believed the presentation was defamatory because it accused the firm of suppressing information and marketing its product when it was aware of possible problems.

Last week, however, a spokeswoman was unable to highlight any part of Thomsen’s presentation in which this allegation was made. The writ says the defamation may have been “by way of innuendo”.

Carter-Ruck, defending Thomsen on a no-win, no-fee basis, says the action should be struck out because the words complained of in the writ are clearly not defamatory.

GE Healthcare is also suing over an article that appeared under Thomsen’s name in Imaging Management, a medical journal, published in Belgium, which referred to rumours that the company had been warned about possible problems with its product.

Thomsen says he did not “write or publish” the words that are the subject of the complaint and that they were written by a journalist. He denies libel.

The GE Healthcare spokeswoman said the company supported scientific debate and sued Thomsen only as a “last resort” because it felt it was being defamed. It says it always reacted quickly and in the best interests of patients to any possible side-effects of Omniscan. It reiterated that the product was still safe to use for the vast majority of patients.

THOMSEN’S case, however, is merely the latest example of the courts being used against scientists who scrutinise drugs and treatments. Simon Singh, a science writer, is being sued by the British Chiropractic Association for describing some of their treatments as “bogus”.

Another victim of the libel laws is Peter Wilmshurst, a consultant cardiologist at Shrewsbury hospital. He is being sued by an American company, NMT Medical, after he questioned the effectiveness of a new heart implant device.

The Libel Reform Campaign is now urging capped damages, stricter controls on costs and a stronger public interest defence. John Kampfner, chief executive of Index on Censorship, which is among a number of groups supporting the campaign, said: “[Thomsen’s case] appears to be yet another shocking example of multinational corporations going after academics and scientists working in the public interest.”

Sir Ken Macdonald, the former director of public prosecutions, has already called for reform. “The idea that we are becoming an international haven for people to attack scientists is something that we should not be proud of,” he said.

The culture, media and sport committee of the House of Commons is compiling a report that is expected to recommend change in the country’s libel laws. Jack Straw, the justice secretary, has already announced a review.

Colin Blakemore, professor of neuroscience at Oxford University and an adviser to Sense about Science, which promotes the use of good science in public debates, said any review should ensure the libel courts are never used as a forum for assessing the risks of a treatment. “The risk is that the party with the most money will always win in what should be a dispassionate assessment of evidence,” he said.

GE, along with other manufacturers of contrast agents, now faces action in the American courts over its drug. A writ filed in the northern district of the Ohio federal court by lawyers acting for almost 500 plaintiffs says Omniscan was “routinely administered to kidney patients for years without warnings”.

These actions mean the debate over the side-effects of Omniscan and other products is likely to continue for many years to come.

Thomsen’s concern is about medical researchers who will need to highlight possible risks from drugs in the future. He says it is essential that they are not cowed by the prospect of appearing before the libel courts in London.

“It’s dangerous for the patient if we can’t frankly exchange views,” he said.

Jeff Gerth is a senior reporter at ProPublica (www.propublica.org), a US-based independent, non-profit newsroom that produces investigative journalism in the public interest.

Additional reporting: Jon Ungoed-Thomas

http://business.timesonline.co.uk/to...cle6962816.ece





Calling on Leakers to Help Document Local Misdeeds
Noam Cohen

It was a simple idea: use the power and elusiveness of the Internet to publish secret documents that someone, somewhere thought should be made public. And dare the government, any government, to shut you down.

Since its founding in late 2006, the Web site WikiLeaks.org has pursued that idea to the heights of commercial and political power, exposing internal memos about the dumping of toxic material off the African coast, the membership rolls of a racist British party, and most recently more than half a million pager messages from around the time of the 9/11 attacks, including some from government officials.

But the time has come for WikiLeaks, which calls itself "the first intelligence agency of the people," to think locally, says Daniel Schmitt, a German computer engineer who is a full-time unpaid spokesman for the Web site. "We are trying to bring WikiLeaks more directly to communities," he said in a telephone interview.

The organization has applied for a $532,000 two-year grant from the Knight Foundation to expand the use of its secure, anonymous submission system by local newspapers. The foundation's News Challenge will give as much as $5 million this year to projects that use digital technology to transform community news.

WikiLeaks proposes using the grant to encourage local newspapers to include a link to WikiLeaks' secure, anonymous servers so that readers can submit documents on local issues or scandals. The newspapers would have first crack at the material, and after a period of time (perhaps two weeks, Schmitt said) the documents would be made public on the main WikiLeaks page.

For an organization that publicizes hidden documents, WikiLeaks is adamant about protecting the anonymity of the document donors. "We maintain our servers at undisclosed locations, pass communication through protective jurisdictions, keep no traffic logs, and use military-grade encryption to protect sources and other confidential information," the proposal reads in part. Unlike other online applicants, then, WikiLeaks cannot point to a spike in Internet traffic in its pitch.

"We are not really in a position to do that," Schmitt said. "We have strong stands on anonymity and don't have log files on users."
http://news.cnet.com/Calling-on-leak...3-6250339.html





Secret Neo-Nazi Documents Published
John Oates

Exclusive Wikileaks is in the process of making a cache of documents and files from eleven different neo-Nazi organisations readable, and readily available, online.

The membership records and private messages are currently being formatted to make them easy for non-techies to read and will be released on the Wikileaks site shortly.

The organisation got massive publicity last year when it published a BNP membership list handed over by a disgruntled ex-member.

The raw data is already available but needs formatting so that "your grandmother can read them and google can find them... Journalists won't write about it otherwise."

The site is asking for volunteers with enough database skills to expand the fields and dump the data to text.

The compressed data is about 54MB.

The internal documents include more than just membership lists. There are what seem to be private internal messages, forum posts and email addresses.

Groups who have lost data, or had it stolen, include Aryan Nations, Blood and Honour, White Revolution, Volks Front and the Hammerskin Nation.

The data should be available online from the usual sources shortly. Feel free to discuss this below, but please do not post direct links to the files.
http://www.theregister.co.uk/2009/12...ership_leaked/





Comcast Settles Class-Action Suit Over Peer-To-Peer Delays

Operator will pay up to $16 million to customers who believe they were affected
Todd Spangler

Comcast has agreed to settle a class-action lawsuit alleging the nation's largest cable company impaired the use of peer-to-peer file-swapping applications, and will pay up to $16 million -- minus $3 million in attorneys' fees and other costs -- to customers who believe they were affected.

In a statement, Comcast said: "We are pleased to have reached a settlement in these consolidated class action lawsuits. Although we continue to believe that our network management practices were appropriate and in the best interests of our customers, we prefer to put this matter behind us and avoid a potentially lengthy and distracting legal dispute that would serve no useful purpose."

Comcast was sued by several customers who variously claimed breach of contract or that the operator violated consumer-protection laws by misrepresenting its broadband service as "unfettered" and that it provided "the fastest Internet connection." Those complaints were consolidated into multidistrict litigation in the U.S. District Court for the Eastern District of Pennsylvania.

The cable company's practice of impeding P2P traffic during times of peak congestion on its networks drew national attention -- and scrutiny from the Federal Communications Commission, then headed by chairman Kevin Martin -- after the Associated Press confirmed Comcast was limiting the ability of BitTorrent applications to transfer a copy of the King James Bible. Comcast's P2P "blocking" was originally publicized by Oregon resident Robb Topolski, who was a plaintiff in the class action.

The FCC subsequently issued an order requiring Comcast to change its network-management practices, finding that it had violated the agency's network neutrality principles. Comcast is fighting the FCC's ruling in federal court; the U.S. Court of Appeals for the D.C. Circuit is scheduled to hear oral arguments in the case on Jan. 8 and is expected to rule within three months.

In its statement Tuesday, Comcast said the network-management practices that were the subject of the consumer lawsuits "are a thing of the past. Our network management practices at issue here were consumer-friendly and clearly disclosed to our subscribers in full compliance with all applicable legal requirements. Our goal at all times was to manage our network effectively for the benefit of all of our customers. We are pleased the court rejected each and every one of the objections lodged by certain parties and preliminarily approved the settlement in all respects."

Two law firms -- Lexington Law Group and class-action firm Scott + Scott LLP -- were appointed as co-class counsel. "This settlement is a great result for Comcast customers," Lexington Law Group partner Mark Todzo said in a statement. "It creates an efficient and effective mechanism that will put money back in the customers' hands without them individually going to court."

Under the terms of the settlement, individual Comcast subscribers are eligible to receive a payment of up to $16.

Current or former Comcast broadband customers may be eligible for a refund or credit if: they used or attempted to use the Comcast service with peer-to-peer file-sharing software -- including Ares, BitTorrent, eDonkey, FastTrack or Gnutella -- any time from April 1, 2006, to Dec. 31, 2008, and were unable to share files or believe the speed at which files were shared was impaired; and/or they attempted but were unable to use the Comcast service with Lotus Notes to send e-mails any time from March 26, 2007, to Oct. 3, 2007.

Comcast customers who wish to make a claim for settlement benefits may call (877) 567-2754 or visit www.P2PCongestionSettlement.com for more information.
http://www.multichannel.com/article/...r_Delays.php





DirecTV Thrives on 'Deception,' Lawsuit Alleges
Bob Sullivan

A lawsuit filed this week by Washington state against DirecTV could have a secondary purpose: It could serve as a textbook for consumers on tricks companies play to take their money.

The suit filed by Washington Attorney General Rob McKenna alleges so many forms of misbehavior that he thinks DirecTV, the nation’s largest satellite TV provider, has "built deception into their business model." In an interview with msnbc.com, he also said that the firm has "left few deceptive tactics unused."

"It's amazing, the wide variety of ways they've taken advantage of their customers," he said.

Much of the case centers on alleged misleading advertisements, and on a series of pricey early termination fees the firm levies on customers. For example: Aggressive marketing campaigns tout service for $29.99 per month, but leave less clear the two-year obligation attached to the deal, or that the price almost doubles after the first year, the lawsuit says. After the first year, consumers face a Hobson’s choice – either pay the higher price or cough up an early-termination fee of up to $480.

"It is what amounts to a bait-and-switch strategy,” McKenna said. “They use a variety of lures to bring people in at prices the customer doesn't actually pay."

But that's just the tip of the iceberg in the complaint, which accuses DirecTV of 16 different causes of action.

Before filing the lawsuit on Monday, McKenna’s office had received 375 consumer complaints about DirecTV this year -- more than any other company. Another 59 complaints arrived in the 24 hours after the lawsuit was filed, he said.

In a statement, DirecTV denied the accusations.

“We always strive to provide 100 percent customer satisfaction but, to put it in perspective, we are talking about less than one percent of our customer base in the entire state," it said. "The vast majority of our customers in Washington, and the U.S. for that matter, understand our lease agreement and are happy with our overall service. We are disappointed that the state elected to file a lawsuit. We believe their allegations lack merit, and we are confident the court will agree with us.”

McKenna said he'd been working with DirecTV for months in an attempt to avoid a court battle, and he was surprised DirecTV refused to change its business practices voluntarily.

Other state attorneys general are also considering suing DirecTV, he said, declining to identify them. Earlier this year, a group of 46 states settled a lawsuit with DirecTV competitor DISH Network. The firm was accused of automatically debiting consumers’ accounts without their consent. The firm admitted no wrongdoing but agreed to change its business practices and refund $6 million to consumers.

"When we go after a company, it's because we have them dead to rights," McKenna said. "Most companies just want to settle. ... If DirecTV wants to take on the states, that's their choice."

Here are a few of the other allegations from the complaint:

* DirecTV requires a 24-month commitment but offers only 12 months' worth of discounts. Terms for the offer are spelled out in newspaper ads in 5.5-point type, barely readable to the naked eye.
* The sales scripts used by telemarketers include nothing about the terms and conditions on the discount plan.
* To receive the $29 monthly price, consumers must use an automatic payment method. Failure to do so adds $5 to the monthly bill.
* Customers who refuse a credit check or have bad credit face "hundreds of dollars" in extra fees.
* In order to get the promotional rate, some customers are required to file rebate forms. The rebate, which is applied as a deduction to monthly bills and spread out over many months, can be voided if a customer is late with a payment, bumping them up to a higher price. Also, some consumers complain that their rebates have been unfairly denied. And because rebates can take time to process, some customers say they were forced to pay a higher price during their first months of service.
* When consumers complain about defective equipment and the equipment is repaired, their service contract requirement is often renewed without their knowledge for another two years.
* Consumers report being signed up for a $5.99 monthly service maintenance plan they didn't want. When they cancel, they can be charged a $10 early termination fee.
* Consumers who are given "Free HBO" stations for a trial period are often rolled into a paid subscription without their express consent.
* Consumers who buy a DirecTV unit at an electronics store like Best Buy have been unknowingly enrolled in a lease agreement. Even though the units are purchased like any other electronic equipment -- often for around $100 -- the consumers don’t own them. If they fail to activate DirecTV service, they can face a $150 "activation failure" fee. If they turn off the service early, they can face a $480 early-termination fee and must return the unit to DirecTV.

DirecTV's contract with consumers is "so one-sided as to grossly favor the defendants," McKenna said. That's assuming someone can find the contract terms.

But McKenna's office says all these conditions on DirecTV agreements never appear in a single place. Instead, using an approach called "layering," the terms and conditions can appear in various places: on store receipts, on order forms and on the company Web site.

"There's no single form with all the rules," he said. "That's unfair to consumers."

DirecTV is already facing legal action from consumers on similar issues. A class action lawsuit filed in California earlier this year alleges that the company raids customer bank accounts to collect early termination fees without consumers' consent. One of the plaintiffs, Mary Cox of Fontana, said a DirecTV customer service agent would only identify himself as "Ding-A-Ling" when she phoned to dispute an unauthorized $430 withdrawal from her account.

DirecTV faces challenges in the marketplace because its customer start-up costs are considerably higher than those of cable firms. New satellite users must obtain a set-top box, a dish and expert installation. Without offering free installation, the firm would have trouble matching similar sign-up deals from competitors. So the firm heavily subsidizes start-up equipment costs, and has adopted tactics similar to those used by cell phone carriers to ensure that its setup subsidies aren't wasted.

Despite such tactics, the firm is facing stiff competition for its 17 million subscribers. In its most recently reported quarter, DirecTV told investors that it is losing customers at a "monthly churn rate of 1.72 percent." The firm blamed aggressive competitor promotions and "stricter" retention policies that "tighten up our offers to existing customers."

Red Tape Wrestling Tips

If you feel you've been treated unfairly by DirecTV, contact your state attorney general immediately. If a case is filed in your state, those with complaints on file will be the first in line to receive restitution should the states prevail.

If you are considering DirecTV -- or any pay TV service – read this complaint carefully. (PDF) All the pay TV services have conjured up complicated trial offers, tricky rebates and so on. The DirecTV lawsuit is an excellent summation of the kinds of things to watch for.

Discount trial offers -- say, $29.99 service for 12 months -- are excellent, but know when you sign up that you are playing a game. It's critical to remember when you signed up, so you can switch services or ask for another discount before the higher rate kicks in. One idea: put a small sticky note on your cable or satellite box with your discount end date, so you don't forget. And of course, always ask about early termination fees.
http://redtape.msnbc.com/2009/12/a-l...xtbook-fo.html





As Phones Do More, They Become Targets of Hacking
Brad Stone

Mobile phones are becoming ever more like personal computers. That means they are also becoming more vulnerable to traditional computer menaces like hackers and viruses.

This year, the Russian antivirus company Kaspersky Lab reported on a new malicious program that stole money by taking over Nokia phones and making small charges to the owners’ wireless accounts.

Last month, an Australian student created an experimental worm that hopscotched across “jailbroken” iPhones, which are phones altered to run software Apple has not authorized. The mischievous worm did not cause any damage; it just installed a photo of the ’80s pop star Rick Astley. But to security experts, it suggested that pernicious attacks on iPhones are possible.

Where there are perceived security threats, there are always entrepreneurs and investors looking to capitalize on them — and build profitable businesses. This month Khosla Ventures, a prominent Silicon Valley venture capital firm, led an investment group that injected $5.5 million into a fledgling security start-up called Lookout.

Lookout, based in San Francisco, was previously a consulting firm called Flexilis run by recent graduates of the University of Southern California. Now it wants to be the security giant of the mobile world, similar to the role Symantec plays in the PC market.

This year, Lookout began testing security software for phones running the Windows Mobile and Android operating systems, and it will soon introduce security applications for the BlackBerry and iPhone. The software protects phones against rogue programs and gives phone owners the ability to remotely back up and erase the data on their phones. It also lets them track the location of their handset on the Web.

A basic version of the software is free, while the company plans to charge a monthly subscription for a version with more features.

“It feels a lot like it did in 1999 in desktop security,” said John Hering, Lookout’s 26-year-old chief executive, who for years has done research demonstrating security vulnerabilities in phones. “People are using the mobile Web and downloading applications more than ever before, and there are threats that come with that.”

Lookout represents the latest attempt to build a new business that capitalizes on the surge of smartphones. Thousands of companies making mobile games, shopping tools and other programs have sprung up in the last two years as the iPhone in particular has taken off. Lookout and its investors believe this is the right time to get into the market.

“The rules of mobile are different,” said Vinod Khosla, founder of Khosla Ventures, which also recently invested in Square, a mobile payments start-up. “This is people’s most personal computer, and it needs to be protected.”

Companies like Research In Motion, maker of the BlackBerry, and Good Technology, a Silicon Valley-based mobile messaging firm, already offer mobile security tools, but their systems are aimed at businesses. Security firms like Symantec also have mobile security divisions, and a five-year-old company, Trust Digital, based in McLean, Va., has set its sights on this market.

Lookout says it can address the unique challenges of protecting cellphones, like preserving battery life. While the company will not give details, it says it has figured out how to get its software to work on the iPhone, which does not allow non-Apple programs to operate in the background, as security software typically does.

Mr. Hering and his co-founder, Kevin Mahaffey, 25, have been publicly demonstrating the weaknesses of mobile phones for some time. In 2005, they camped outside the Academy Awards ceremony in Hollywood and scanned the phones of stars walking the red carpet, using a short-range Bluetooth wireless connection. They found that as many as 100 of the phones were vulnerable to hacking over such a connection.

That year, at the Black Hat security conference in Las Vegas, they hacked into a phone over a mile away using Bluetooth.

Lookout’s founders and backers concede that for now, snoops and bad guys pose much less of a threat to cellphones than to PCs. But they believe there is an immediate need for software that preserves and protects a phone’s data, from e-mail to corporate information, and they say current systems do not work when a family or business has multiple types of cellphones on various wireless networks.

For instance, a small business could install the Lookout software on many different types of devices, back up all the data and remotely erase a phone if, say, an employee leaves it in a cab.

Jeff Moss, a security expert and organizer of the Black Hat conference, said mobile security had historically “been a solution in search of a problem.” But he said mobile viruses had recently become more common in Asia. His own Nokia N97 phone even caught a bug recently, though software he was running from F-Secure, a Finnish security company, caught it in time.

“The tipping point will be when we’re using the phone to shop and conduct banking,” Mr. Moss said. “The more you do with the phone, the more valuable a target it becomes.”
http://www.nytimes.com/2009/12/21/te...gy/21cell.html





Gadget Patrol: 21st Century Phone

(This isn't a product review, it's a big-picture overview brought to you from the universe of "Halting State".)
Charlie Stross

It shouldn't be news to anyone that smartphones — as a category — really took off in the second half of the noughties. Before 2005, few people bothered with PDAs, and fewer still with phones that had keyboards and could browse the web or send email. Current projections, however, show 25% of all phones sold in 2010 being smartphones — and today's smartphone is a somewhat more powerful computer than 2002's laptop.

At the same time, the winners in 2005's smartphone market (Palm, Windows Mobile, Symbian Series 60, 80, and UIQ) are losing ground rapidly (PalmOS is already dead, modulo the Hail Mary pass that is WebOS on the Pré) while strange new mutants slouch towards market dominance — Android, Mac OS X, and maybe Maemo.

What's happening?

Here's my hypothesis ...

Pre-2005, digital mobile phones typically ran on GSM, with GPRS data limited to 56kbps, or on Verizon's CDMA. This badly choked their ability to do anything useful and internet-worthy. By 2005, the first 3G networks based on WCDMA (aka UMTS) began to open up. By 2009, 3G HSDPA networks could carry up to 7.2mbps. The modem-grade data throughput of the mid-noughties smartphone experience has been replaced by late-noughties broadband-grade throughput, at least in the densely networked cities where most of us live. (I am not including the rural boondocks in this analysis. Different rules apply.)

To the mobile phone companies, 3G presented a headache. They typically offered each government billions for the right to run services over the frequencies freed up by the demise of old analog mobile phone services and early TV and other broadcast systems; how were they to monetise this investment?

They couldn't do it by charging extra for the handsets or access, because they'd trained their customers to think of mobile telephony as, well, telephony. But you can do voice or SMS perfectly well over a GSM/GPRS network. What can you do over 3G that justifies the extra cost?

Version 1 of their attempt to monetise 3G consisted of walled gardens of carefully cultivated multimedia content — downloadable movies and music, MMS photo-messaging, and so on. The cellcos set themselves up as gatekeepers; for a modest monthly fee, the customers could be admitted to their garden of multimedia delights. But Version 1 is in competition with the internet, and the roll-out of 3G services coincided (and competed) with the roll-out of wifi hotspots, both free and for-money. It turns out that what consumers want of a 3G connection is not what a mobile company sees fit to sell them, but one thing: bandwidth. Call it Version 2.

Becoming a pure bandwidth provider is every cellco's nightmare: it levels the playing field and puts them in direct competition with their peers, a competition that can only be won by throwing huge amounts of capital infrastructure at their backbone network. So for the past five years or more, they've been doing their best not to get dragged into a game of beggar-my-neighbour, by expedients such as exclusive handset deals (ever wondered why AT&T in America or O2 in the UK allowed Apple to tie their hands and retain control over the iPhone's look and feel?) and lengthening contractual lock-in periods for customers (why are 18-month contracts cheaper than 12-month contracts?). And the situation with international data roaming is dismal. It doesn't hit Americans so much, but here in the UK, if I travel over an hour by air, the odds are good that I'll be paying £6 per megabyte for bandwidth. It's as if my iPhone's IQ drops by 80 points whenever I leave home.

Enter: Apple and Google.

Apple are an experience company. They're a high-end marque; if they were in the automobile business, they'd be BMW, Mercedes, and Porsche rolled into one. They own about 12% of the PC market in the USA ... but 91% of the high end of the PC market (laptops over $999, desktops over $699). How they got into the mobile phone market is an odd and convoluted story, but it's best to view it as a vertical upwardly-mobile extension of the MP3 player market (from their point of view), which has taken on a lucrative life of its own. Apple's unique angle is the user experience. Without OS X to differentiate them from the rest of the market, their computers would just be overpriced PCs. So it should be no surprise that Apple's runaway hit iPhone business team have a single overriding goal: maintain control of the platform and keep it different (and aspirational).

Apple don't want to destroy the telcos; they just want to use them as a conduit to sell their user experience. Google, however, are another matter.

Google is an advertising corporation. Their whole business model is predicated on breaking down barriers to access — barriers which stop the public from accessing rich internet content plastered with Google's ads. Google want the mobile communications industry to switch to Version 2, pure bandwidth competition. In fact, they'd be happiest if the mobile networks would go away, get out of the users' faces and hand out free data terminals with unlimited free bandwidth. More bandwidth, more web browsing, more adverts served, more revenue for Google. Simple!

This is where the Nexus One announced last week may be significant. If the rumours are true — that they're pushing it at a low or subsidized price, and have strong-armed T-Mobile (the weakest of the US cellcos) into providing a cheap data-only mobile tariff for it, and more significantly access to VoIP and cheap international data roaming — then they've got a Trojan horse into the mobile telephony industry.

I think Google are pursuing a grand strategic vision of destroying the cellcos' entire business model — of positioning themselves as value-added gatekeepers providing metered access to content — and their second-string model of locking users in by selling them premium handsets (such as the iPhone) on a rolling contract.

They intend to turn 3G data service (and subsequently, LTE) into a commodity, like wifi hotspot service only more widespread and cheaper to get at. They want to get consumers to buy unlocked SIM-free handsets and pick cheap data SIMs. They'd love to move everyone to cheap data SIMs rather than the hideously convoluted legacy voice stacks maintained by the telcos; then they could piggyback Google Voice on it, and ultimately do the Google thing to all your voice messages as well as your email and web access.

(This is, needless to say, going to bring them into conflict with Apple. Hitherto, Apple's iPhone has been good for Google: iPhone users do far more web surfing — and Google ad-eyeballing — than regular phone users. But Apple want to maintain the high quality Apple-centric user experience and sell stuff to their users through the walled garden of the App Store and the iTunes music/video store. Apple are an implicit threat to Google because Google can't slap their ads all over those media. So it's going to end in handbags at dawn ... eventually.)

The real message here is that if Google succeeds, the economic basis of your mobile telephony service in 2019 is going to be unrecognizably different from that of 2009. Some of the biggest names in phone service (T-mobile? Orange? Vodafone? AT&T? Verizon?) are going to go the way of Pan Am and Ma Bell by then; the ones left standing will be the ones with the best infrastructure (hint: that doesn't look like AT&T right now — by some analyses, AT&T misunderstand TCP/IP so badly that their network trouble is self-inflicted) and best interoperability (goodbye Verizon), selling bits at the lowest price to punters who buy their cheap-to-disposable handsets (phones are part of the perpetually deflating consumer electronics sector; today's $350 BoM should be down to under $100 by 2019, for something a good deal more powerful) unlocked in WalMart and take ditchwater-cheap international roaming service for granted.

Probably around the time VoIP takes over from the current model, we'll see something not unlike DNS emerge for mapping OpenID or other internet identities onto the phone number address space. (God, I hate phone numbers. Running a phone service that forces everyone to use seven to twelve digit numbers is like running an internet that forces everyone to use raw IP addresses.) Then the process will be complete, and things will have come full-circle, and the internet will have eaten the phone system.

What's good for the internet is good for Google. Right now, the phone companies are not good for the internet. If I'm right about the grand strategy, the Googlephone will change that.
http://www.antipope.org/charlie/blog...ury_phone.html





Is the Success of Google's Android a Threat to Free Software?
Glyn Moody

When Google first announced its Linux-based Android mobile phone platform just over two years ago, many were sceptical. After all, the reasoning went, the world of mobile phones is very different from that of computers. Similar doubts greeted the first Android phone, the HTC Dream (also known as the T-Mobile G1), when it appeared last year. But something strange has happened in the last twelve months, with a growing chorus of approval for the Android platform and its phones.

For example, numbers are emerging that suggest that Android has established a real beachhead in the mobile market:

Quote:
Google’s open-source Android operating system accounted for more than one in four of the smartphone ads served in November by AdMob, the mobile ad network that Google agreed to buy for $750 million six weeks ago. That’s up from 20 percent in October.

That means Apple’s iPhone, which seemed untouchable only a few months ago, has a fast-rising competitor. Apple’s share of ad requests is twice as big — 55% — but Android traffic is eating into iPhone growth, rather than being pushed aside by the iPhone as happened to Nokia.
It's also striking that there is now a palpable sense of excitement around some Android phones, notably Motorola's Droid and HTC models like the Hero (disclosure: I've recently bought one of these). This may not quite be at the level of the mindless worship enjoyed by the iPhone, but it's getting surprisingly close. For example, here are some figures on the sales of the Droid:

Quote:
The Motorola Droid is doing more than just bashing the iPhone -- it's also smashing the ceiling when it comes to mainstream Android appeal. A full 250,000 people snatched up Droids during the phone's first week in stores, some newly released data suggests. That's four times the number of launch-week sales estimated for the myTouch 3G, which had previously been considered the fastest-selling Android device.



So where does the Droid fall within the smartphone spectrum? According to Flurry, Droid's 250,000 figure puts it well above the myTouch, but well below the iPhone. The firm says 60,000 myTouch 3G handsets were sold during its launch week, while 1.6 million iPhones went out during its first seven days.
The same article puts those numbers in context:

Quote:
Yes, the iPhone's sales figure is significantly higher than the Droid's. But don't forget a couple of important factors:

First, the iPhone 3GS was building upon a massive base of existing iPhone owners, many of whom were guaranteed to be grabbing at Apple's updated model (or, let's be honest, practically anything new Apple offered) the second it hit store shelves. Droid didn't share this advantage; it was a first-generation product without an established fanbase.

Second, and equally noteworthy, the iPhone 3GS launched in eight countries during its first week. The Droid launched only in America. Taking that into consideration, the difference in sales suddenly doesn't seem quite so overwhelming.
And remember: these are *Linux*-based systems that lots of ordinary users are getting worked up about; that's really something quite new and important.

Finally, and just as critically for the future success of the platform, it's worth noting that it's not just the users who are waxing enthusiastic about the Android phones: developers are starting to code for it in big numbers too. Figures from Androlib.com show a healthy upward-trending graph for the number of apps in the Android market. Just how good those apps are (many are feeble), and whether the current figure of 20,000 is completely accurate (Google says it isn't), is largely beside the point: what's important is that the number has doubled in just a few months, which indicates a growing interest in the platform.

And that's where the problems start. As far as I can tell, the majority of these apps are closed source - it's not something that is flagged up much, but, symptomatically, the Google Android Developer Challenge doesn't require entries to be open source. Which suggests that we are seeing the rise of something that should concern everyone in the free software world: a popular system built on top of Linux, but running closed-source apps.

It already looks increasingly likely that the world of smartphones will be dominated by two platforms: the iPhone and Android. If, as some believe, Google does come out with its own branded mobile, this will give an even greater impetus to Android's uptake. But while the vast majority of its apps are closed source, they will not help spread real user freedom, or offer much of an alternative to Apple's tightly controlled approach.

Worse, if efforts to enable Android apps to run on distros like Ubuntu succeed, then we may see closed-source software being used on the free software stack there, too. Ironically, Android's success could harm not just open source's chances in the world of mobile phones, but even on the desktop.

The free software community needs to address these problems by encouraging many more developers to build great Android apps that are truly free. In fact, we have an excellent example of how to do that with the rich ecosystem of Firefox add-ons that are free software. Moreover, this should be an attractive challenge to ambitious coders given the exciting possibilities that mobile offers for new kinds of programs (and not just those based on trendy areas like augmented reality). Maybe the time has come to shift the emphasis away from trying in vain to conquer the legacy desktop, towards excelling on mobile, likely to be the main computing platform for most of humanity.
http://www.linuxjournal.com/content/...-free-software





The BBC's Digital Rights Plans Will Wreak Havoc On Open Source Software

The BBC is trying to dictate what kinds of televisions and set-top boxes we use to watch its programmes
Cory Doctorow

Last summer, the BBC tried to sneak "digital rights management" into its high-def digital broadcasts.

Now, generally speaking, the BBC isn't allowed to encrypt or restrict its broadcasts: the licence fee payer pays for these broadcasts, and no licence fee payer woke up today wishing that the BBC had added restrictions to its programming.

But the BBC tried to get around this, asking Ofcom for permission to encrypt the "metadata" on its broadcasts – including the assistive information used by deaf and blind people and the "tables" used by receivers to play back the video. The BBC couched this as a minor technical change, and Ofcom held a very short, very quiet consultation, but was overwhelmed by a flood of negative submissions from the public and from technologists who understood the implications of this move.

Fundamentally, the BBC is trying to leverage its broadcast licence into control over the devices that can receive broadcasts. That is, in addition to deciding what shows to put on the air, the Beeb wants the power to decide what kinds of tellies and set-top boxes will be able to display and record those shows – and it wants the power to control the design of all the devices that might be plugged into a TV or set-top box. This is an unprecedented amount of power for a broadcaster to have.

As Ofcom gears up for a second consultation on the issue, there's one important question that the BBC must answer if the implications of this move are to be fully explored, namely: how can free/open source software co-exist with a plan to put DRM on broadcasts?

A brief backgrounder on how this system is meant to work: the BBC will encrypt a small, critical piece of the signal. To get a key to decrypt the scrambled data, you will need to sign onto an agreement governed by a consortium called the Digital Transmission Licensing Administrator (some of the agreement is public, but other parts are themselves under seal of confidentiality, which means that the public literally isn't allowed to know all the terms under which BBC signals will be licensed).

DTLA licenses a wide variety of devices to move, display, record, and make limited copies of video. Which programmes can be recorded, how many copies, how long recordings can last and other restrictions are set within the system. To receive a licence, manufacturers must promise to honour these restrictions. Manufacturers also must promise to design their devices so that they will not pass video onto unapproved or unlicenced devices – only DTLA-approved boxes can touch or manipulate or play the video.

DTLA enforces these rules through a system of penalties for non-compliant vendors. It also has the power to "revoke" devices after they are sold to you, so that the BBC's signals will refuse to play on your set-top box if the DTLA determines that its security is inadequate and they pass it a revocation message (even though you always used your box in accordance with the law).

With DTLA devices, the integrity and usefulness of your home theatre is subject to the ongoing approval of the consortium, and they can switch it off if they decide, at any time in the future, that they don't trust it any more.

The entire DTLA system relies on the keys necessary to authenticate devices and unscramble video being kept secret, and on the rules governing the use of keys being inviolable. To that end, the DTLA "Compliance and Robustness Agreement" (presented as "Annex C" to the DTLA agreement) has a number of requirements aimed at ensuring that every DTLA-approved device is armoured against user modification. Keys must be hidden. Steps must be taken to ensure that the code running on the device isn't modified. Failure to take adequate protection against user modification will result in DTLA approval being withheld or revoked.

This is where the conflict with free/open source software arises.

Free/open source software, such as the GNU/Linux operating system that runs many set-top boxes, is created cooperatively among many programmers (thousands, in some cases). Unlike proprietary software, such as the Windows operating system or the iPhone's operating system, free software authors publish their code and allow any other programmer to examine it, make improvements to it, and publish those improvements. This has proven to be a powerful means of quickly building profitable new businesses and devices, from the TomTomGo GPSes to Google's Android phones to the Humax Freeview box you can buy tonight at Argos for around £130.

Because it can be adapted by anyone, free software is an incredible source of innovative new ideas. Because it can be used without charge, it has allowed unparalleled competition, dramatically lowering the cost of entering electronics markets. In short, free software is good for business, it's good for the public, it's good for progress, and it's good for competition.

But free software is bad for DTLA compliance.

Free software is intended to be examined and modified by all comers.

Generally, the licence terms for free software require that it is licensed for public examination and adaptation. It is literally impossible for a device to be both "open" and for it to prevent its users from retrieving keys hidden in its guts, or from changing the code that runs on it. This, of course, is totally incompatible with the DTLA requirement to hide keys and prevent modification of code.

And so, when the BBC threatens to infect its high-def broadcasts with DTLA, it also threatens to remove free/open software from consideration for any device that can play, record, or manipulate the video that the licence fee pays for. It means that you can't use a GNU/Linux phone to watch a show, or an open video player like VLC on your laptop. It means that your kids can't use free/open video-editing software to cut some of last night's news into a presentation for class.

It means that British entrants into the DTV device market can't avail themselves of the free software that their competitors all over the world are using, and will have to spend fortunes reinventing the wheel, creating operating systems and programs that do the same things as their free counterparts, but in such a way as to enforce restrictions against the device's owner.

Ofcom is meant to guard the public interest in matters such as these. If the public interest is to be upheld here, the BBC must explain how it intends to do the impossible: add DRM without banning free/open source development.
http://www.guardian.co.uk/technology...-cory-doctorow





To Deal With Obsession, Some Unfriend Facebook
Katie Hafner

Facebook, the popular networking site, has 350 million members worldwide who, collectively, spend 10 billion minutes there every day, checking in with friends, writing on people’s electronic walls, clicking through photos and generally keeping pace with the drift of their social world.

Make that 9.9 billion and change. Recently, Halley Lamberson, 17, and Monica Reed, 16, juniors at San Francisco University High School, made a pact to help each other resist the lure of the login. Their status might as well now read, “I can’t be bothered.”

“We decided we spent way too much time obsessing over Facebook and it would be better if we took a break from it,” Halley said.

By mutual agreement, the two friends now allow themselves to log on to Facebook on the first Saturday of every month — and only on that day.

The two are among the many teenagers, especially girls, who are recognizing the huge distraction Facebook presents — the hours it consumes every day, to say nothing of the toll it takes during finals and college applications, according to parents, teachers and the students themselves.

Some teenagers, like Monica and Halley, form a support group to enforce their Facebook hiatus. Others deactivate their accounts. Still others ask someone they trust to change their password and keep control of it until they feel ready to have it back.

Facebook will not reveal how many users have deactivated service, but Kimberly Young, a psychologist who is the director of the Center for Internet Addiction Recovery in Bradford, Pa., said she had spoken with dozens of teenagers trying to break the Facebook habit.

“It’s like any other addiction,” Dr. Young said. “It’s hard to wean yourself.”

Dr. Young said she admired teenagers who came up with their own strategies for taking Facebook breaks in the absence of computer-addiction programs aimed at them.

“A lot of them are finding their own balance,” she said. “It’s like an eating disorder. You can’t eliminate food. You just have to make better choices about what you eat.” She added, “And what you do online.”

Michael Diamonti, head of school at San Francisco University High School, which Monica and Halley attend, said administrators were pondering what the school’s role should be, since students used Facebook mostly at home, although excessive use could affect their grades.

“It’s such uncharted territory,” Dr. Diamonti said. “I’m definitely in support of these kids recognizing that they need to exercise some control over their use of Facebook, that not only is it tremendously time consuming but perhaps not all that fulfilling.”

In October, Facebook reached 54.7 percent of people in the United States ages 12 to 17, up from 28.3 percent in October last year, according to the Nielsen Company, the market research firm.

Many high school seniors, now in the thick of the college application process, are acutely aware of those hours spent clicking one link after another on the site.

Gaby Lee, 17, a senior at Head-Royce School in Oakland, Calif., had two weeks to complete her early decision application to Pomona College. Desperate, she deactivated her Facebook account.

The account still existed, but it looked to others as if it did not.

“No one could go on and write on my wall or look at my profile,” she said.

The habit did not die easily. Gaby said she would sit down at the computer and find that “my fingers would automatically go to Facebook.”

In her coming book, “Alone Together” (Basic Books, 2010), Sherry Turkle, a psychologist who is director of the Initiative on Technology and Self at the Massachusetts Institute of Technology, discusses teenagers who take breaks from Facebook.

For one 18-year-old boy completing a college application, Professor Turkle said, “Facebook wasn’t merely a distraction, but it was really confusing him about who he was,” and he opted to spend his senior year off the service. He was burned out, she said, trying to live up to his own descriptions of himself.

But Facebook does not make it easy to leave for long. Deactivating an account requires checking off one of six reasons — “I spend too much time using Facebook,” is one. “This is temporary. I’ll be back,” is another. And it is easy to reactivate an account by entering the old login and password.

For Walter Mischel, a professor of psychology at Columbia University, who studies self-control and willpower, “what’s fascinating about this is that it involves spontaneous strategies of self-control, of trying to exert willpower after getting sucked into a huge temptation.”

Professor Mischel performed a now-famous set of experiments at Stanford University in the late 1960s in which he tested young children’s ability to delay gratification when presented with what he called “hot” temptations, like marshmallows.

Some managed to stop themselves; others could not.

“Facebook is the marshmallow for these teenagers,” Professor Mischel said.

Rachel Simmons, an educator and the author of “The Curse of the Good Girl: Raising Authentic Girls with Courage and Confidence” (Penguin Press, 2009), said Facebook’s new live feed format had made the site particularly difficult to tear oneself away from.

“You’re getting a feed of everything everyone is doing and saying,” Ms. Simmons said. “You’re literally watching the social landscape on the screen, and if you’re obsessed with your position in that landscape, it’s very hard to look away.”

It is that addictive quality that makes having a partner who knows you well especially helpful. Monica said that when she was recently in bed sick for several days, she broke down and went on Facebook. And, of course, she felt guilty.

“At first I lied,” Monica said. “But we’re such good friends she could read my facial expression, so I ’fessed up.”

As punishment, the one who breaks the pact has to write something embarrassing on a near-stranger’s Facebook wall.

After several failed efforts at self-regulation, Neeka Salmasi, 15, a sophomore at Greenhills School in Ann Arbor, Mich., finally asked her sister, Negin, 25, to change her Facebook password every Sunday night and give it back to her the following Friday night.

Neeka quickly saw an improvement in her grades.

Still better, she said, is that her mother no longer visits her room “every half an hour to see if I was on Facebook or doing homework.”

“It was really annoying,” she said.

Last year, Magellan Yadao, 18, a senior at Northside College Preparatory High School in Chicago, went on a 40-day Facebook fast for Lent.

“In my years as a Catholic, I hadn’t really chosen something to give up that was very important to me,” Magellan said in an e-mail message. “Apparently, Facebook was just that.”

In his follow-up work, Professor Mischel said he found that some of the children who delayed gratification with the marshmallows turned out to be higher achievers as adults.

Halley said she and Monica expect their hiatus to continue at least through the rest of the school year. She added that they were enjoying a social life lived largely offline.

“Actually, I don’t think either one of us wants it to end,” she said.
http://www.nytimes.com/2009/12/21/te...1facebook.html





New Programs Aim to Lure Young Into Digital Jobs
Steve Lohr

Growing up in the ’70s, John Halamka was a bookish child with a penchant for science and electronics. He wore black horn-rimmed glasses and buttoned his shirts up to the collar.

“I was constantly being called a geek or a nerd,” he recalled, chuckling.

Dr. Halamka grew up to be something of a cool nerd, with a career that combines his deep interests in medicine and computing, and downtime that involves rock climbing and kayaking.

Now 47, Dr. Halamka is the chief information officer at the Harvard Medical School, a practicing emergency-ward physician and an adviser to the Obama administration on electronic health records.

Hybrid careers like Dr. Halamka’s that combine computing with other fields will increasingly be the new American jobs of the future, labor experts say. In other words, the nation’s economy is going to need more cool nerds. But not enough young people are embracing computing — often because they are leery of being branded nerds.

Educators and technologists say two things need to change: the image of computing work, and computer science education in high schools. Teacher groups, professional organizations like the Association for Computing Machinery and the National Science Foundation are pushing for these changes, but so are major technology companies including Google, Microsoft and Intel. One step in their campaign came the week of Dec. 7, National Computer Science Education Week, which was celebrated with events in schools and online.

Today, introductory courses in computer science are too often focused merely on teaching students to use software like word processing and spreadsheet programs, said Janice C. Cuny, a program director at the National Science Foundation. The Advanced Placement curriculum, she added, concentrates narrowly on programming. “We’re not showing and teaching kids the magic of computing,” Ms. Cuny said.

The agency is working to change this by developing a new introductory high school course and seeking to overhaul Advanced Placement courses as well. It hopes to train 10,000 high school teachers in the modernized courses by 2015.

One goal, Ms. Cuny and others say, is to explain the steady march and broad reach of computing across the sciences, industries, culture and society. Yes, they say, the computing tools young people see and use every day — e-mail, text-messaging and Facebook — are part of the story. But so are the advances in field after field that are made possible by computing, like gene-sequencing that unlocks the mysteries of life and simulations that model climate change.

That message must resonate with parents and school administrators, they say, if local school districts are to expand their computer science programs.

“We need to gain an understanding in the population that education in computer science is both extraordinarily important and extraordinarily interesting,” said Alfred Spector, vice president for research and special initiatives at Google. “The fear is that if you pursue computer science, you will be stuck in a basement, writing code. That is absolutely not the reality.”

Kira Lehtomaki can attest to this. She came to computing by way of art and movies. Art projects, not computers, were her childhood passions. She loved watching videos of Disney movies like “Sleeping Beauty” and “Dumbo,” and wanted to grow up to be one of those artists who stirred life into characters using pencils and paper. She even took a summer job at Disneyland as a “cookie artist,” painting designs and Mickey Mouse faces on baked goods, because she was allowed to spend a few days with Disney’s animators.

Yet as a 19-year-old college student in 2001, Ms. Lehtomaki saw the Pixar film “Monsters, Inc.” and was impressed by how good computer animation had become. At the University of Washington, she pursued computer graphics, graduating with a degree in computer science.

Today Ms. Lehtomaki, 27, is an animator at Walt Disney Animation Studios, working on “Rapunzel,” which is scheduled to be released next year. She does her drawing on a computer, using specialized graphics and modeling software. Her computer science education, she said, is an asset every day in her work, less for technical skills than for what she learned about analytic thinking.

“Computer science taught me how to think about things, how to break down and solve complex problems,” Ms. Lehtomaki said.

Reformulating a seemingly difficult problem into one you already know how to solve is “computational thinking,” which the new high school courses are intended to nurture. Some schools in Los Angeles County are experimenting with the introductory course, called “Exploring Computer Science,” including South East High School in South Gate, Calif. Last year, 35 students were in a pilot program, and this year the course is being taken by 130 students.

Most of the school’s 2,800 students come from low-income families and qualify for free or subsidized school lunches. In the new class, students create projects that address subjects of their interest, like gang violence and recycling, said John Landa, who teaches the course.

Others choose to make simple computer games.

“It’s much more engaging,” Mr. Landa said. “And the idea is not to have most or all of them go into computer science, but to give kids a chance to try things out. The course is designed to give kids a sense of computational thinking no matter what they do after this.”

A solid grounding in computing, experts say, promises rewards well beyond computer science. Most new jobs in the modern economy will be heavily influenced by technology, said Robert Reich, a professor at the University of California, Berkeley, and former labor secretary in the Clinton administration. And they will require education beyond high school, though often two years or less.

“Most of them will not be pure technology jobs, designing computer software and hardware products, but they will involve applying computing and technology-influenced skills to every industry,” Mr. Reich said. “Think Geek Squads in other fields,” he added, referring to a popular tech-support service.

These workers, he said, will be needed in large numbers to install, service, upgrade and use computer technology in sectors like energy and health care.

The Obama administration, as part of the economic stimulus, has increased federal financing for science and technology education. More immediately, its multibillion-dollar program to accelerate the adoption of computerized health records may generate more than 200,000 jobs, analysts estimate.

“These are jobs for what I think of as digital technicians,” Mr. Reich said. “And they are at the core of the new middle-wage middle class.”

Still, the revamped high school courses, educators say, should entice more young people into computer science careers as well.

At South East High School, Mario Calleros, an 18-year-old senior, may be one of them. He took the new course last year, after his interest was piqued by his experience playing computer games. “I really wanted to know how they worked,” he said.

Mr. Calleros picked up a sense of game technology by making his own, an action game with a knights-in-armor motif. Last summer, he won an internship at the Center for Embedded Networked Sensing at the University of California, Los Angeles. In the summer program, Mr. Calleros and a partner built a smartphone application, linking pictures, text descriptions and GPS location data to explain the history, architecture and amenities of individual buildings on the U.C.L.A. campus as users walk by.

Mr. Calleros is applying to college and plans to major in computer science. His teacher, Mr. Landa, pointing to the new high school curriculum, said: “It’s small and we’re just under way, but I think we’re going in the right direction here.”
http://www.nytimes.com/2009/12/21/te...y/21nerds.html





Most People Say No to Slow Online Video
Don Reisinger

About 81 percent of Web users leave an online video page if they encounter mid-stream rebuffering, a new study from video analytics firm TubeMogul has found.

Rebuffering has become a major issue for most Web users. And even though TubeMogul found that just 7 percent of streaming video is slow-loading, it said Web video still can't quite match TV-quality viewing.

"The technology just isn't there yet to have a TV-like experience," David Burch, marketing director at TubeMogul, said in a statement. "And if it's an advertiser hosting video on a branded site or distributing it across the Web, people are just clicking away when they see that spinning wheel."

TubeMogul conducted its study by sampling 192 streams from leading content delivery networks--Akamai, Limelight, Edgecast, and BitGravity, to name a few. According to TubeMogul, the services it tested "help to power video across thousands of sites." But they aren't quite doing as nice a job as some users had hoped.

TubeMogul found that Limelight performed best of all the services it tested, experiencing slow load times just 4 percent of the time. It was followed, in order, by Panther Networks, Akamai, Edgecast, and BitGravity.

Although slow load times are still a problem on the Web, they're not stopping people from attempting to view streaming content. A recent Nielsen study found that online video viewing was up a whopping 34.9 percent in the last quarter compared with a year earlier. Now the CDNs just need to catch up.
http://news.cnet.com/8301-17939_109-10413989-2.html





DVD Industry Down, But Not Downbeat
Thomas K. Arnold

To outsiders, the DVD business appears to be in the toilet -- so why are home-entertainment honchos smiling?

At the end of the third quarter, DVD sales were running about 13% behind last year, and even a fourth-quarter bump isn't expected to bring that gap down by much once the final numbers for 2009 are tallied.

And yet studio executives aren't nearly as glum as they were a year ago, when DVD sales were running 5% behind the previous year. What gives?

The recession, for one. The closing months of 2008 saw the banking and auto industries all but collapse, the jobless rate skyrocket and home foreclosures go through the roof. DVD sales might have plummeted this past year, but hey, so did everything else.

Even more significant is the realization that the old way of looking at things isn't necessarily the only way, or even the right way. Although DVD sales have fallen steeply, rentals -- spurred by a robust Netflix and by Redbox kiosks -- have risen sharply, with an 8.2% uptick in the first nine months of 2009, according to Rentrak's Home Video Essentials.

Indeed, factor in dramatic gains in Blu-ray Disc sales and digital delivery, and consumer consumption of home entertainment might be at an all-time high -- a point alluded to three months back when the Digital Entertainment Group released those third-quarter numbers. According to DEG's figures, overall consumer home video transactions were up a solid 6.6% from the previous year.

"It's been a roller coaster of a year, but consumer demand for content is as strong as we've seen it," said Ron Sanders, president of Warner Home Video. "Rental activity has gained in this economic environment, which you'd expect, but so have Blu-ray and high-priced catalog sets. And digital continues its very strong growth."

The fact is, DVD is now a mature format -- it will officially become a teenager next year, celebrating its 13th birthday in March -- and much of the catalog that fueled early growth has been exploited. Consumers now own most of what they want to keep in their home libraries and have returned to renting what they merely want to watch, especially with the economy dinging paychecks.

To get them buying again, Hollywood is looking to Blu-ray, which experienced dramatic gains in 2009.

In the first nine months of the year, consumers spent $568 million on Blu-ray Disc purchases, 83% more than in the first nine months of 2008. Some 3.3 million additional players, including PlayStation 3 consoles, were sold in the same time frame, up 13% from the year-ago period.

The fourth quarter has been a watershed moment for the HD discs. Players were readily available for as little as $78 (at Wal-Mart), and catalog Blu-ray Discs could be had for less than $10 and new releases for less than $20.

Much of this was driven at retail, with dealers using the discs as loss leaders as they once did with DVDs. But it also reflects a changing mind-set among the studios, which is that premium pricing simply isn't going to fly. If the masses really are to transition from standard DVD to Blu-ray, studio execs grudgingly concede, the HD discs will have to be priced competitively.

"It's partially a timing thing," said analyst Tom Adams, president of Adams Media Research. "If a generation of Americans hadn't had the Great Depression fear thrown into them, it would be a whole different ballgame. But on the other hand, events like that do change attitudes, and I think it's clear that consumers are seeking out the bargain way of doing things."

To that end, the new focus in Hollywood is to make it as easy as possible for consumers to transition from standard DVD to Blu-ray Disc, with the tool du jour being to offer DVD and Blu-ray Disc in the same package.

As for rentals, studio execs are hoping the recent uptick was a product of the recession and that consumers will return to buying once the economy improves.

"The business is definitely being affected by the recession,' said David Bishop, president of Sony Pictures Home Entertainment. "It has brought about the 'trading down' to lower-cost rentals and subscription services and contributed to the first growth in the rental business in nearly 10 years."

This year, Hollywood reacted with shock at the Redbox kiosk rollout, which brought a total of 22,500 dollar-rental kiosks to supermarkets, drugstores and Wal-Marts. Angry studio execs blamed the cheap rental machines for cannibalizing DVD sales, and by the end of the year they had either taken Redbox to court or cut "sleeping with the enemy" deals to at least share in the spoils.

Looking to 2010, studios are keeping their fingers crossed that the Redbox phenomenon will either be stopped cold by the courts or burn out like a Roman candle once the economy recovers and the lure of Blu-ray gets people back into the habit of buying. There's also optimism about the prospects of 3D.

"We expect to see some level of recovery in 2010 and anticipate another near-doubling of Blu-ray sales again in the coming year," Bishop said. "While this still will not be enough to offset the decline in DVD, the addition of increased digital revenue should spur growth for the industry overall by 2011."
http://www.reuters.com/article/idUST...ertainmentNews





New Film Ignites Debate on Ratings Policy
Brooks Barnes

The romantic comedy “It’s Complicated” arrived at the multiplex on Friday complete with an R rating, ranking it in the same category as “The Texas Chainsaw Massacre” and “Basic Instinct” in the eyes of the Motion Picture Association of America.

But there is no violence in “It’s Complicated,” and the bedroom scenes are decidedly tame by contemporary standards. Instead, the R rating — which experts say could limit the box-office potential of the Universal Pictures film — comes largely from a sequence in which Steve Martin and Meryl Streep smoke marijuana.

Giggles ensue.

The rating has kicked up dust in Hollywood, with movie bloggers posting blistering attacks on the M.P.A.A. for being out of touch. The marijuana lobby is equally miffed. “This is an absurd ruling rooted in old cultural thinking,” said Allen St. Pierre, executive director of the National Organization for the Reform of Marijuana Laws. Universal and Mr. Martin unsuccessfully appealed, seeking a PG-13 rating.

Conservative groups, meanwhile, find themselves in the rare position of cheering the ratings system instead of condemning it. Dan Isett, director of public policy for the Parents Television Council, which also monitors movies, said “It’s Complicated” was a “rare instance” of the board getting a rating correct.

“The last I checked, smoking pot was still illegal, illicit behavior,” he said. “Too often material gets rated lower than it should be.”

Figuring prominently in the brouhaha are other depictions of marijuana in cinema, particularly the scene in the 1980 comedy “9 to 5” showing Dolly Parton, Jane Fonda and Lily Tomlin getting high and raiding the refrigerator. Its rating was PG.

“This demonstrates a real hilarity and inconsistency, especially given how far the medical marijuana movement has come,” said Martin Kaplan, the director of the Norman Lear Center for the study of entertainment and society at the University of Southern California.

The rumpus comes amid informal discussion about tweaking the ratings formula, particularly where R is involved. The M.P.A.A., a trade organization financed by the major studios, has ruminated about dividing the R rating into new categories. Already, the industry refers informally to movies that are “soft R” or “hard R.”

Nancy Meyers, who directed the film, declined to comment, as did Universal and the film’s producers.

But financial forces are at work against any changes. If the difference between a PG-13 and an R rating can be tens of millions of dollars at the box office, the last thing studios want is to slice the pie thinner. “In general, the more child-friendly the rating is — even for movies that might not be aimed at teenagers — the more tickets you sell,” said S. Abraham Ravid, a business professor at Rutgers University who has published many studies on movie economics.

Joan Graves, the chairwoman of the film industry’s Classification and Rating Administration, declined to comment on “It’s Complicated,” citing internal policy barring the public discussion of a specific picture. But she dismissed criticism of her board members.

“They react the way that most people react,” she said. “America is not just two coasts.”

Some in the industry see something deeper at work, arguing that the trade organization is on its best behavior because it has a lame-duck leader in Dan Glickman (who is to step down as chief executive in September) and because Congressional elections will take place next year. The Federal Trade Commission harshly criticized the movie industry this month for inappropriately advertising movies with PG-13 and R ratings to children.

It was not specifically the actual drug use that got “It’s Complicated,” about a divorced woman who has an affair with her remarried ex-husband, into this pickle, according to people with knowledge of how the decision was reached. Instead, the ratings board was concerned about what the movie did not have: a negative consequence for the behavior. (Ms. Graves said that “no scrutiny or outside influence impacts the rating of any film — period.”)

The board, according to these people, thought the scene was uproariously funny and could leave children with a strong message that smoking marijuana is fun. The opposite, of course, could be argued: One way to make young people think that marijuana is uncool is to show the white-haired Mr. Martin, 64, smoking it.

This emphasis on consequences has long been part of how Hollywood has navigated taboo subjects, dating back to the Hays Code era, said Robert Sklar, an emeritus professor of cinema studies at New York University and the author of “Movie-Made America.” “If somebody transgressed — infidelity, alcoholism — they had to pay for it,” he said.

The M.P.A.A. is often accused by conservative groups of “ratings creep,” a loosening of standards as the years go on, and of pandering to the studios, which resist R ratings because they can limit the audience. But “It’s Complicated” may be an example of the reverse.

Ms. Graves said the board has grown more strict about drug use over the last two decades. “In the ’60s and ’70s, drugs were considered fun and recreational, but then parents started to wise up and standards shifted the other way,” she said.

In other words, “9 to 5” was born of a different cultural time.

It is hard to argue, however, that cannabis has become anything but more routine over the years. There are now about 1,000 medical marijuana dispensaries in the Los Angeles area alone, according to city estimates; as a point of reference, there are fewer than 300 Starbucks outposts.
http://www.nytimes.com/2009/12/25/bu...25ratings.html





Avatar’s a BitTorrent Hit, But Fox Plays Down Piracy ‘Threat’
enigmax

Avatar is doing great at the box office so it will come as no surprise that it is also very popular on BitTorrent. However, rather than the usual overreaction to piracy, Fox is playing down the leak and saying it will have a much smaller effect on profits. In the meantime, a lesser-known release blog is thrust into the spotlight.

Set on a moon under siege by humans determined to exploit its resources, the new sci-fi extravaganza ‘Avatar’ from Titanic director James Cameron is the most hotly anticipated film of the year. It comes as no surprise, therefore, that some people want to try and download an illicit copy from the Internet.

Just before the official US release, that became possible. There appear to be several releases of the movie online, and some seem to originate from the same TS (telesync) copy, although without downloading and watching them all, that is hard to verify from the screenshots currently available.

Normally an Internet leak of a movie, particularly one the size of Avatar, leads to furious statements from the studios. Indeed, when a movie is leaked before it hits US theaters, as was the case with productions such as Star Wars Episode III, Wolverine and now Avatar, the FBI usually gets called in. This time things seem a little different.

This weekend a press release began to circulate which quotes Eden Wright, a Fox representative, saying that due to the movie’s availability in 3D at the cinema, “piracy will play a much smaller role in stealing profits from [Avatar] due to the technological hurdles it imposes.”

It’s difficult to disagree with Wright. James Cameron has gone to extraordinary lengths to produce this 3D movie and seeing a blurred copy acquired from the Internet will just ruin the whole experience – people who are prepared to pay will want to see it properly.

Indeed, the figures seem supportive. Avatar pulled in more than $3.5 million at its midnight launch, with 3D viewings accounting for 85% of the gross. On Friday it took $27 million, with 3D accounting for 58% of the gross.

However, there are always those that either don’t have the money or easily succumb to the temptation. These people have been feverishly hitting BitTorrent and according to data collected by TorrentFreak, thus far Avatar has clocked up around 500,000 downloads in just two days.

The press release mentioned earlier also put a lesser-known blog firmly in the spotlight. “James Cameron’s Avatar first appeared on a blog GetTheNew.com,” it read, going on to say that such sites “…now account for as much as 20% of online piracy.”

While the release states that the site is a source of information, those unfamiliar with how Internet piracy works could be forgiven for thinking that somehow GetTheNew was responsible for the leak, but that’s not true.

GetTheNew, which opened just this September, told TorrentFreak that while it may have been the first site to publish the name of the pirate release (which, incidentally, leaked to P2P first, an increasingly common phenomenon), any Google searches the site provided would have come up blank, since the movie had not yet hit public torrent sites. All GetTheNew had published at the time was a review of the movie and links to the relevant pages on IMDb.
http://torrentfreak.com/avatars-a-bi...threat-091221/





Big Sunday Raises 'Avatar' Weekend to $77.35 Million, a Record Debut for December
AP

James Cameron's "Avatar" has jumped out to a faster start than projected.

Distributor 20th Century Fox says Cameron's sci-fi saga did far more business Sunday than the studio had estimated, raising the film's domestic weekend total to $77.35 million. That's $4.35 million more than the studio first forecast.

That also should give a bump to the movie's worldwide total, which has topped $230 million.

Fox still is counting final numbers, but if the figure holds, "Avatar" will go in the record book for biggest December opening ever, a fraction ahead of the $77.2 million for Will Smith's "I Am Legend" two years ago.

But factoring in higher ticket prices today, "I Am Legend" sold more tickets than "Avatar" over opening weekend.
http://www.courant.com/entertainment...0,163777.story





Wired Explains: How 3-D Movie Projection Works
Brian X. Chen

Every few years you’ve probably watched a mainstream movie through a pair of glasses that make creatures, people and explosions pop out of the screen. And if you’ve bought into the massive hype, you were probably lining up this past weekend for James Cameron’s Avatar, which is screening in 3-D.

You might wonder, why can’t more movies be shown in 3-D? It would just take some post-production video rendering and a pair of stereoscopic glasses, right?

Actually, 3-D projection is a lot more complicated — and expensive — than one would think. In anticipation of Avatar, Wired.com paid a visit to Dolby Laboratories in San Francisco to learn about the history of 3-D movie technology leading up to its current state.

Remember those junky glasses, with a blue lens for one eye and a red one for the other? They were tied to a 3-D-imaging method called anaglyph that dates back to the 1950s. With this system, the images on the screen were projected with two color layers superimposed onto one another. When you put on the glasses, each eye sees a separate visual, the red-tinted image through one eye and the blue-tinted one through the other. Your visual cortex combines the views to create the representation of 3-D objects.
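
As a rough illustration of the principle (a sketch, not any studio's actual pipeline), an anaglyph frame can be composed in a few lines of Python by taking the red channel from the left-eye view and the green and blue channels from the right-eye view:

    import numpy as np

    def make_anaglyph(left, right):
        """Compose a red/cyan anaglyph from a stereo pair.

        left, right: HxWx3 uint8 RGB arrays for the two camera views.
        Red/cyan glasses then route the red channel to one eye and the
        green+blue (cyan) channels to the other, recreating the parallax.
        """
        out = np.empty_like(left)
        out[..., 0] = left[..., 0]    # red channel from the left-eye view
        out[..., 1] = right[..., 1]   # green from the right-eye view
        out[..., 2] = right[..., 2]   # blue from the right-eye view
        return out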

Though it may have been impressive at the time, early anaglyph imaging suffered from many issues. The color separation on film was very limited, and thus it was difficult to perceive details in 3-D scenes. Another frequent problem was ghosting, which happened when the image that should be appearing in your left eye would creep over to the right.

And then there’s the screen. Theaters projecting 3-D movies with the anaglyph method had to install silver screens for an ideal viewing experience. That’s because the more reflective screen helped keep the two different light signals separated.

3-D movie technology has come a long way. Anaglyph imaging has improved: Glasses now are typically red and cyan, which, when combined, can make use of all three primary colors, resulting in more realistic color perception.

RealD cinema, currently the most widely used 3-D movie system in theaters, uses circular polarization — produced by a filter in front of the projector — to beam the film onto a silver screen. The filter converts linearly polarized light into circularly polarized light by slowing down one component of the electric field relative to the other, which makes the field appear to rotate. This allows you to move your head naturally without losing perception of the 3-D image, and it eliminates the need for two projectors shooting out images in separate colors. The silver screen, in this case, helps preserve the polarization of the image.
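
In textbook form (a standard optics identity, not anything specific to RealD's hardware), retarding one component of a linearly polarized wave by a quarter cycle turns it into a circularly polarized one:

    \vec{E}(t) = E_0\left[\hat{x}\cos\omega t + \hat{y}\cos\!\left(\omega t - \tfrac{\pi}{2}\right)\right]
               = E_0\left[\hat{x}\cos\omega t + \hat{y}\sin\omega t\right]

The field vector now sweeps out a circle once per optical cycle, which is why a viewer can tilt their head without destroying the left/right separation.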

Dolby’s 3-D system, used for some Avatar screenings, is a little different. It makes use of a special filter wheel installed inside the projector in front of a 6.5-kilowatt bulb. The wheel is divided into two parts, each one filtering the projector light into a slightly different set of red, green and blue wavelengths. The wheel spins rapidly — about three times per frame — so it doesn’t produce a seizure-inducing effect. The glasses that you wear contain passive filter lenses that each pass only one of the two sets of wavelengths, separating the red, green and blue bands intended for each eye.

The advantages of Dolby’s 3-D system? There’s no need for a silver screen, thanks to the built-in color-separation wheel and the powerful bulb right next to it, ensuring a bright picture necessary for 3-D viewing. Also, a mechanism can be adjusted inside the projector to change the projection method from reflection to refraction — meaning theaters can switch between projecting regular movies and 3-D movies.

The cons? The glasses are pricey: $27 apiece, so they’re designed to be washed and reused rather than recycled (which, arguably, is a pro for the environment). Altogether, a Dolby 3-D projection system costs theaters about $26,500, not including the eyewear.
http://www.wired.com/gadgetlab/2009/12/3d-movies/





Snap and Search (No Words Needed)
Miguel Helft

THE world, like the World Wide Web before it, is about to be hyperlinked. Soon, you may be able to find information about almost any physical object with the click of a smartphone.

This vision, once the stuff of science fiction, took a significant step forward this month when Google unveiled a smartphone application called Goggles. It allows users to search the Web, not by typing or by speaking keywords, but by snapping an image with a cellphone and feeding it into Google’s search engine.

How tall is that mountain on the horizon? Snap and get the answer. Who is the artist behind this painting? Snap and find out. What about that stadium in front of you? Snap and see a schedule of future games there.

Goggles, in essence, offers the promise to bridge the gap between the physical world and the Web.

Computer scientists have been trying to equip machines with virtual eyes for decades, and with varying degrees of success. The field, known as computer vision, has resulted in a smattering of applications and successes in the lab. But recognizing images at what techies call “scale,” meaning thousands or even millions of images, is hugely difficult, partly because it requires enormous computing power. It turns out that Google, with its collection of massive data centers, has just that.

“The technology exists and was developed by other people,” said Gary Bradski, a computer vision expert and a consulting professor of computer science at Stanford. “The breakthrough is doing this at scale. There are not many entities that could do that.”

Goggles is not the first application to try to create a link between the physical and virtual worlds via cellphones. A variety of so-called augmented-reality applications like World Surfer and Wikitude allow you to point your cellphone or its camera and find information about landmarks, restaurants and shops in front of you. Yet those applications typically rely on location data, matching information from maps with a cellphone’s GPS and compass data. Another class of applications reads bar codes to link objects or businesses with online information about them.

Goggles also uses location information to help identify objects, but its ability to recognize millions of images opens up new possibilities. “This is a big step forward in terms of making it work in all these different kinds of situations,” said Jason Hong, a professor at the Human Computer Interaction Institute at Carnegie Mellon University.

When you snap a picture with Goggles, Google spends a few seconds analyzing the image, then sends it up to its vast “cloud” of computers and tries to match it against an index of more than a billion images. Google’s data centers distribute the image-matching problem among hundreds or even thousands of computers to return an answer quickly.
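
A toy sketch of that scatter-gather pattern, with invented names and brute-force matching (Google has not published Goggles’ internals), might fan the query’s feature vector out to index shards and keep the closest hit:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def best_match(query, shards):
        """Fan a query descriptor out to index shards; gather the best hit.

        query: 1-D feature vector for the snapped photo.
        shards: list of 2-D arrays, each row one indexed image's descriptor.
        Returns (shard_id, row, distance) of the closest image overall.
        """
        def search(item):
            shard_id, shard = item
            dists = np.linalg.norm(shard - query, axis=1)   # brute-force scan
            row = int(dists.argmin())
            return shard_id, row, float(dists[row])
        with ThreadPoolExecutor() as pool:                  # one worker per shard
            hits = pool.map(search, enumerate(shards))
        return min(hits, key=lambda h: h[2])                # overall best match

A production system would use approximate nearest-neighbour indexes rather than exhaustive scans, but the divide, search, and combine shape is the same.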

Google says Goggles works best with certain categories of objects, including CDs, movie posters, products, wine labels, artwork, buildings and landmarks. It can read business cards and book covers. It doesn’t do so well with trees, cars or objects whose shape can change, like a towel. And it has trouble recognizing objects in less than ideal lighting conditions.

“Today, Google Goggles is limited because it recognizes certain objects in certain categories,” said Vic Gundotra, a vice president at Google in charge of its mobile phone applications. “But our goal is for Goggles to recognize every image. This is really the beginning.”

For now, Goggles is part of the “labs” section of Google’s Web site, which indicates that the product remains experimental. So it is not surprising that it has quirks and flaws.

Goggles had trouble recognizing the San Francisco-Oakland Bay Bridge, for example, when the image was shot with several trees in the way of its suspension span. But it did recognize it when the picture was snapped with fewer obstacles in the way. Shown a picture of a Yahoo billboard taken in San Francisco, Goggles returned search results for Times Square, presumably because of the huge Yahoo billboard there.

But the service can also delight and amaze. It had no trouble recognizing an Ansel Adams photograph of Bridalveil Fall in Yosemite, returning search results for both the image and a book that used that image on its cover. It also correctly identified a BlackBerry handset, a Panasonic cordless phone and a Holmes air purifier. It stumbled with an Apple mouse, perhaps because there was a bright reflection on its plastic surface.

It’s not hard to imagine a slew of commercial applications for this technology. You could compare prices of a product online, learn how to operate that old water heater whose manual you have lost or find out about the environmental record of a certain brand of tuna. But Goggles and similar products could also tell the history of a building, help travelers get around in a foreign country or even help blind people navigate their surroundings.

It is also easy to think of scarier possibilities down the line. Google’s goal to recognize every image, of course, includes identifying people. Computer scientists say that it is much harder to identify faces than objects, but with the technology and computing power improving rapidly, improved facial recognition may not be far off.

Mr. Gundotra says that Google already has some facial-recognition capabilities, but that it has decided to turn them off in Goggles until privacy issues can be resolved. “We want to move with great discretion and thoughtfulness,” he said.
http://www.nytimes.com/2009/12/20/business/20ping.html





About Face with New Recognition Software

An engineering team has developed a face recognition system that is remarkably accurate in realistic situations.

Unlike existing face recognition programs that try to find “optimal” facial features, the new program uses sparse representation. One of the program’s developers, Yi Ma, an associate professor at the University of Illinois, contends that the choice of features is less important than the number of features used.

“Face recognition is not new, but new mathematical models have allowed researchers to identify faces so occluded that it was previously thought impossible,” says Ma.

People learn to recognize tens of thousands of different human faces over a lifetime. Real-world variations such as lighting, background, pose, expression, and occlusion may complicate recognition for humans, but they are incredibly difficult problems for traditional face recognition algorithms to conquer.

Ma’s sparse representation algorithm randomly selects pixels from all over the face, increasing the accuracy of recognition even in cases of disguise, varying expressions, or poor image quality.

The algorithm also increases accuracy by ignoring all but the most compelling match from one subject.
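
In outline, the sparse-representation approach stacks training faces as columns of a dictionary, expresses the test face as a sparse combination of those columns, and names the subject whose columns reconstruct it best. Below is a minimal Python sketch along those lines, using an off-the-shelf L1 solver; the parameter values are arbitrary illustrations, not the authors’ settings:

    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(A, labels, y, n_pixels=300, alpha=0.01):
        """Classify test face y by sparse representation.

        A: (d, n) array, one vectorised training face per column.
        labels: length-n numpy array of subject ids.
        y: length-d test face vector.
        Pixel rows are sampled at random, standing in for the random
        feature selection described above.
        """
        rows = np.random.choice(A.shape[0], n_pixels, replace=False)
        A_s, y_s = A[rows], y[rows]
        # An L1-regularised fit encourages a sparse set of training faces.
        coef = Lasso(alpha=alpha, fit_intercept=False,
                     max_iter=5000).fit(A_s, y_s).coef_
        # Score each subject by how well its coefficients alone
        # reconstruct the test face; the smallest residual wins.
        residuals = {c: np.linalg.norm(y_s - A_s[:, labels == c] @ coef[labels == c])
                     for c in np.unique(labels)}
        return min(residuals, key=residuals.get)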

Experiments using sparse representation support the approach. In an experiment that uses two established databases of faces, the Yale B and the AR, the new face recognition method is remarkably accurate. Applying this approach to the Yale B database shows 98.3% accuracy using mouth-region images. The AR database shows 97.5% accuracy on face images with a sunglasses disguise and 93.5% accuracy with a scarf disguise.

The technology is jointly owned by the University of Illinois and the University of California, Berkeley, and could have applications for personal and corporate use.

“The computer can identify images that the human eye can’t,” says Ma, who sees a future where people can capture someone’s face with their camera phone, upload the image to a web-based service, and have a match sent to them seconds later.
http://futurity.org/science-technolo...tion-software/





Privacy Group Sues DoJ Over 'Digital Strip Search' Data
Kelly Fiveash

A privacy group has filed a lawsuit against the US Department of Justice for allegedly failing to disclose information about the use of devices that capture black 'n' white images of people stripped naked.

The Electronic Privacy Information Center (EPIC) brought the suit against the DoJ in the US District Court for the District of Columbia late last week.

According to EPIC, the Transportation Security Administration confirmed that the "Whole Body Imaging" (WBI) machines were being used in at least one Virginia federal court by the US Marshals Service.

"EPIC submitted a FOIA [Freedom of Information Act] request for information about these devices, including the contracts with the manufacturer of the machines, and information about technical specifications and training materials," said the group in a statement.

"The Marshall Service failed to respond adequately to the request," it claimed.

EPIC had requested the following:

All unfiltered or unobscured images captured using Whole Body Imaging technology

All contracts entered into by the U.S. Marshals Service pertaining to Whole Body Imaging systems, including contracts for hardware, software, or training

All documents detailing the technical specifications of Whole Body Imaging hardware, including any limitations on image capture, storage, or copying

All documents, including but not limited to presentations, images, and videos, used for training persons to use Whole Body Imaging systems

All complaints related to the use of Whole Body Imaging and all documents relating to the resolution of those complaints

All documents concerning data breaches of images generated by Whole Body Imaging technology.

EPIC has published a copy of the filing on its website. The DoJ could not immediately be reached for comment at time of writing.
http://www.theregister.co.uk/2009/12...doj_foia_spat/





Security in the Ether

Information technology's next grand challenge will be to secure the cloud--and prove we can trust it.
David Talbot

In 2006, when Amazon introduced the Elastic Compute Cloud (EC2), it was a watershed event in the quest to transform computing into a ubiquitous utility, like electricity. Suddenly, anyone could scroll through an online menu, whip out a credit card, and hire as much computational horsepower as necessary, paying for it at a fixed rate: initially, 10 cents per hour to use Linux (and, starting in 2008, 12.5 cents per hour to use Windows). Those systems would run on "virtual machines" that could be created and configured in an instant, disappearing just as fast when no longer needed. As their needs grew, clients could simply put more quarters into the meters. Amazon would take care of hassles like maintaining the data center and network. The virtual machines would, of course, run inside real ones: the thousands of humming, blinking servers clustered in Amazon's data centers around the world. The cloud computing service was efficient, cheap, and equally accessible to individuals, companies, research labs, and government agencies.

But it also posed a potential threat. EC2 brought to the masses something once confined mainly to corporate IT systems: engineering in which Oz-like programs called hypervisors create and control virtual processors, networks, and disk drives, many of which may operate on the same physical servers. Computer security researchers had previously shown that when two programs are running simultaneously on the same operating system, an attacker can steal data by using an eavesdropping program to analyze the way those programs share memory space. They posited that the same kinds of attacks might also work in clouds when different virtual machines run on the same server.

In the immensity of a cloud setting, the possibility that a hacker could even find the intended prey on a specific server seemed remote. This year, however, three computer scientists at the University of California, San Diego, and one at MIT went ahead and did it. They hired some virtual machines to serve as targets and others to serve as attackers--and tried to get both groups hosted on the same servers at Amazon's data centers. In the end, they succeeded in placing malicious virtual machines on the same servers as targets 40 percent of the time, all for a few dollars. While they didn't actually steal data, the researchers said that such theft was theoretically possible. And they demonstrated how the very advantages of cloud computing--ease of access, affordability, centralization, and flexibility--could give rise to new kinds of insecurity. Amazon stressed that nobody has successfully attacked EC2 in this manner and that the company has now prevented that specific kind of assault (though, understandably, it wouldn't specify how). But what Amazon hasn't solved--what nobody has yet solved--is the security problem inherent in the size and structure of clouds.
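
The kind of leakage at issue is easy to demonstrate in miniature. The toy Python script below runs two threads in one process (nothing resembling an attack on EC2 or any hypervisor): a "victim" burns CPU during time slots that encode 1-bits, and a "probe" recovers each bit by counting how much work it completes in the same slot. All names and parameters are invented for illustration.

    import threading, time

    SECRET = [1, 0, 1, 1, 0, 1]   # bits the victim will leak
    SLOT = 0.25                   # seconds per bit
    START = time.time() + 0.5     # shared start time for both threads

    def victim():
        while time.time() < START:
            time.sleep(0.001)
        for i, bit in enumerate(SECRET):
            end = START + (i + 1) * SLOT
            while time.time() < end:
                if bit:
                    sum(range(500))       # contend for the CPU on 1-bits
                else:
                    time.sleep(0.001)     # stay quiet on 0-bits

    def probe(counts):
        while time.time() < START:
            time.sleep(0.001)
        for i in range(len(SECRET)):
            n, end = 0, START + (i + 1) * SLOT
            while time.time() < end:
                sum(range(500))           # fixed unit of work
                n += 1
            counts.append(n)              # fewer units done => victim was busy

    counts = []
    t1 = threading.Thread(target=victim)
    t2 = threading.Thread(target=probe, args=(counts,))
    t1.start(); t2.start(); t1.join(); t2.join()
    cut = (min(counts) + max(counts)) / 2         # crude midpoint threshold
    print([1 if n < cut else 0 for n in counts])  # recovered bits

Real cross-VM attacks measure contention on shared caches rather than a Python interpreter's scheduler, but the inference step, deducing a neighbour's activity from your own slowdown, is the same.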

Cloud computing--programs and services delivered over the Internet--is rapidly changing the way we use computers (see Briefing, July/August 2009). Gmail, Twitter, and Facebook are all cloud applications, for example. Web-based infrastructure services like Amazon's--as well as versions from vendors such as Rackspace--have attracted legions of corporate and institutional customers drawn by their efficiency and low cost. The clientele for Amazon's cloud services now includes the New York Times and Pfizer. And Google's browser and forthcoming operating system (both named Chrome) mean to provide easy access to cloud applications.

Even slow-moving government agencies are getting into the act: the City of Los Angeles uses Google's Apps service for e-mail and other routine applications, and the White House recently launched www.apps.gov to encourage federal agencies to use cloud services. The airline, retail, and financial industries are examples of those that could benefit from cloud computing, says Dale Jorgenson, a Harvard economist and expert on the role of information technology in national productivity. "The focus of IT innovation has shifted from hardware to software applications," he says. "Many of these applications are going on at a blistering pace, and cloud computing is going to be a great facilitative technology for a lot of these people."

Of course, none of this can happen unless cloud services are kept secure. And they are not without risk. When thousands of different clients use the same hardware at large scale, which is the key to the efficiency that cloud computing provides, any breakdowns or hacks could prove devastating to many. "Today you have these huge, mammoth cloud providers with thousands and thousands of companies cohosted in them," says Radu Sion, a computer scientist at the State University of New York at Stony Brook. "If you don't have everybody using the cloud, you can't have a cheap service. But when you have everybody using the clouds, you have all these security issues that you have to solve suddenly."

Cloud Crises

Cloud computing actually poses several separate but related security risks. Not only could stored data be stolen by hackers or lost to breakdowns, but a cloud provider might mishandle data--or be forced to give it up in response to a subpoena. And it's clear enough that such security breaches are not just the stuff of academic experiments. In 2008, a single corrupted bit in messages between servers used by Amazon's Simple Storage Service (S3), which provides online data storage by the gigabyte, forced the system to shut down for several hours. In early 2009, a hacker who correctly guessed the answer to a Twitter employee's personal e-mail security question was able to grab all the documents in the Google Apps account the employee used. (The hacker gleefully sent some to the news media.) Then a bug compromised the sharing restrictions placed on some users' documents in Google Docs. Distinctions were erased; anyone with whom you shared document access could also see documents you shared with anyone else.

And in October, a million T-Mobile Sidekick smart phones lost data after a server failure at Danger, a subsidiary of Microsoft that provided the storage. (Much of the data was later recovered.) Especially with applications delivered through public clouds, "the surface area of attack is very, very high," says Peter Mell, leader of the cloud security team at the National Institute of Standards and Technology (NIST) in Gaithersburg, MD. "Every customer has access to every knob and widget in that application. If they have a single weakness, [an attacker may] have access to all the data."

To all this, the general response of the cloud industry is: clouds are more secure than whatever you're using now. Eran Feigenbaum, director of security for Google Apps, says cloud providers can keep ahead of security threats much more effectively than millions of individuals and thousands of companies running their own computers and server rooms. For all the hype over the Google Docs glitch, he points out, it affected less than .05 percent of documents that Google hosted. "One of the benefits of the cloud was the ability to react in a rapid, uniform manner to these people that were affected," he says. "It was all corrected without users having to install any software, without any server maintenance."

Think about the ways security can be compromised in traditional settings, he adds: two-thirds of respondents to one survey admitted to having mislaid USB keys, many of them holding private company data; at least two million laptops were stolen in the United States in 2008; companies can take three to six months to install urgent security patches, often because of concern that the patches will trigger new glitches. "You can't get 100 percent security and still manage usability," he says. "If you want a perfectly secure system, take a computer, disconnect it from any external sources, don't put it on a network, keep it away from windows. Lock it up in a safe."

But not everyone is so sanguine. At a computer security conference last spring, John Chambers, the chairman of Cisco Systems, called cloud computing a "security nightmare" that "can't be handled in traditional ways." At the same event, Ron Rivest, the MIT computer scientist who coinvented the RSA public-key cryptography algorithm widely used in e-commerce, said that the very term cloud computing might better be replaced by swamp computing. He later explained that he meant consumers should scrutinize the cloud industry's breezy security claims: "My remark was not intended to say that cloud computing really is 'swamp computing' but, rather, that terminology has a way of affecting our perceptions and expectations. Thus, if we stop using the phrase cloud computing and started using swamp computing instead, we might find ourselves being much more inquisitive about the services and security guarantees that 'swamp computing providers' give us."

A similar viewpoint, if less colorfully expressed, animates a new effort by NIST to define just what cloud computing is and how its security can be assessed. "Everybody has confusion on this topic," says Peter Mell; NIST is on its 15th version of the document defining the term. "The typical cloud definition is vague enough that it encompasses all of existing modern IT," he says. "And trying to pull out unique security concerns is problematic." NIST hopes that identifying these concerns more clearly will help the industry forge some common standards that will keep data more secure. The agency also wants to make clouds interoperable so that users can more easily move their data from one to another, which could lead to even greater efficiencies.

Given the industry's rapid growth, the murkiness of its current security standards, and the anecdotal accounts of breakdowns, it's not surprising that many companies still look askance at the idea of putting sensitive data in clouds. Though security is currently fairly good, cloud providers will have to prove their reliability over the long term, says Larry Peterson, a computer scientist at Princeton University who directs an Internet test bed called the PlanetLab Consortium. "The cloud provider may have appropriate security mechanisms," Peterson says. "But can I trust not only that he will protect my data from a third party but that he's not going to exploit my data, and that the data will be there five years, or 10 years, from now? Yes, there are security issues that need attention. But technology itself is not enough. The technology here may be out ahead of the comfort and the trust."

In a nondescript data center in Somerville, MA, just outside Boston, lies a tangible reminder of the distrust that Peterson is talking about. The center is owned by a small company called 2N+1, which offers companies chilled floor space, security, electricity, and connectivity. On the first floor is a collection of a dozen black cabinets full of servers. Vincent Bono, a cofounder of 2N+1, explains that these are the property of his first client, a national bank. It chose to keep its own servers rather than hire a cloud. And for security, the bank chose the tangible kind: a steel fence.

Encrypting the Cloud

Cloud providers don't yet have a virtual steel fence to sell you. But at a minimum, they can promise to keep your data on servers in, say, the United States or the European Union, for regulatory compliance or other reasons. And they are working on virtual walls: in August, Amazon announced plans to offer a "private cloud" service that ensures more secure passage of data from a corporate network to Amazon's servers. (The company said this move was not a response to the research by the San Diego and MIT group. According to Adam Selipsky, vice president of Amazon Web Services, the issue was simply that "there is a set of customers and class of applications asking for even more enhanced levels of security than our existing services provided.")

Meanwhile, new security technologies are emerging. A group from Microsoft, for example, has proposed a way to prevent users of one virtual machine on a server from gleaning information by monitoring the use of shared cache memory by another virtual machine on the same server, something that the San Diego and MIT researchers suggested was possible. And researchers at IBM have proposed a new kind of security mechanism that would, in essence, frisk new virtual machines as they entered the cloud. Software would monitor each one to see how it operates and ensure its integrity, in part by exploring its code. Such technologies could be ready for market within two or three years.

But fully ensuring the security of cloud computing will inevitably fall to the field of cryptography. Of course, cloud users can already encrypt data to protect it from being leaked, stolen, or--perhaps above all--released by a cloud provider facing a subpoena. This approach can be problematic, though. Encrypted documents stored in a cloud can't easily be searched or retrieved, and it's hard to perform calculations on encrypted data. Right now, users can get around these problems by leaving their information in the cloud unencrypted ("in the clear") or pulling the encrypted material back out to the safety of their own secure computers and decrypting it when they want to work with it. As a practical matter, this limits the usefulness of clouds. "If you have to actually download everything and move it back to its original place before you can use that data, that is unacceptable at the scale we face today," says Kristin Lauter, who heads the cryptography research group at Microsoft Research.
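
To make the tradeoff concrete, here is a minimal sketch of that conventional approach in Python, using the widely available open-source cryptography package (the record contents and the upload step are hypothetical placeholders). The provider stores only opaque bytes -- which is exactly why nothing useful can be done with them in the cloud:

from cryptography.fernet import Fernet

# The key is generated and kept on the user's own machine, never uploaded.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"patient 4711: HbA1c 5.4%"     # stands in for a real local file
ciphertext = cipher.encrypt(record)
# upload("records.enc", ciphertext)      # hypothetical: the cloud sees only opaque bytes

# The catch Lauter describes: to search or compute, the data has to come
# back down and be decrypted locally first.
plaintext = cipher.decrypt(ciphertext)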

Emerging encryption technologies, however, could protect data in clouds even as users search it, retrieve it, and perform calculations on it. And this could make cloud computing far more attractive to industries such as banking and health care, which need security for sensitive client and patient data. For starters, several research groups have developed ways of using hierarchical encryption to provide different levels of access to encrypted cloud data. A patient, for example, could hold a master key to his or her own electronic medical records; physicians, insurers, and others could be granted subkeys providing access to certain parts of that information.
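
The cryptographic details differ from proposal to proposal, but the flavor of the idea can be sketched with ordinary key derivation: one master secret deterministically yields independent subkeys, and handing out a subkey unlocks only its own section of the record. Here is a toy Python illustration (again using the cryptography package); it is an analogy for the research schemes, not an implementation of any of them, and all the names are made up:

import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# The patient holds the master secret (in practice, 32+ random bytes).
master = b"patient-held master secret material"

def subkey(section: bytes) -> Fernet:
    # Each record section gets an independent key derived from the master;
    # possessing one subkey reveals nothing about the other sections.
    raw = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=section).derive(master)
    return Fernet(base64.urlsafe_b64encode(raw))

labs = subkey(b"lab-results")      # granted, say, to the physician
billing = subkey(b"billing")       # granted, say, to the insurer

token = labs.encrypt(b"HbA1c: 5.4%")
print(labs.decrypt(token))         # the physician reads labs; the insurer cannot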

Such schemes would make it more practical to work with sensitive data that needs to be encrypted, such as medical records, ensuring that unintended viewers couldn't see it if it were exposed by a hack or a glitch at the cloud provider. "The general theme of cloud computing is that you want to be able to outsource all kinds of functionality but you don't want to give away your privacy--and you need very versatile cryptography to do that," says Craig Gentry, a cryptography researcher at IBM's Watson Research Center in Yorktown, NY. "It will involve cryptography that is more complicated than we use today."

To find and retrieve encrypted documents, groups at Carnegie Mellon University, the University of California, Berkeley, and elsewhere are working on new search strategies that start by tagging encrypted cloud-based files with encrypted metadata. To perform a search, the user encrypts search strings using mathematical functions that enable strings to find matches in the encrypted metadata. No one in the cloud can see the document or even the search term that was used. Microsoft Research recently introduced a theoretical architecture that would stitch together several cryptographic technologies to make the encrypted cloud more searchable.
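
A drastically simplified sketch of that tagging idea fits in a few lines of standard-library Python: keywords are run through a keyed hash (HMAC), the cloud stores only the resulting opaque tags, and a search submits a tag rather than a word. The real research schemes are considerably more sophisticated and leak less than this deterministic toy, and the document names here are invented:

import hmac, hashlib

SECRET = b"user-held tagging key"   # never leaves the user's machine

def tag(keyword: str) -> str:
    # Equal keywords yield equal tags, so the server can match them
    # without ever learning the underlying words.
    return hmac.new(SECRET, keyword.lower().encode(), hashlib.sha256).hexdigest()

# Client side: each encrypted document is uploaded with encrypted metadata.
index = {
    "scan-0417.enc": {tag("oncology"), tag("mri")},
    "invoice-0418.enc": {tag("billing")},
}

# Cloud side: match the encrypted query against the stored tags. Neither
# the documents nor the search term "mri" ever appear in the clear.
query = tag("mri")
print([doc for doc, tags in index.items() if query in tags])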

The problem of how to manipulate encrypted data without decrypting it, meanwhile, stumped researchers for decades until Gentry made a breakthrough early in 2009. While the underlying math is a bit thick, Gentry's technique involves performing calculations on the encrypted data with the aid of a mathematical object called an "ideal lattice." In his scheme, any type of calculation can be performed on data that's securely encrypted inside the cloud. The cloud then releases the computed answers--in encrypted form, of course--for users to decode outside the cloud. The downside: the process eats up huge amounts of computational power, making it impractical for clouds right now. "I think one has to recognize it for what it is," says Josyula Rao, senior manager for security at IBM Research. "It's like the first flight that the Wright Brothers demonstrated." But, Rao says, groups at IBM and elsewhere are working to make Gentry's new algorithms more efficient.
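
Gentry's lattice construction is far too involved for a worked example, but the basic phenomenon -- arithmetic on ciphertexts -- already shows up, in a limited way, in classic cryptosystems. Textbook RSA, for instance, is homomorphic for multiplication: multiply two ciphertexts and you get the ciphertext of the product. A toy Python sketch with deliberately tiny, insecure parameters:

# Textbook RSA is multiplicatively homomorphic: E(a) * E(b) = E(a * b) mod n.
# Fully homomorphic encryption extends this idea to arbitrary computation.
p, q = 61, 53                        # toy primes; real keys use 1024+ bits
n = p * q
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def enc(m): return pow(m, e, n)      # encrypt before sending to the cloud
def dec(c): return pow(c, d, n)      # decrypt after the answer comes back

a, b = 7, 3
blind_product = (enc(a) * enc(b)) % n   # the cloud multiplies ciphertexts only
print(dec(blind_product))               # 21, computed without ever seeing 7 or 3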

Risks and Benefits

If cloud computing does become secure enough to be used to its full potential, new and troubling issues may arise. For one thing, even clouds that are safe from ordinary hackers could become central points of Internet control, warns Jonathan Zittrain, the cofounder of Harvard's Berkman Center for Internet and Society and the author of The Future of the Internet--and How to Stop It. Regulators, courts, or overreaching government officials might see them as convenient places to regulate and censor, he says.

What's more, cloud providers themselves could crack down on clients if, say, copyright holders apply pressure to stop the use of file-sharing software. "For me," Zittrain says, "the biggest issue in cloud security is not the Sidekick situation where Microsoft loses your data." More worrisome to him are "the increased ability for the government to get your stuff, and fewer constitutional protections against it; the increased ability for government to censor; and increased ability for a vendor or government to control innovation and squash truly disruptive things."

Zittrain also fears that if clouds dominate our use of IT, they may turn into the kinds of "walled gardens" that characterized the Internet in the mid-1990s, when companies such as CompuServe, Prodigy, and AOL provided limited menus of online novelties such as news, e-commerce, and e-mail to the hoi polloi. Once people pick a cloud and applications they like, he says--Google Apps, for example--they may find they have limited access to great apps in other clouds, much as Facebook users can't network with people on MySpace.

But such concerns aren't stopping the ascendance of the cloud. And if cloud security is achieved, the benefits could be staggering. "There is a horrendous amount of computing and database management where cloud computing is clearly relevant," says Harvard's Dale Jorgenson. Imagine if today's emerging online repositories for personal health data, such as Google Health and Microsoft HealthVault, could link up with the growing number of electronic records systems at hospitals in a way that keeps private data protected at all times. The resulting medical megacloud could spread existing applications cheaply and efficiently to all corners of the medical profession. Doctors could easily compare patients' MRI scans, for example, with those of other patients around the country, and delve into vast databases to analyze the efficacy of treatments and prevention measures (see "Prescription: Networking," November/December 2009). "The potential there is enormous, because there are a couple of transformations that may occur in medicine in the near future from vast collections of medical records," says Ian Foster, a computer scientist who leads the Computation Institute at Argonne National Laboratory and the University of Chicago. Today, he points out, individuals are demanding access to their own medical information while medical institutions seek new sources of genomic and other data. "The two of those, together, can be powered by large-scale sharing of information," he says. "And maybe you can do it in the cloud. But it has particularly challenging security problems."

This isn't the first time a new information technology has offered profound benefits while raising potentially intolerable security risks. The advent of radio posed similar issues a century ago, says Whitfield Diffie, one of the pioneers of public-key cryptography, who is now a visiting professor at Royal Holloway College at the University of London. Radio was so much more flexible and powerful than what it replaced--the telegraph--that you had to adopt it to survive in business or war. The catch was that radio could be picked up by anyone. In radio's case, fast, automated encryption and decryption technologies replaced slow human encoders, making it secure enough to realize its promise. Clouds will experience a similar evolution. "Clouds are systems," says NIST's Peter Mell. "And with systems, you have to think hard and know how to deal with issues in that environment. The scale is so much bigger, and you don't have the physical control. But we think people should be optimistic about what we can do here. If we are clever about deploying cloud computing with a clear-eyed notion of what the risk models are, maybe we can actually save the economy through technology."
http://www.technologyreview.com/web/24166/





Obama to Name Howard Schmidt as Cybersecurity Coordinator
Ellen Nakashima

Seven months after President Obama vowed to "personally select" an adviser to orchestrate the government's strategy for protecting computer systems, the White House will name a former Bush administration official to the job Tuesday.

Howard A. Schmidt, who was a cyber-adviser in President George W. Bush's White House, will be Obama's new cybersecurity coordinator, an administration official said Monday night.

Schmidt declined to comment.

The mission is challenging: to coordinate cybersecurity policy across the federal government, from the military to civilian agencies. Schmidt's appointment comes as the Pentagon works to get a major new "cyber-command" unit up and running and the Department of Homeland Security works to improve protection of civilian networks.

In May, Obama declared the nation's digital networks a "strategic national asset" and said protecting them would be a "national security priority." Creating a White House cybersecurity office, run by a senior White House official, would be key to that effort, he said. "I'll depend on this official in all matters relating to cybersecurity, and this official will have my full support and regular access to me as we confront these challenges," Obama said from the East Room.

But his remarks were undercut by internal tension over how much authority the "cyber-czar" would have and to whom the official would report. White House economic adviser Lawrence H. Summers insisted that the new coordinator report to him as well, arguing that cybersecurity is also a matter of national economic security, sources said. The new coordinator, who does not require Senate confirmation, will report to deputy national security adviser John O. Brennan and will "work closely with and collaborate with" the economic council on cyber-issues, the administration official said, speaking on the condition of anonymity because the choice was not yet official.

Schmidt was chosen after a long process in which dozens of people were sounded out. Many declined the post, largely out of concern that the job conferred much responsibility with little true authority, some of them said.

The cybersecurity chief at the National Security Council, Christopher Painter, has served as the de facto coordinator, trying to push ahead the 60-day cyberspace policy review plan unveiled by Obama in May. That plan's formulation was led by Melissa Hathaway, who resigned in frustration in August after delays in naming a cyber-coordinator.

Schmidt served as special adviser for cyberspace security from 2001 to 2003 and shepherded the National Strategy to Secure Cyberspace, a plan that then was largely ignored. He left that job also frustrated, colleagues said.

The administration official lauded Schmidt's "unique background and skill sets" as readying him for the job. Schmidt's résumé reflects experience in the private sector, law enforcement and government.

Before he joined the Bush White House, he worked as chief security officer at Microsoft. He then became vice president and chief information security officer at eBay. He served in the Air Force from 1967 to 1983 in various roles, both active-duty and civilian, and headed the computer exploitation team at the FBI's National Drug Intelligence Center in the 1990s.

He is now president of the Information Security Forum, a nonprofit consortium of corporations and public-sector organizations working to resolve cybercrime and cybersecurity issues.

"He has many of the qualities and connections that one would think would be good for the position," said a colleague who spoke on the condition of anonymity in order to be candid. "He is a team player. I don't have high expectations for that position as it is currently defined, so he's very possibly overqualified for it."

Staff researcher Eddy Palanzo contributed to this report.
http://www.washingtonpost.com/wp-dyn...122103055.html





White House Picks New Cyber Czar
AP

Howard A. Schmidt, longtime computer security executive, tapped for post

After months of wrangling and delays, President Barack Obama has chosen a national cyber security coordinator to take on the formidable task of organizing and managing the nation's increasingly vulnerable digital networks.

Obama has tapped Howard A. Schmidt, a longtime computer security executive who worked in the Bush administration and has extensive ties to the corporate world, according to a senior White House official, who spoke on condition of anonymity because the announcement will not be made until Tuesday.

Schmidt's selection comes more than 10 months after Obama declared cyber security a priority and ordered a broad administration review.

The official said Obama was personally involved in the selection process and chose Schmidt after an extensive search because of his unique background and skills. Schmidt will have regular and direct access to the President for cybersecurity issues, the official said.

Obama released the findings of the cyber security review nearly seven months ago, vowing that the White House will name a cyber coordinator to deal with one of the "most serious economic and national security challenges we face as a nation."

On the back burner?

Corporate computer security leaders have openly expressed frustration with the White House as movement on the job post stalled and questioned the administration's claims that the issue is a priority.

At the same time, cyber experts and potential job candidates have complained that the position lacks the budgetary and policy making authority needed to be successful. Schmidt will report to the National Security Council and closely support the National Economic Council on cyber issues.

"From the industry's perspective, a lot of people are starting to think that other pressing matters in Afghanistan and other issues put this on a back burner," said Roger Thornton, chief technology officer for Fortify Software, and a cyber security expert. "If it is, that's understandable but depressing."

Schmidt's selection suggests that economic and business interests in the White House held more sway in the selection process. Schmidt, president and CEO of the Information Security Forum, a nonprofit international consortium that conducts research in information security, has served as chief security officer for Microsoft and as cyber security chief for online auction giant eBay. He was reportedly preferred by NEC director Lawrence Summers.

Thornton praised the choice of Schmidt, saying the coordinator has to be strong on many different dimensions.

He said Schmidt understands the technology, has broad management experience and also has worked well within the political arena, a key requirement for the White House post.

"I think he would be able to get people to compromise and move things forward," said Thornton.

Hacker threats

U.S. government computer systems are constantly under assault, and are being attacked or scanned millions of times a day. Hackers and cyber criminals pose an expanding threat, using increasingly sophisticated technologies to steal money or information, while nation-states probe for weaknesses in order to steal classified documents or technology or destroy the networks that run vital services.

The nation's cyber security vulnerabilities have been underscored in recent months by a number of high-profile assaults, including ones that breached a high-tech fighter jet program and the electrical grid, although no classified material was compromised. Early last month, unknown hackers knocked a number of U.S. and South Korean government Web sites offline in a widespread and unusually resilient computer attack.

Schmidt, considered an expert in computer forensics, has had a roughly 40-year career that includes 31 years in local and federal government service, including a stint as vice chairman of President George W. Bush's Critical Infrastructure Protection Board. He also was for a short time an adviser to the FBI and worked at the National Drug Intelligence Center.

'Cyber peasant'

Congress members, business leaders and cyber security experts have called for a more coordinated effort by the federal government to monitor and protect U.S. systems and work with the private sector to ensure that transportation systems, energy plants and other sensitive networks are equally protected.

Obama pledged that the new cyber coordinator will have "regular access" to the Oval Office. But critics worry that the coordinator will be mired in bureaucracy.

Rather than being a cyber czar, the person will be more of a "cyber peasant," said James Lewis, a cybersecurity expert and senior fellow at the Washington-based Center for Strategic and International Studies. "A lot depends on who the person is, but it's not a top tier position, not someone who reports directly to the president."

In a letter to Obama, TechAmerica — which represents about 1,500 companies — said the naming of a coordinator was urgently needed since "those that would seek to harm America by exploiting our digital infrastructure continue to increase their efforts."

With his administration struggling to pass a controversial health care overhaul, deal with a troubled and unpopular war in Afghanistan, and revive the stumbling economy, Obama has had little time to focus on what has become a cyber quagmire.
http://www.msnbc.msn.com/id/34517252...more_politics/





Verizon Snuffs Google for Microsoft Search

Forces Bing on browser box
Cade Metz

Verizon has unilaterally updated users' Storm 2 BlackBerries and other smartphones so that their browser search boxes can only be used with Microsoft Bing.

The move is part of the five-year search and advertising deal Verizon signed with Microsoft in January for a rumored $500m.

Verizon pushed the search change over its network two days ago, the company has confirmed with The Reg. "We're a proud supporter of Microsoft's Bing search engine," a company spokesman tells us. "On a couple of select smartphones (Storm 2 the most prominent), we've changed the [Verizon Wireless]-supplied web menu to make Bing the default search engine."

Previously, the search box - baked into the top of Verizon's browser, above the URL address bar - could be set to search Google, Wikipedia, and other sites.

Naturally, such sites can still be queried via the browser proper. But countless users are up in arms over the switch. A discussion thread dedicated to the change at CrackBerry (http://forums.crackberry.com/f71/bin...ngine-386198/), a popular BlackBerry user site, is now 36 pages long.

"This frustrates the heck out of me. On the phone with VZW right now. The rep is telling me that she can choose search options from her non-Storm phone, so she's off to get a Storm to find out what the deal is. Will post results. Grrrrrrrrrrrrrrrr," writes one user.

A sea of similar comments has also appeared on Verizon's web forums (http://community.vzw.com/t5/BlackBer...g/td-p/133060). "Yesterday, all of the search providers that used to be available through the browser disappeared and bing is the only option. I hate bing. I no longer am able to search using Google, Dictionary.com, or Wikipedia from the 'Go to...' page on my browser. This is a very poor decision...to take choice away from their users," the first post says.

"SOMEONE is pushing this change to Blackberry users without notification and without giving the option to refuse this change. If this has happened to you, please call Verizon and inform them. I really want my choices for search back. Not only because I hate bing, but because taking choices away from customers is just a really **bleep** thing to do."

Verizon and Microsoft have an existing relationship. In January, the two signed a five-year search and ad deal rumored (http://www.informationweek.com/news/...leID=212701405) to be worth $500m. When we asked Microsoft to comment on the Verizon search-box switch, it referred us to a January blog post (http://www.bing.com/community/blogs/...near-you.aspx).

"Verizon Wireless subscribers in the U.S. will be able to use Live Search on their mobile devices to find information on local business and shopping information, access maps and directions, find ringtones and other mobile products and services," the company said at the time. "This partnership will give Verizon Wireless customers great search results and provide targeted, relevant mobile advertising to enhance the overall mobile computing experience."

When we asked Google for comment, a spokeswoman said: "We're passionate believers in competition that's good for users. We're committed to working with industry leaders to provide the best user experience possible and develop innovative products and services."

It should be said, however, that according to press reports, Google was in talks with Verizon over a similar search deal before the Microsoft pact was finalized. Google and Verizon have since agreed to a deal (http://www.theregister.co.uk/2009/10...e_and_verizon/) that involves the two companies jointly developing Android phones for the carrier's network.

Meanwhile, press reports indicate (http://www.theregister.co.uk/2009/12...ephone_exists/) that Google intends to sell its own Android phone in the New Year. Google has confirmed the existence of a Google-built Android phone, and this device is built for GSM networks - i.e. not Verizon's.
http://www.theregister.co.uk/2009/12...ogle_for_bing/





Our 2009 12-City 3G Data Mega Test: AT&T Won
Wilson Rothman

Given carrier reputation and our own iPhone call drops, we were pretty surprised to discover, through careful testing in 12 markets, that AT&T has pretty consistently the fastest 3G network nationwide, followed closely—in downloads at least—by Verizon Wireless.

Let's get this straight right away: We didn't test dropped voice calls, we didn't test customer service, and we didn't test map coverage by wandering around in the boonies. We tested the ability of the networks to deliver 3G data in and around cities, including both concrete canyons and picket-fenced 'burbs. And while every 3G network gave us troubles on occasion, AT&T's wasn't measurably more or less reliable than Verizon's.

It was measurably faster, however, download-wise, in 6 of the 12 markets where we tested, and held a significantly higher national average than the other carriers. Only Verizon came close, winning 4 of the 12 markets. For downloads, AT&T and Verizon came in first or second in nine markets, and in whatever location we tested, both AT&T and Verizon 3G were consistently present. If you're wondering about upload speeds, AT&T swept the contest, winning 12 for 12.

The Cities

Last year, we did an 8-city coast-to-coast test, and called Sprint the big winner. This year, we have results from 11 cities coast-to-coast, and even got to test (during what was otherwise vacation time) on the Hawaiian island of Maui. Also, unlike last year, we were able to test T-Mobile's new 3G network, active in all the markets we visited (except, at the time, Maui). For being such a latecomer, T-Mo did well, and the numbers show even more promise from them.

We tried to spread the love around this year, geographically, hitting cities we didn't get to last year (at the cost of losing a few from '08). Besides Maui, we hit Atlanta, Chicago, Denver, Las Vegas, Los Angeles, New York, Phoenix, Portland, Seattle, San Francisco/Bay Area and Tampa.

The Methodology

Our testing regimen was based on the same scheme as last year: We picked five locations in each city, including at least one "downtown" location and at least one that was considered a suburb. The selections were arbitrary but logical—landmarks, residences, etc. (Note: Due to timing constraints, Chicago and Maui only had three test locations.)

Our hardware consisted of two identical stripped-down Acer Timeline laptops running Windows Vista, and four 3G wireless modems requested from the carriers. We allowed the carriers to choose the hardware, simply asking for their "best performing" model. Once everything was up and running, here are the tests we ran:

• Bandwidth & Latency: Speedtest.net - Reports upload and download bandwidth in megabits per second, as well as ping latency in milliseconds. We performed this test five times at each location on each modem.

• Pageload: Hubble images at Wikimedia - A 4.42MB web page with 200 4KB thumbnails, it was fully reloaded three times, and timed using the Firefox plug-in YSlow. The three time readings were averaged.

• Download: Wikimedia's Abell 2667 galaxy cluster photo - This single 7.48MB JPEG is a clear test of how fast you can download stuff from the cloud, and again, we hard refreshed this file three times, and measured time using YSlow for an accurate human-error-free reading.
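
For anyone who wants to sanity-check the math, the conversion from timed downloads to bandwidth figures is simple arithmetic; here is a quick Python sketch (the timing samples are invented examples, not our test data):

# Average repeated YSlow timings and convert a timed download into
# megabits per second. The sample timings below are hypothetical.
FILE_MB = 7.48                      # the Abell 2667 JPEG, in megabytes

timings_s = [21.4, 19.8, 23.1]      # three hard-refresh downloads, in seconds
mean_s = sum(timings_s) / len(timings_s)
mbps = FILE_MB * 8 / mean_s         # megabytes -> megabits, per second

print(f"{mean_s:.1f} s average, {mbps:.2f} Mbps effective throughput")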

This was a test of 3G performance. Even though Sprint and its tech partner Clearwire have intrepidly released 4G networks in half of the tested markets—Atlanta, Chicago, Las Vegas, Maui, Portland and Seattle—we only tested Sprint's 3G network. The reason should be obvious: While we performed the test with laptop cards on PCs, it's supposed to serve as a test of the network's ability to deliver service to all devices, including smartphones, dumbphones and laptops. Show us a Palm Pre WiMax edition—better yet, sell 100,000 of them—and then we'll switch it up. And while you may argue that this 3G test still doesn't adequately reflect your experience with your iPhone, at least it's the same network, and may serve to rule out AT&T's data pipe as the independent cause for all those infamous dropped calls.

(On a side note, when multiple carriers release 4G networks, we'll definitely conduct a comparative test of them all, using new parameters, and focused around laptop use.)

The Results

Now that you know how we ran the test, here are the top finishers in each market, plus some pretty bar graphs showing you how bandwidth compares.

Though we tested for uploads and downloads, we focused our additional tests on the downstream, as it's the more important direction, in the minds of most consumers and most carriers. The anomaly there is AT&T, which has dramatically good upload bandwidth, even when its download bandwidth doesn't keep up. Fast uploads are a priority for AT&T, and will soon be for T-Mobile, which recently turned on faster uploading in NYC, as you can see in our test results. Meanwhile, although Verizon technically came in second in uploads as well as downloads, it doesn't seem to treat this as a major priority.

When it came to downloads, though, the competition was markedly stiffer:

Atlanta - AT&T, followed by Verizon
Bay Area/San Francisco - AT&T, followed by Verizon
Chicago - AT&T, followed by Verizon then Sprint
Denver - AT&T, followed by Verizon
Las Vegas - Verizon, followed by AT&T
Los Angeles - AT&T, followed by Sprint
Maui - Verizon, followed by AT&T
New York - AT&T, followed by T-Mobile
Phoenix - Verizon, followed by T-Mobile
Portland - T-Mobile, followed by Verizon
Seattle - Verizon, followed by T-Mobile
Tampa - Sprint, followed by AT&T

Is That The End?

No. We've compiled the following gallery with all the data from each test location in the 12 markets, so you can see on a neighborhood-by-neighborhood level who won what. This also includes latency, pageload and download numbers, so you can track the performance in several ways. (The data above is bandwidth, though as you'll see, that was generally representative of the overall performance. If a carrier was tops in bandwidth, it was usually tops in download time.) These tests are all just "snapshots in time," as the carriers like to say, so feel free to bitch about where your experience doesn't reflect our results. We stand by them, but acknowledge that network performance is changing all the time, and experiences very regular hiccups.

Regarding latency: though it didn't appear to affect actual user experience—3G isn't really up for Modern Warfare 2, if that's what you're thinking—we will gladly show you latency averages, as well as pageload and file download averages, broken out for every market on the test.
http://gizmodo.com/5428343/our-2009-...a-test-att-won





Pupils 'Bypassing School Internet Security'
Iain Mackenzie

Many young people are using 'proxy servers' to get round their schools' internet security systems. The free services offer instant access to banned websites, including online games and social networking. Figures suggest the use of proxies has risen sharply in recent years. Security experts are warning that pupils who log on put themselves at risk of cyber crime.

It sounds like an obscure, techy area of computing that only geeks would know about.

But when we asked pupils in one secondary school classroom who had heard of proxy servers, every hand went up.

These 'secret tunnels' to the internet are a way of life for teenagers across the UK.

As schools employ increasingly sophisticated software to stop them accessing 'non-educational' websites, the proxies offer a quick, easy way to bypass those restrictions.

“It's just a box that says 'type in the website that is blocked'. You type it in and it brings it up,” said a senior pupil, who wanted to remain anonymous.

Web-based proxy servers disguise a user's activity from school monitoring software.

'Cat-and-mouse' game

A student will appear to be visiting only one site, that of the proxy itself. Any internet surfing they do after that is effectively invisible.
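
Part of what makes these services so hard to stamp out is how little code one takes. A bare-bones sketch in Python (using the third-party Flask and requests packages, for illustration only) shows the principle: to the school's filter, every page fetched this way looks like a visit to the proxy's own address.

import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def fetch():
    # The "box that says 'type in the website that is blocked'": the page
    # is fetched server-side, so the school network only ever sees a
    # connection to the proxy's own domain.
    target = request.args.get("url", "")
    if not target.startswith(("http://", "https://")):
        return "Type in the website that is blocked", 400
    return requests.get(target, timeout=10).text

if __name__ == "__main__":
    app.run()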

Ironically, what pupils are usually trying to work around are other types of proxy servers, commonly used in schools to protect their machines from online threats.

Schools can take action against web-based proxies by blocking them. However, because of the sheer number that exist, it becomes a game of cat and mouse.

“When the blocked proxy server you were using got banned, you had another one ready and everyone had at least four that they knew and everyone shared them about,” said one pupil.

Statistics on the use of proxy servers are hard to come by. One useful measure is the number of them being flagged up by proxy-blocking systems.

'Security risk'

M86 Security monitors such sites. In 2006 it was tracking 7,111 proxies. By 2009 that had risen to 91,490.

There are fears that the use of proxy servers amongst school pupils may be putting more than their education at risk.

Some can carry viruses, malicious software, and may even be under the control of cyber criminals, according to security experts.

Con Mallon from Symantec carried out a scan on a random selection of free proxies.

“There is a site which is hosting what we call a trojan. It may invite you to install some software onto your machine.

“Once that is installed, it allows the bad guys to come back to your machine at any time... what they would probably then do is install something called a keylogger.

“It will sit there and monitor what you are typing-in. What they are really looking for is passwords and logins.”

Proxy 'problem'

Such information, once harvested, can be sold online.

Amongst our school pupils, the news comes as a surprise: “I didn't have a clue that people could get my information if I was putting it in,” said one girl.

“I kind of had an idea that it stays there. But I didn't know it was that big – that it could log everything,” added her classmate.

The problem of proxies is recognised by most of the bodies responsible for providing schools with internet access.

A spokesman for JANET, which carries data traffic between many local school networks, said: "I would agree that [the use of] proxy servers to get around security systems is indeed a problem.

“Technical solutions need to be used as one aspect of a wider approach to protecting users, including educating children, teachers, and parents in how to use the web safely.”

BECTA, the government agency that looks after school ICT, said: “...currently there is no single technology or method that can address this issue fully”.

However, it seems newer security systems are helping schools catch up with the proxy users.

Most of the young people interviewed for this article agreed that monitoring systems had made it harder for them to find usable proxies, but said their efforts to get round school security would continue.
http://news.bbc.co.uk/newsbeat/hi/te...0/10003579.stm





OLPC Unveils Roadmap, Plans Tablet for 2012
Chloe Albanesius

The One Laptop Per Child (OLPC) initiative on Tuesday outlined its product roadmap for the next three years – a plan that includes the release of a tablet-based OLPC by 2012.

First up will be the XO 1.5, a $200 laptop that will be available in January 2010. By early 2011, OLPC is looking to upgrade that to the XO 1.75, which will include an 8.9-inch touchscreen for $150 or less, before finally introducing the tablet-based XO 3.0 in 2012 for less than $100.

Though its focus is admirable – providing $100 laptops to children in developing nations – the OLPC program has struggled to catch on. A presence on Amazon.com late last year expanded its reach, but the group kicked off 2008 by losing the support of Intel and then cut its staff by 50 percent in January 2009.

At the time of the layoffs, OLPC said it would refocus its mission on the development of second-generation laptops.

To that end, the XO 1.5 will have the same design as the existing XO 1.0 laptop. It will include a VIA processor, which will replace AMD, and OLPC promises twice the speed and four times the DRAM and Flash memory. It will run Linux and Windows.

The XO 1.5 will be available in January 2010 for about $200, depending on the markets for DRAM and Flash, OLPC said.

When the program upgrades to the XO 1.75 in early 2011, the laptop will have "essentially the same industrial design," but it will have a rubber bumper on the outside and include an 8.9-inch touchscreen on the inside.

The XO 1.75 will have an ARM processor from Marvell, which OLPC said will provide twice the speed at a quarter of the power. It is targeted for sale at $150, and will complement the x86-based XO 1.5, which will remain in production so buyers can have a choice in processors.

By 2012, the OLPC will have a "totally different approach," the group said. The inner workings of the device will be similar to the XO 1.75, but the design will be a tablet – a "single sheet of flexible plastic [that] will be unbreakable." It will have a small loop on the corner for carrying, and is targeted at under $100.

"The first version of OLPC's child-centric laptop, the XO, is a revolution in low-cost, low-power computing. The XO has been distributed to more than 1.4 million children in 35 countries and in 25 languages," Nicholas Negroponte, founder and chairman of OLPC, said in a statement. "To fulfill our mission of reaching 500 million children in all remote corners of the planet, OLPC will continue to innovate in design and performance. Because we are a non-profit, we hope that industry will copy us."
http://www.pcmag.com/article2/0,2817,2357518,00.asp





More on the ‘Bits and Pieces’ $99 CherryPal Africa Netbook
Chris Meadows

The other day, I mentioned the CherryPal Africa, the $99 netbook CherryPal has made for the developing world (but is also selling to consumers in the developed one). I have been looking into that laptop a little further tonight, in the interest of possibly taking $100 of Christmas money and buying one.

Under Linux, such a device would make a great e-book reader for those of us who buy DRM-free books and do not mind "staring at a glowing screen". (And the Windows CE version could run whatever DRM-compatible readers exist for Windows CE, such as MobiPocket and eReader.) With its wifi, it would mean I could download those books directly from anywhere I have wireless access, just as with my iPod Touch.

But unlike my Touch, I could actually type at full speed into the device, meaning that I could not only read but write. Theoretically, of course—assuming the keyboard is not unusably small.

Here are some interesting things I have uncovered.

A Bits and Pieces Approach

How does CherryPal sell so cheaply? Much the same way homebuilders put together their desktop PCs more cheaply than the ones available pre-built: it assembles them out of the least expensive components it can find. In this blog post, Chairman Max Seybold explains:

Quote:
[W]e buy access (sic) inventory, overcapacity, out-of-fashion shells, shells with minor cosmetic flaws, discounted limited batches, and other high quality though discounted components and systems, package them up and sell them under the Cherrypal Africa brand. Bear with me. What this means is that 5 randomly selected people ordering a Cherrypal Africa on any given day theoretically might get 5 different systems, with different configurations. However, their “Africas” would have one particular thing in common, you can browse the Internet, actually pretty fast. […] In other words, what you buy for 99 bucks is Cherrypal’s promise that you will be able to browse the Internet.
This seems to run counter to the principles of interchangeable parts and standardized products that have been with us since the Industrial Revolution. Eli Whitney and Henry Ford must be turning over in their graves.

On the other hand, I must admit it appeals to a certain “hacker ethic” in me, something similar to the ethic perhaps expressed in Cory Doctorow’s Makers (I say “perhaps” because I have not actually read it yet) of making do with what you can scrounge.

It also means that CherryPal can sometimes overbuild the Africa compared to the specifications on its website, depending on what parts are available at the time. Seybold again:

Quote:
We soft-launched the Cherrypal Africa in November and started shipping early December. So far we got nothing but positive feedback from admittedly surprised customers. What we promised was a 7″ laptop, 400 MHz processor, 256 MB RAM, 2GM storage, and what we shipped were 10.2 inch, 1.6 GHz, 160 GB (new) laptops.
The website specs, Seybold explains, are a minimum, a promise you will get at least this. What you actually end up getting may be a roll of the dice.

Menq the Merciless?

Here is an article from WindowsForDevices.com looking at the Africa and its big brother, the Bing (no relation to Microsoft), in more detail than I have been able to find in other places.

The article notes that CherryPal's practice of using whatever CPU it can get its hands on means the device may end up coming with an x86, MIPS, or ARM processor architecture.

This would mean the operating system CherryPal puts on it (Windows CE or CherryPal’s own custom “Green Maraschino” Linux distribution) has to be compiled three separate times. That has to be a standardization headache.

It also points out that the Africa seems to be similar in every respect (save for having slightly better specs) to the Menq EasyPC E790, a Chinese netbook sold for $80. (I covered a similar $100-range laptop from Chinese manufacturer HiVision last year.)

But it seems next to impossible to find the Menq EasyPC for sale anywhere; I certainly did not find a vendor in Google. What I did find was that $80 is apparently the wholesale price. One commenter on an Engadget post said:

Quote:
You have to buy it in bulk by directly contacting [Menq]. And by bulk, I mean, say, 5000 units at $80 or so each. So unless you have a half-million to play with, I doubt you’d be getting your hands on one soon.
Any vendor who bought an EasyPC at $80 wholesale would probably end up doubling the price to sell it retail. On the other hand, CherryPal is selling the Africa for $99 retail, and it has better (potentially much better) specs than the EasyPC to begin with.

The Price Gap

In the interest of comparison shopping, I have been looking at other netbook prices on sites such as NewEgg.com. The lowest-priced netbooks have better specs than CherryPal promises (though about the same as the ones Seybold says they shipped at launch) and cost $250—and those are open-box. The new ones start at $290 and go up.

CherryPal’s mix-and-match, bits-and-pieces approach to netbook-building seems to be very economically beneficial, even if it does make buying the devices a bit of a crapshoot.

I will be trying to get my hands on a review unit to see if I can review it for TeleRead. If not, I may just end up buying one and reviewing that. $100 is really not all that much to pay for a two-pound portable web access device. On the other hand, it is often true that you get what you pay for.
http://www.teleread.org/2009/12/21/m...frica-netbook/





Memory Chip Shortage Seen in H2 2010: DRAMeXchange

Memory chips for computers are likely to be in short supply by the second half of next year as consumers demand more capacity and companies embark on a delayed drive to replace PCs, industry tracker DRAMeXchange believes.

Prices for DRAM chips, the most common type of computer memory, have stabilized over the past two months after rising for most of the year as recession-struck chipmakers slashed capacity and capital spending, causing shortages.

DRAMeXchange forecast on Thursday that shipments of PCs would rise 13 percent next year, driven by notebooks, with 22.5 percent growth to 160 million units, and pared-down netbooks, set to rise 22 percent to 35 million units.

"DRAM will likely face a serious shortage in 2H10 triggered by the hot PC sales," DRAMeXchange said. "The DRAM price decline will likely be eased in 2Q10. That is, DRAM vendors will have a great opportunity to remain in profit for the whole year." Top U.S. memory chipmaker Micron on Tuesday delivered its first quarterly profit in nearly three years as rising prices lifted sales beyond expectations.

DRAMeXchange forecast that 2010 capital expenditure by DRAM vendors would rise 80 percent from this year's record low to $7.85 billion, rising to $10 billion to $12 billion by 2011 or 2012.

Industry leader Samsung would spend $2.6 billion, it predicted. Fellow Korean chipmaker Hynix said on Thursday it planned 2.3 trillion won ($2.2 billion) in capital expenditure next year.

(Reporting by Georgina Prodhan; Editing by David Cowell)
http://www.reuters.com/article/idUST...technologyNews





Satellite TV to FCC: We're Special, Don't Make Us Open Up

DirecTV says that the new FCC push to bust open video should only apply to cable; satellite is plenty competitive already. Also, a tale of woe from a Comcast subscriber illustrates just why some common video decryption standards are needed.
Nate Anderson

If you've tried to pump your fully-paid-up cable connection into, say, a computer running Windows Media Center, you've probably come up against the closed nature of pay-TV and the severe limitations of CableCARD. And what about satellite TV? Don't even think about it.

The FCC wants to blow open the market for third-party video devices, scrapping some of the current (failed) CableCARD rules and adding satellite providers to the list. The idea has a certain obvious simplicity to it: encourage huge innovation in the video content marketplace by crafting rules that allow third-party boxes to easily access TV shows and on-demand content from both cable and satellite providers (technically, the rules could cover all multichannel video programming distributors, or MVPDs).

But DirecTV has been making the rounds at the Federal Communications Commission (FCC), telling staffers in the various Commissioner offices that Chairman Genachowski's idea for cracking open TV is terrific… so long as it applies only to cable. DirecTV wants an exemption, just like it currently has one from the CableCARD mandate.

The company has filed a recent series of ex parte notices with the FCC that detail these meetings. In them, the company makes a point that we are sympathetic to: "it is not clear that increasing access to Internet content through MVPDs’ set-top boxes will spur broadband adoption."

One of the big goals of the new inquiry on set-top boxes is to spur Internet access. It's driven by the finding that 99 percent of US households have a TV while only 76 percent have a home computer. Make it possible to browse the Web and check e-mail through a set-top box, using the TV set as a screen, and a new set of Americans can suddenly get online. (If this isn't yet conjuring up shades of WebTV [now MSN TV], it should.)

But a scheme that could open up video content would be far more interesting, depending on the final rules. If set-top boxes and third-party devices like PS3s, Xboxes, PCs, Apple TVs, and TiVo could easily access, manipulate, and store cable and satellite programming, the potential for innovation would be enormous.

Rather than scrapping current industry efforts, such as CableCARD and the it's-coming-real-soon-now tru2way middleware program, the cable industry is pushing a plan under which it could continue to use tru2way while satellite could use its own decryption and middleware system. One box would support multiple standards; just take the box anywhere you go in the US, subscribe to any video service, and it should be able to output a signal.

In cable's view, this is just a matter of fairness; if cable needs to open up, so should satellite.

Satellite wants nothing to do with such a plan to open its content, though. Satellite broadcasters were exempted from the CableCARD rules back when they were first proposed on the grounds that the satellite market was more competitive than cable and that there was more competition in the satellite set-top box market. DirecTV says that all these things remain true and that it should continue to be exempt from any future rules on the topic.

A tale of woe

For now, though, consumers will continue to be seriously frustrated by the state of video openness. One agitated Comcast subscriber from Oregon recently filed his tale of woe with the FCC.

You see, our hero had purchased a TV and a few other devices that were capable of tuning unencrypted (ClearQAM) digital cable channels. "I have several ClearQAM-capable devices which I was using to view and record the digital services to which I had subscribed and am still paying to access," he wrote. "While the 'navigation' was problematic (i.e., no fixed mapping between 'cable channel 63' and 'ClearQAM channel 67-3'), it was a minor problem."

Until November 11, 2009, when "without prior notice, Comcast of Oregon began encrypting all digital services except those corresponding to the analog basic services… What remained in the clear were mostly local broadcast and public, education, and government (PEG) channels, with a few shopping channels. Even though I was still paying to access enhanced (digital) services, I was no longer able to use the CPE [customer premises equipment] I had invested in to view those services."

The man could pay for a CableCARD (and pick up a set-top box), but he had "purchased CPE both for enhanced capabilities and to avoid paying 'double'… It is exactly this kind of situation which prompted the Cable Consumer Protection Act of 1992… The clear intent of the legislature was that CPE should not only be allowed but should be promoted in the interests of the consumer and the industry as a whole."

Without the ability to access a TV signal, would the VCR ever have flourished? Would TiVo ever have shown us the incredible power of a DVR?

Now, thanks to the encryption, the man fears that we have "returned to the dark ages, similar to the time when Ma Bell controlled what you could attach to her phone lines… According to Comcast of Oregon, it is sufficient for me to be able to schedule the recording of a show two days in advance (the limit of the on-screen program guide in the full-feature cable box I have been provided), with the additional requirement that I attach the VCR only to that box and must also program that VCR to record that signal. I cannot schedule two different channels to be recorded 'every week' or even 'day after tomorrow'—a task that was trivial using the CPE I have already invested in."
http://arstechnica.com/tech-policy/n...us-open-up.ars





Stern's Threat to Quit Sirius Could be Empty Talk
Deborah Yao

Howard Stern is threatening to leave Sirius XM Radio Inc. now that the shock jock and the satellite radio provider are getting set to enter contract talks in 2010.

That threat probably seems less daunting to Sirius than it once would have. Sirius originally wanted Stern so badly that it gave him the most lucrative radio contract ever, a five-year deal that started in 2006 and paid him $500 million in cash and stock.

Today, he doesn't have many places left to go -- at least if he wants another huge payday.

Free radio stations are struggling with steep drops in advertising and high debt loads, and probably can't pay top dollar to get Stern back to the medium where he began. He also likely would chafe at being censored again after enjoying the freedom of satellite radio, where his racy banter hasn't been subject to federal restrictions on language and content.

He can't switch to another satellite radio provider -- Sirius swallowed the only other one, XM, last year.

So if Stern, 55, does re-sign with Sirius, it's likely to be for less this time around.

Sirius nearly had to file for bankruptcy protection this year and is still trying to reduce costs. The company is feeling the brunt of weak auto sales, which deliver many of its new customers. And it faces new threats from emerging commercial-free rivals such as Internet radio.

For these reasons -- and because Stern has warned other times that he might quit or retire -- his latest threat rings hollow to some analysts.

"It's probably positioning for contract negotiations," said Brett Harriss, an analyst at Gabelli & Co., whose parent Gamco Investors Inc. owns 1.1 million shares of Sirius. "I don't think he would give up his bullhorn."

Sirius' chief executive, Mel Karmazin, told The Associated Press in a recent interview that he will work hard to retain Stern, but the company would not offer more detailed comments. Stern's agent, Don Buchwald, did not respond to requests for comment.

Stern made his name on traditional or "terrestrial" radio. While Sirius mainly makes its money from selling subscriptions, the money that flowed to Stern on traditional radio came from syndication rights. In that setup, radio stations pay companies that distribute programs such as Stern's.

Many of those radio stations have struggled since Stern left the free airwaves, and the recession compounded the problems. In the first nine months of the year, radio advertising revenue fell by 21 percent to $11.8 billion, according to the Radio Advertising Bureau.

Citadel Broadcasting Corp., the nation's third-largest operator of radio stations, filed for bankruptcy protection Sunday. Other big station owners also are wrestling with debts, and the syndication division of the largest station owner, Clear Channel Communications Inc., already is believed to be paying Rush Limbaugh $400 million over an eight-year contract.

"Who else can afford Howard Stern?" Harriss said.

When Stern signed with Sirius, the company trailed XM Satellite Radio Holdings Inc. in the race for customers. It badly needed a marquee name to attract subscribers to its service, which delivers 130 radio channels anywhere in the country for $6.99 a month to $19.99 a month, depending on the package.

Now after buying XM for $3.3 billion, Sirius has 18.5 million subscribers, down slightly from a peak of 19 million at the end of last year. Sirius' radio lineup beyond Stern includes Oprah Winfrey, Martha Stewart, NFL games and Major League Baseball. Half of its channels are music and free of commercials, while the rest air sports, talk shows, news, entertainment, traffic and weather.

The company still has never posted a net profit. Revenue was nearly flat in the last quarter, and Sirius remains pressured to cut costs. Sirius narrowly avoided bankruptcy protection 10 months ago by getting $530 million in financing from Liberty Media Corp. Sirius had to give a 40 percent ownership stake to Liberty, which is controlled by satellite mogul John Malone.

As Sirius tries to get its finances in order, it must cope with threats from emerging technologies, such as Internet radio services that also deliver radio programming without commercials.

The company has been trying to cut costs. Sirius' programming expenses in the past four quarters fell 18 percent from the total paid by Sirius and XM in the previous year, when they were still separate companies. Sirius has eliminated duplicative radio programs since it absorbed XM and found ways to reduce "on-air talent costs."

Given the climate, if Stern returns to Sirius, "he's not going to get $500 million again," said Miller Tabak analyst David Joyce. Robert Eatman, the agent for Sirius talents Opie & Anthony and rapper Nick Cannon, agreed that Stern is "probably not worth" $500 million to Sirius now.

But the question will be just how much less Sirius can pay and still keep Stern.

Stern accounts for about $80 million of Sirius' annual programming costs, which have totaled $365 million over the past four quarters. The $80 million covers Stern's salary, wages for his staff and production and operating expenses, according to filings with the Securities and Exchange Commission. The remainder of the contract was paid in stock.

There are no independent ratings available to track the popularity of Stern's show, which airs Mondays through Thursdays from 6 a.m. to 9 a.m. But he has been so important to Sirius that he was the sole radio talent mentioned in SEC filings from 2006 through 2009 as a party whose failure could hurt Sirius' business. (Automakers were also among the listed entities.) In his first year at Sirius, Stern received a stock bonus worth $82.9 million because Sirius' subscriber count exceeded an agreed-upon target by more than 2 million.

Stern could leave to start a new venture, perhaps a subscription service that sends his show to PCs and mobile devices. Sirius already streams Stern's shows online and through the iPhone. Or he could explore more options in cable TV, where his first pay-per-view special, "Howard Stern's Negligee and Underpants Party," was offered in 1988.

Stern also could retire.

"Howard has the creative and business freedom to do what he wants to. He can just about write his own ticket in a number of areas," said Tom Taylor, executive news editor of Radio-Info.com, which tracks the radio industry. "He doesn't need to do anything. He's going to pay the rent fine."
http://www.newstimes.com/entertainme...alk-295324.php





Citadel Broadcasting Files For Bankruptcy
FMQB

As expected, Citadel Broadcasting has filed for bankruptcy, turning over control of one of the country's largest radio owners to its creditors. As a report earlier this month anticipated, Citadel has indeed filed for Chapter 11 and will hand over the reins of the company in exchange for shrinking its debts.

According to an announcement made Sunday, the deal will "extinguish" $1.4 billion of Citadel's debts. Bloomberg reports that JPMorgan Chase Bank, Wilmington Trust Co. and The Walt Disney Co. are Citadel's three largest creditors, while private equity firm Forstmann Little & Co. is its largest stockholder.

"We are pleased with the support from the majority of our senior lenders, and we look forward to working with the remaining senior lenders and other stakeholders to ensure a complete and expeditious restructuring," said Citadel CEO Farid Suleman in a statement. "Our business will continue as usual and the Company will work to emerge from the restructuring process as quickly as possible." Suleman is expected to remain as head of the company.

Citadel Broadcasting Corp. is the third-largest radio ownership group in the country, with 224 stations in total, as well as syndication arm Citadel Media.
http://www.fmqb.com/article.asp?id=1633661





Terra Firma Suing Citigroup Over EMI Deal
FMQB

Private equity group Terra Firma is suing Citigroup, claiming that the bank "misrepresented fundamental facts" when it brokered the sale of EMI to Terra Firma in 2007. The equity group says that the bank falsely claimed that other bidders were in the running for EMI, causing Terra Firma to raise its bid, according to BBC News. Terra Firma claims that the firm Citigroup said was trying to purchase EMI was Cerberus Capital Management, when in fact Cerberus had withdrawn from the EMI auction hours before it closed.

Citigroup said the lawsuit was "without merit" and it would defend itself "vigorously," according to the BBC.

In related news, Terra Firma is looking to bring in outside investors to help finance EMI, as the company is still burdened with £2.6 billion of debt. City pension funds, insurance companies and foreign banks have been approached about a possible deal amid fears that EMI could default on its interest payments to Citigroup, The Guardian reported. However, even if an outside investor is found, Terra Firma would retain majority control of EMI and its boss, Guy Hands, would remain chairman at the company.

EMI is profitable at the operating level but has been hit hard by the borrowing costs, forcing Terra Firma to twice inject equity into the operation in the past 18 months. Hands's latest attempt to recapitalize EMI would involve Terra Firma and new investors injecting another £1 billion of equity, but the plan could be contingent on him being able to persuade Citigroup to write off £1 billion in debt, and so far the two sides have been unable to reach an agreement, says The Guardian.
http://www.fmqb.com/article.asp?id=1625937





Merge Records: 20 Years of Glorious Noise
Derek Caney

In the summer of 1987, 19-year-old Mac McCaughan and his bandmates stumbled on an idea as old as rock 'n' roll itself.

Rather than sending demo tapes to major record companies, they followed in the do-it-yourself footsteps of punk-rock idols such as the Buzzcocks and Minor Threat and started their own label. But more than promoting their own band, they wanted to document the local music scene in the college town of Chapel Hill, North Carolina.

That label evolved into Merge Records, which is celebrating its 20th anniversary. Along the way, the label has garnered mainstream hits by Arcade Fire and Spoon, attracted critical praise for bands like Magnetic Fields, and enjoyed smaller successes with Lambchop and McCaughan's own band Superchunk.

Its history has been documented in "Our Noise: The Story Of Merge Records" (Algonquin Books), which McCaughan wrote with Merge co-owner and Superchunk bassist Laura Ballance and Gawker.com scribe John Cook.

In an industry where success is measured in the millions of records sold -- or used to be, until sales started slumping a decade ago -- Merge has thrived with sales figures in the thousands. The label's biggest seller, Arcade Fire's 2007 album "Neon Bible," sold 420,000 copies in the United States. Spoon's Merge label debut, "Girls Can Tell," sold more copies in six weeks than the band's previous album for a major label sold in a year.

Guided by McCaughan and Ballance's eclectic tastes and fiscal discipline, the label has succeeded where many others -- independent and major -- have failed.

"We operate in a conservative way," Ballance told Reuters from her office in Chapel Hill. "We were never in a position where we've had to say, 'We need a hit' or 'Oh crap, we have to sell some records really quick.'"

Consequently, they have managed to sign and retain their favorite acts even as major labels offered deals that seemed more generous.

Defying Convention

Merge eschews large upfront advances in favor of generous royalty rates, and allows bands to own their master tapes. Acts on big labels often stay indebted to their record companies for years, because the money lavished on advances, marketing and distribution is all recoupable from the band's royalties.

For its first 10 years, Merge operated only on a handshake basis. There were no contracts. It was a model that several indie labels used, such as Factory Records and Touch & Go Records. But after a couple of such deals turned sour, the label had to go the more-conventional route.

The handshake model backfired for Merge when a band wanted to jump to a major label: ...And You Will Know Us by the Trail of Dead had a verbal agreement with Merge to record two albums.

After recording the critically lauded "Madonna," they signed with Jimmy Iovine's Interscope Records (owned by Universal Music Group) without delivering Merge a second album.

"We started to feel like a farm league," Ballance said. "We always had an emotional attachment to bands when we start working with them. People don't necessarily have the loyalty you thought they did."

Some of the label's roster were simply friends of McCaughan and Ballance's. Canadian alternative band Arcade Fire was introduced to Merge because its drummer at the time, Howard Bilerman, allowed McCaughan to crash at his place while on tour.

The label pressed 10,000 copies of the band's full-length 2004 debut "Funeral." After a favorable review on the Pitchfork website, the first pressing sold out in a week and the album went on to sell 400,000 copies. The band's 2007 follow-up "Neon Bible" debuted at No. 2 on the U.S. pop chart.

Asked if the label has received any buyout offers, McCaughan said, "People have intimated that they would be interested, but it's never clear what they're interested in. So discussions never really get too far.

"It's usually couched in this language of, 'How can we work together?' And you just know they don't have any idea about the label," he said. "Because if they had, they would know that it's not going to be as simple as, 'Hey, how can we work together?' It's not insulting, as much as it is ignorant."

(Editing by Dean Goodman and Bob Tourtellotte)
http://www.reuters.com/article/idUST...ertainmentNews





Rage Win Christmas Chart Battle
BBC

Rock band Rage Against the Machine have won the most competitive battle in years for the Christmas number one.

The band's single, Killing In The Name, sold 500,000 downloads, beating X Factor winner Joe McElderry's The Climb by 50,000 copies to clinch the top spot.

Their success followed a Facebook campaign designed to prevent another X Factor number one.

One retailer said it was a "truly remarkable outcome - possibly the greatest chart upset ever".

Speaking on the Radio 1 chart show, Zack de la Rocha from Rage said: "We are very, very ecstatic about being number one."

He added it was an "incredible organic grassroots campaign".

"It says more about the spontaneous action taken by young people throughout the UK to topple this very sterile pop monopoly," he said.

McElderry, 18, praised the campaign, adding: "It's been exciting to be part of a much-hyped battle and they definitely deserve congratulations."

Thanking all the fans who bought his single, he said: "This time last year I never thought for one minute that I'd win The X Factor, never mind about having a debut single out, so I'm just delighted to be in the charts.

"It's been such an incredible couple of months and I got the best Christmas gift I could ever have asked for in winning The X Factor."

He later told BBC Radio One he did not believe the internet campaign was a personal attack.

He said: "It's more against the show than me and I think if any other person had have won, the same thing would have happened, because the petition was going on before the winner had been announced."

Despite earlier in the week calling the campaign "stupid", X Factor judge Simon Cowell offered his congratulations to the couple behind it, Jon and Tracy Morter.

He said: "I am gutted for Joe because a number one single meant a lot to him but I have to congratulate Jon and Tracy, who started the Facebook campaign.

"I called Jon on Saturday to congratulate the two of them that, win or lose, they turned this into a very exciting race for the Christmas number one.

"I am proud of Joe - he worked really hard this week, but he has a great year ahead of him."

This is not the first campaign the Morters have launched to try to influence the charts - last year they attempted to get Rick Astley to the top spot.

Mr Morter, 35, said he learnt "how the charts work" from that experience, and "what you can get away with".

"When this year came around I just thought, let's have another go. If anything, last year was fun. This year it has gone stratospheric."

His wife Tracy said: "It was one of those little silly ideas that make you laugh in your own house.

"We really love music and remember when were were young the charts were really exciting. We just thought, wouldn't it be funny if that song got to number one?

"It took something really strong and forceful to get people behind it."

The Los Angeles rock band's hit also set another record: the biggest first-week download sales total in UK chart history.

McElderry's song was only released digitally after his victory in the X Factor, giving it less time to rack up sales than Rage Against The Machine.

On Friday the band's lead was just 9,000 copies, but sales then soared by 200,000 to secure victory.

RECENT CHRISTMAS NUMBER ONES
# 2000: Bob the Builder: Can We Fix It
# 2001: Robbie Williams & Nicole Kidman: Somethin' Stupid
# 2002: Girls Aloud: Sound of the Underground
# 2003: Michael Andrews feat Gary Jules: Mad World
# 2004: Band Aid 20: Do They Know It's Christmas?
# 2005: Shayne Ward: That's My Goal
# 2006: Leona Lewis: A Moment Like This
# 2007: Leon Jackson: When You Believe
# 2008: Alexandra Burke: Hallelujah

Rage Against The Machine are signed to Epic Records, which is part of Sony BMG, the same label as McElderry.

De la Rocha said the band would perform a free concert in the UK in 2010 to celebrate their chart win.

The past four Christmas number ones have all been by X Factor winners; Alexandra Burke's version of Hallelujah last year was one of the biggest selling festive singles ever.

Guitarist Tom Morello said it had "tapped into the silent majority of the people in the UK who are tired of being spoon-fed one schmaltzy ballad after another".

He added that proceeds from the single would go to the homeless charity Shelter, tying in with the Morters' Facebook campaign, which includes an online link to donate to the charity and has so far raised over £70,000.

The last big Christmas battle on a similar scale was between the Spice Girls' Goodbye and South Park character Chef's Chocolate Salty Balls in 1998. The Spice Girls won with 380,000 to their rival's 375,000.

Despite losing out on the single top spot, Cowell kept a hold on the album chart, with Susan Boyle's I Dreamed A Dream remaining at number one for a fourth week.
http://news.bbc.co.uk/go/pr/fr/-/2/h...nt/8423340.stm





Tim McGraw Ropes Decade’s Most Played Single
Ben Sisario

What’s the most popular song of the decade? There are plenty of ways to measure it — CD sales, paid downloads, unauthorized downloads. But perhaps the most tried and true is radio play.

Based on data from Nielsen BDSradio, which monitors radio stations throughout the United States, the most-played song on any station from Jan. 1, 2000, to Dec. 17, 2009, was Tim McGraw’s “Something Like That,” released in 1999. It received 487,343 spins, beating out the most popular song on Top 40 radio, Usher’s “Yeah!,” from 2004, by a fair margin. “Yeah!,” featuring Ludacris and Lil Jon, has been spun 416,267 times.

Other high-spinning tracks include Train's "Drops of Jupiter (Tell Me)" (2001), which topped the Hot AC (as in adult contemporary) format with 338,749 spins; Papa Roach's "Last Resort" (2000), the most popular on alternative stations with 221,767 spins; and "Low," Flo Rida's 2007 song featuring T-Pain, the most popular on the rhythmic format with 206,864 spins.

“Low” has also had the most paid downloads of any song this decade, with 5.2 million, as registered by another Nielsen branch, SoundScan.

A complete list of the most frequently played songs in each genre appears below:

Country: “Something Like That” (1999) by Tim McGraw, 487,343 spins
Top 40/Contemporary Hit Radio: “Yeah!” (2004) by Usher, featuring Ludacris and Lil Jon, 416,267 spins
Hot AC: “Drops of Jupiter (Tell Me)” (2001) by Train, 338,749 spins
Alternative: “Last Resort” (2000) by Papa Roach, 221,767 spins
Rhythmic: “Low” (2007) by Flo Rida, 206,864 spins
Album rock: “It’s Been Awhile” (2001) by Staind, 189,195 spins
Urban: “Drop It Like It’s Hot” (2004) by Snoop Dogg, featuring Pharrell, 169,511 spins
Urban AC: “Think About You” (2003) by Luther Vandross, 147,818 spins
Gospel: “Never Would Have Made It” (2007) by Marvin Sapp, 92,603 spins
Smooth jazz: “Pacific Coast Highway” (2005) by Nils, 29,328 spins
http://artsbeat.blogs.nytimes.com/20...yed-single/?hp





Susan Boyle Blocks Alicia Keys from Top of Chart
Keith Caulfield

There's just no stopping Susan Boyle on the U.S. pop chart.

Her debut album "I Dreamed a Dream" claimed the No. 1 spot on the Billboard 200 for a fourth week in a row Wednesday, denying Alicia Keys a shot at the top.

It sold 661,000 copies during the week ended December 20, according to Nielsen SoundScan, taking its total to 2,465,000 copies -- the second-best selling album of 2009. Only Taylor Swift's "Fearless" -- a 2008 release with 2,933,000 copies sold this year -- is standing in its way. And with two weeks left in the 2009 tracking year, anything is possible.

Keys' "The Element of Freedom" debuted at No. 2 with 417,000. All four of her previous albums -- three studio sets and an "Unplugged" release -- started at No. 1. Her last effort, "As I Am," kicked off with 742,000 copies during Thanksgiving week of 2007.
However, Keys still has a chance of eventually reaching No. 1. The set could rise to the top next month when (and if) the Boyle hysteria dies down.

The second-biggest new arrival was Robin Thicke's "Sex Therapy: The Session," which began at No. 9 with 123,000. It's the singer's third top 10 set in a row, following "The Evolution of Robin Thicke" (No. 5 in 2007) and "Something Else" (No. 3 in 2008). The latter album opened with 137,000 copies.

The only other debuts on the Billboard 200 were all the way down at Nos. 172 (the "Avatar" soundtrack) and 198 (Phil Vassar's "Traveling Circus").

After five weeks at No. 2, Andrea Bocelli's "My Christmas" slipped to No. 3 with 390,000. The seasonal set has blown through 1.9 million copies and is now Bocelli's third-best seller after 1997's "Romanza" (4.2 million) and 1999's "Sogno" (2.5 million).

Swift held at No. 4 as "Fearless" shifted 239,000, while Carrie Underwood's "Play On" (153,000) and Lady Gaga's "The Fame" (141,000) were also unchanged, at Nos. 5 and 6, respectively.

Michael Buble's "Crazy Love" rose two places to No. 7 with 140,000 copies, surpassing the one-million mark. Justin Bieber's "My World" held at No. 8 with 127,000.

The second volume of the "Glee" TV soundtrack fell seven places to No. 10 in its second week with 111,000.

Overall album sales totaled 15.14 million units, up 26% compared to last week, but down 12% compared to the same sales week of 2008. Year-to-date sales stand at 357.5 million, down 13% compared to the same total at this point last year.
http://www.reuters.com/article/idUST...ertainmentNews





At 94, She’s the Hot New Thing in Painting
Deborah Sontag

Under a skylight in her tin-ceilinged loft near Union Square in Manhattan, the abstract painter Carmen Herrera, 94, nursed a flute of Champagne last week, sitting regally in the wheelchair she resents.

After six decades of very private painting, Ms. Herrera sold her first artwork five years ago, at 89. Now, at a small ceremony in her honor, she was basking in the realization that her career had finally, undeniably, taken off. As cameras flashed, she extended long, Giacomettiesque fingers to accept an art foundation’s lifetime achievement award from the director of the Walker Art Center in Minneapolis.

Her good friend, the painter Tony Bechara, raised a glass. “We have a saying in Puerto Rico,” he said. “The bus — la guagua — always comes for those who wait.”

And the Cuban-born Ms. Herrera, laughing gustily, responded, “Well, Tony, I’ve been at the bus stop for 94 years!”

Since that first sale in 2004, collectors have avidly pursued Ms. Herrera, and her radiantly ascetic paintings have entered the permanent collections of institutions like the Museum of Modern Art, the Hirshhorn Museum and the Tate Modern. Last year, MoMA included her in a pantheon of Latin American artists on exhibition. And this summer, during a retrospective show in England, The Observer of London called Ms. Herrera the discovery of the decade, asking, “How can we have missed these beautiful compositions?”

In a word, Ms. Herrera, a nonagenarian homebound painter with arthritis, is hot. In an era when the art world idolizes, and often richly rewards, the young and the new, she embodies a different, much rarer kind of success, that of the artist long overlooked by the market, and by history, who persevered because she had no choice.

“I do it because I have to do it; it’s a compulsion that also gives me pleasure,” she said of painting. “I never in my life had any idea of money and I thought fame was a very vulgar thing. So I just worked and waited. And at the end of my life, I’m getting a lot of recognition, to my amazement and my pleasure, actually.”

Julián Zugazagoitia, the director of El Museo del Barrio in East Harlem, called Ms. Herrera “a quiet warrior of her art.”

“To bloom into full glory at 94 — whatever Carmen Herrera’s slow rise might say about the difficulties of being a woman artist, an immigrant artist or an artist ahead of her time, it is clearly a story of personal strength,” Mr. Zugazagoitia said.

A minimalist whose canvases are geometric distillations of form and color, Ms. Herrera has slowly come to the attention of a subset of art historians over the last decade. Now she is increasingly considered an important figure by those who study her “remarkably monumental, iconic paintings,” said Edward J. Sullivan, a professor of art history at New York University.

“Those of us with a passion for either geometric art or Latin American Modernist painting now realize what a pivotal role” Ms. Herrera has played in “the development of geometric abstraction in the Americas,” Mr. Sullivan said.

Painting in relative solitude since the late 1930s, with only the occasional exhibition, Ms. Herrera was sustained, she said, by the unflinching support of her husband of 61 years, Jesse Loewenthal. An English teacher at Stuyvesant High School in Manhattan, Mr. Loewenthal was portrayed by the memoirist Frank McCourt, a colleague, as an old-world scholar in an “elegant, three-piece suit, the gold watch chain looping across his waistcoat front.”

Recognition for Ms. Herrera came a few years after her husband’s death, at 98, in 2000. “Everybody says Jesse must have orchestrated this from above,” Ms. Herrera said, shaking her head. “Yeah, right, Jesse on a cloud.” She added: “I worked really hard. Maybe it was me.”

In a series of interviews in her sparsely but artfully furnished apartment, Ms. Herrera always offered an afternoon cocktail — “Oh, don’t be abstemious!” — and an outpouring of stories about prerevolutionary Cuba, postwar Paris and the many artists she has known, from Wifredo Lam to Yves Klein to Barnett Newman.

“Ah, Wifredo,” she said, referring to Lam, the Cuban-born French painter. “All the girls were crazy about him. When we were in Havana, my phone would begin ringing: ‘Is Wifredo in town?’ I mean, come on, I wasn’t his social secretary.”

But Ms. Herrera is less expansive about her own art, discussing it with a minimalism redolent of the work. “Paintings speak for themselves,” she said. Geometry and color have been the head and the heart of her work, she added, describing a lifelong quest to pare down her paintings to their essence, like visual haiku.

Asked how she would describe to a student a painting like “Blanco y Verde” (1966) — a canvas of white interrupted by an inverted green triangle — she said, “I wouldn’t have a student.” To a sweet, inquiring child, then? “I’d give him some candy so he’d rot his teeth.”

When pressed about what looks to some like a sensual female shape in the painting, she said: “Look, to me it was white, beautiful white, and then the white was shrieking for the green, and the little triangle created a force field. People see very sexy things — dirty minds! — but to me sex is sex, and triangles are triangles.”

Born in 1915 in Havana, where her father was the founding editor of the daily newspaper El Mundo, and her mother a reporter, Ms. Herrera took art lessons as a child, attended finishing school in Paris and embarked on a Cuban university degree in architecture. In 1939, midway through her studies, she married Mr. Loewenthal and moved to New York. (They had no children.)

Although she studied at the Art Students League of New York, Ms. Herrera did not discover her artistic identity until she and her husband settled in Paris for a few years after World War II. There she joined a group of abstract artists, based at the influential Salon of New Realities, which exhibited her work along with that of Josef Albers, Jean Arp, Sonia Delaunay and others.

“I was looking for a pictorial vocabulary and I found it there,” she said. “But when we moved back to New York, this type of art” — her less-is-more formalism — “was not acceptable. Abstract Expressionism was in fashion. I couldn’t get a gallery.”

Ms. Herrera said that she also accepted, “as a handicap,” the barriers she faced as a Hispanic female artist. Beyond that, though, “her art was not easily digestible at the time,” Mr. Zugazagoitia said. “She was not doing Cuban landscapes or flowers of the tropics, the art you might have expected from a Cuban émigré who spent time in Paris. She was ahead of her time.”

Over the decades, Ms. Herrera had a solo show here and there, including a couple at museums (the Alternative Museum in 1984, El Museo del Barrio in 1998). But she never sold anything, and never needed, or aggressively sought, the affirmation of the market. “It would have been nice, but maybe corrupting,” she said.

Mr. Bechara, who befriended her in the early 1970s and is now chairman of El Museo del Barrio, said that he regularly tried to push her into the public eye, even though she “found a kind of solace in being alone.”

One day in 2004, Mr. Bechara attended a dinner with Frederico Sève, the owner of the Latin Collector Gallery in Manhattan, who was dealing with the withdrawal of an artist from a much-publicized show of female geometric painters. “Tony said to me: ‘Geometry and ladies? You need Carmen Herrera,’ ” Mr. Sève recounted. “And I said, ‘Who the hell is Carmen Herrera?’ ”

The next morning, Mr. Sève arrived at his gallery to find several paintings, just delivered, that he took to be the work of the well-known Brazilian artist Lygia Clark but were in fact by Ms. Herrera. Turning over the canvases, he saw that they predated by a decade paintings in a similar style by Ms. Clark. “Wow, wow, wow,” he recalled saying. “We got a pioneer here.”

Mr. Sève quickly called Ella Fontanals-Cisneros, a collector who has an art foundation in Miami. She bought five of Ms. Herrera’s paintings. Estrellita Brodsky, another prominent collector, bought another five. Agnes Gund, president emerita of the Museum of Modern Art, also bought several, and with Mr. Bechara, donated one of Ms. Herrera’s black-and-white paintings to MoMA.

The recent exhibition in England, which is now heading to Germany, came about by happenstance after a curator stumbled across Ms. Herrera’s paintings on the Internet. Last week The Observer named that retrospective one of the year’s 10 best exhibitions, alongside a Picasso show and one devoted to the American Pop artist Ed Ruscha.

Ms. Herrera’s late-in-life success has stunned her in many ways. Her larger works now sell for $30,000, and one painting commanded $44,000 — sums unimaginable when she was, say, in her 80s. “I have more money now than I ever had in my life,” she said.

Not that she is succumbing to a life of leisure. At a long table where she peers out over East 19th Street “like a French concierge,” Ms. Herrera, because she must, continues to draw and paint. “Only my love of the straight line keeps me going,” she said.
http://www.nytimes.com/2009/12/20/ar...20herrera.html





A Twittering Critic Prepares for the Final Tweet
Dave Itzkoff

If you’re a fan of opinionated music criticism and/or comic brevity on the Web, you have probably encountered the writer Christopher R. Weingarten and his online project, 1000 Times Yes. On his Twitter account, @1000TimesYes, Mr. Weingarten has been making good on his vow to provide bite-sized reviews of 1,000 albums over the course of the year — everything from mainstream releases like Green Day’s “21st Century Breakdown” (which he described as “The Who and Big Country for the fourth generation thinking ‘Blank Generation’ is about them”) to indie rock offerings like Little Girls’ “Tambourine” (a “blah-punk no-hit wonder,” Mr. Weingarten said).

On Tuesday night, Mr. Weingarten plans to publish his 1,000th and final Tweet in the series, at precisely midnight at a party he is hosting at Bruar Falls in Williamsburg, Brooklyn. (He’ll also be the D.J. for a set of Christmas-themed rap.) But before that happens, Mr. Weingarten spoke to us about the origins of 1000 Times Yes, and what the experience taught him about music, Twitter and the Internet. These are excerpts from that conversation.
Q.

Where did the idea for 1000 Times Yes come from?
A.

The idea of the project really arose from my resentment that I missed out on the whole blogging thing. I saw all these bloggers, many of whom are terrible writers, just leapfrogging past me because they had this sticky ability, and they’re easier to talk about because what they did was very niche-y and very Internet-y. It was the music-critic revolution and I had completely spaced on it because I had waited too long to get in. I spent the decade writing for a career. I wasn’t writing in my bedroom just so I can get Animal Collective tickets.
Q.

How did Twitter factor into your decision to finally take the plunge?
A.

Twitter really worked with me, the way I think about things. I like to talk in quick, snappy one-liners, and I like to read short, snappy, punchy writing. I never liked this long, sprawling, super-earnest enthusiastic blog writing. It was the first time I felt at home in a social network.
Q.

Do you remember what your very first Tweet was?
A.

It was a review of Dalek’s “Gutter Tactics.” According to my archive here, I sent it Friday, Jan. 2, 2009. I probably had the promo in my hands all throughout December and I had been playing it a lot, so I definitely had some time to sit on it. I know what’s on this, I really like it, I’m ready to talk about it. That’s how most information on the Internet starts. “I’ve been enjoying this. I would like other people to know about this.” So that was an easy one.
Q.

As the weeks and months went by, did you get more strategic about your reviews, or were you reviewing albums just to meet your quota?
A.

I was a music editor at CMJ New Music Monthly for many years, so I know there's a science to what's newsworthy, what has a news peg, what's of interest to my audience. Is this a band that somebody somewhere cares about? Which is why I haven't answered a lot of, "Yo, check out my band!" promo stuff. I hang out in New York a lot and see local bands, so if something catches my eye I'll write about it. I don't like to dog bands that no one knows about. That's a pointless thing to do, a takedown of a band no one cares about. "This is terrible! You've never heard of it."
Q.

No one ever made their reputation on the Web by doing that.
A.

Yeah. Exactly.
Q.

Are there any of your Twitter reviews that you’re especially proud of?
A.

Oh, man. None come to mind. It is just a total blur to me.
Q.

O.K., so which ones were the most widely circulated?
A.

The ones that got retweeted were when I was being mean to bands, because people don't say negative things about hyped bands in big sources. The Internet is based on a spirit of enthusiasm for the scene. I dissed Silk Flowers and everyone gave me a thumbs-up for that. Little Girls. Anyone over 25 should not be sold on this lo-fi nonsense. If I was 19 I'd be super-psyched on this. I would never want to dissuade a kid from getting into indie rock, but if you're a grown man, come on.
Q.

Were there any other factors that helped get you attention for the project?
A.

The biggest thing for me was the 140 Characters Conference (where Mr. Weingarten gave a widely circulated speech about the decline of music criticism in June). That was me getting around the insular universe of rock critics and Internet music nerds. That’s not hard because that world is a lot smaller than people want to admit. I felt like Keyboard Cat for a minute. I was a meme for a week. As any band will tell you, it’s easy to get around here and sell 4,000 records for eternity. Breaking yourself out into the world, where you’re in a Judd Apatow movie and people have heard of your band — that’s hard.
Q.

Have you decided what your 1,000th review will be?
A.

I will say, yes I have. I am keeping that under wraps.
Q.

What are your immediate plans once you’ve posted that last Tweet?
A.

Immediate plans? I am not going to listen to a new record until well into the new year. I haven't had a lot of time to give things second listens or third listens. It's a year based on a lot of first impressions, and it takes a lot of why you listen to music out of music. That's the negative side of it. That's why I'm not going to do it again in 2010. Music is supposed to be therapy. Music is supposed to be my happy place, and I can't always be in my happy place.
Q.

What will you do with your Twitter account?
A.

I’m still going to review records. I love conversing with people immediately about records, and I like it as a medium. But there’s no way I’m going to hit 1,000 next year. I’m not sure if I’m even going to number them next year. Maybe if I made it 750 Times Yes, I’d be in a different position right now. 1,000 is a big number. I didn’t know how big that number was. With all the piles of promo CDs I get all the time, and leaks, I figured, Oh, yeah, I’ve got time to listen to 1,000 records a year. No, I apparently do not.
Q.

Are you parlaying your Twitter success into a book or movie deal?
A.

Here’s the thing that bums me out about it. I haven’t figured out anything about it beyond how to get attention. [Laughs.] Which in the Internet, is the economy of awesome. And in reality means nothing.
http://artsbeat.blogs.nytimes.com/20...e-final-tweet/





Book review

Moral Panics and the Copyright Wars
Barbara Fister

In the past week, I've encountered signs that publishers are doing their best to discourage readers from buying books.

For example, Knopf is raising a legal eyebrow at booksellers who have imported UK editions of the third volume of Stieg Larsson's popular Millennium Trilogy to satisfy readers who can't bear to wait for the lagging U.S. publication in May; Knopf calls such imports a violation of their copyright, even though impatient readers are savvy enough to order books online from British retailers if they're thwarted at home.

More recently, several publishers have announced that they will delay ebook releases until premium-price hardcovers have spent a few months on sale, in order to protect their industry. But readers have lost the patience formerly required to wait for a cheaper format—like the year-long paperback embargo that is hardwired into publishers' business models—and those models take no account of robust online outlets supplying cheaper copies. The first sale doctrine wouldn't stand a chance in today's environment if it were open to debate.

These conflicts, framed around rights and legalities and the need to preserve traditional business practices, never seem to take into account the good news that readers are eager to read and are ready to pay for the privilege. Arrr, me hearties; you might turn honest readers into pirates if you stand between them and their books.

Causing panic

William Patry has a few things to say about pirates in his new book, Moral Panics and the Copyright Wars. The well-known blogger, senior copyright counsel for Google, and author of the seven-volume definitive work, Patry on Copyright, steps back from purely legal analysis to examine the super-heated rhetoric surrounding copyright battles. Corporations trying to extend their control and resist new technologies tend to frame disputes in stark moralistic terms. Copyright is property, and control of one's property is a moral right; those who use copyrighted material without paying the owners are thieves, parasites, and pirates.

But this is inaccurate. "By describing copyright as a private property right, proponents of the description hope to get policy makers and courts to believe that only private, and not public, rights are implicated," he writes. Copyright is not a private good; it is a social relationship. "The advantage in regarding copyright as a system of social relationships is that it focuses attention where it belongs: in mediating conflicts within that system," a focus that is hidden when the argument is framed purely as theft of private property.

This book examines the rhetorical framing devices used by corporate interests to expand copyright laws. The purpose of this framing is simple: "to get what you want by defining yourself positively and by defining your opponent negatively." Nothing works better than inducing a moral panic, the systematic distortion and exaggeration of a problem in order to make it more compelling, and in the process demonizing those defined as deviant, making them appear much more threatening than they are.

The making and molding of social issues

Social issues are often shaped by the careful cultivation of anxiety. An issue is often identified and named as something that challenges commonly-held moral values. ("Property is important; I'd be upset if somebody stole my property.") Issues are typified through dramatic narratives, telling worrisome stories that can be picked up, repeated, and amplified in the press. In the process the threat is often distorted to enhance its significance. As a result, the government is often called in to deal with the threat and to regulate behavior by extending its control. All of these players—claims-makers, the press, and the state—have something to gain by making people fearful. And moral panics do the job.

Stanley Cohen, who named the phenomenon in his 1972 book, Folk Devils and Moral Panics, called those who engineered panics "moral entrepreneurs." The interesting thing about the RIAA, the MPAA, the AAP, and other lobbies that want to extend copyright and limit the use of cultural materials is that they are not behaving like entrepreneurs—rather the opposite. Instead of seizing on technological opportunities to offer new products and open new channels to purchase them, they pull out all the stops to prevent innovation.

"The Copyright Wars must be understood as archetypal responses of businesses that are inherently non-innovative and that rely on the innovation of others to succeed . . . The problems of the Copyright Wars are not caused by technologies or by consumers acting badly, and they cannot therefore be solved by laws, and certainly not by more draconian laws. The problems—such as the decline in sales of CDs and DVDs—are the result of the copyright industries many and considerable failures to focus on satisfying consumers' desires as opposed to stifling those desires out of a woefully misguided view that copyright equals control and that control equals profits."

Patry argues that innovation is not the problem but the solution, because it "provides the means by which new content can be created and then distributed to consumers in a form and manner consumers desire. The problem lies with those in the copyright industries who are neither innovative nor willing to license to those who are." When innovation is rejected, consumers are left with only illegal (but easily available) means to get what they desire. (The criminalization of the majority of young people who find it natural to engage in read/write culture is what prompted Lawrence Lessig to write Remix.)

Patry points out that the rhetoric of fear includes flatly misleading information. He cites an Ars Technica investigation into figures frequently cited in the press and in congressional hearings claiming that piracy results in 750,000 lost jobs and $250 billion in costs to the U.S. economy annually. Because they are repeated so often, they become authoritative numbers, but when painstakingly tracked to their source, they turn out to be complete rubbish. (This article, by the way, is a wonderful example of critical information literacy.)

Innovation was demonized in the past in ways that seem absurd in hindsight. Jack Valenti (yes, the same Jack Valenti who for years predicted the complete collapse of the film industry if pirates weren't punished) testified before Congress in 1982 that Hollywood's future "depends on its protection from the savagery and the ravages of this machine." Which machine was that? "I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone."

Balancing act

Patry's book unpacks the rhetorical devices used in copyright debates, but he does not oppose copyright. "For policy makers and the public, copyright is not a winner-takes-all proposition," he writes. "Copyright is a system to advance public interests; those interests can be furthered by a copyright regime tailored to provide sufficient incentives to create new works. But at the same time we must recognize that the public interest is genuinely harmed by overprotection."

Though academic librarians are understandably caught up in the issues surrounding scholarly communication, a system in which much of the content is publicly funded and the authors are primarily rewarded by exposure, not protection, we still have a stake in popular culture and in the ways that copyright as it is defined today thwarts creative expression and hurts innovation. Moral Panics and the Copyright Wars is an informative interdisciplinary excursion into the issues that draws on legal, economic, and sociological theories to examine a debate that affects us and our students on a daily basis.
http://www.libraryjournal.com/article/CA6712145.html





Alternative 2009 Copyright Expirations
jrincayc

It's nearly the end of 2009. If the 1790 copyright maximum term of 28 years were still in effect, everything published by 1981 would now be in the public domain — so the original Ultima and God Emperor of Dune would be available for remixing and mashing up. If the 1909 copyright maximum term of 56 years (if renewed) were still in force, everything published by 1953 would now be in the public domain, freeing The City and the Stars and Forbidden Planet. If the 1976 copyright act term of 75* years (* it's complicated) still applied, everything published by 1934 would now be in the public domain, including Murder on the Orient Express. But thanks to the Sonny Bono Copyright Term Extension Act, nothing in the US will go free until 2018, when 1923 works expire.
http://yro.slashdot.org/story/09/12/...ions?art_pos=4
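The cutoffs above are just the current year minus each regime's maximum term. A minimal Python sketch of that arithmetic (the table and helper name are illustrative, not from the post, and the 1976 act's asterisked "75" is simplified to a flat 75 years):

    # Public-domain cutoffs under successive U.S. copyright regimes:
    # the latest publication year whose maximum term has run out.
    MAX_TERMS = {
        "1790 act": 28,   # 14 years plus one 14-year renewal
        "1909 act": 56,   # 28 years plus one 28-year renewal
        "1976 act": 75,   # simplified flat term ("it's complicated")
    }

    def public_domain_cutoff(current_year: int, max_term: int) -> int:
        """Latest publication year already out of copyright."""
        return current_year - max_term

    for act, term in MAX_TERMS.items():
        cutoff = public_domain_cutoff(2009, term)
        print(f"Under the {act}, works published by {cutoff} would be free.")

Run for 2009, this reproduces the post's figures: 1981, 1953 and 1934.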





Holy See Declares Unique Copyright on Papal Figure

The Vatican made a declaration on the protection of the figure of the Pope on Saturday morning. The statement seeks to establish and safeguard the name, image and any symbols of the Pope as being expressly for official use of the Holy See unless otherwise authorized.

The statement cited a "great increase of affection and esteem for the person of the Holy Father" in recent years as contributing to a desire to use the Pontiff's name for all manner of educational and cultural institutions, civic groups and foundations.

Due to this demand, the Vatican has felt it necessary to declare that "it alone has the right to ensure the respect due to the Successors of Peter, and therefore, to protect the figure and personal identity of the Pope from the unauthorized use of his name and/or the papal coat of arms for ends and activities which have little or nothing to do with the Catholic Church."

The declaration alludes to attempts to use ecclesiastical or pontifical symbols and logos to "attribute credibility and authority to initiatives" as another reason to establish their “copyright” on the Holy Father's name, picture and coat of arms.

"Consequently, the use of anything referring directly to the person or office of the Supreme Pontiff... and/or the use of the title 'Pontifical,' must receive previous and express authorization from the Holy See," concluded the message released to the press.
http://www.catholicnewsagency.com/ne...n_papal_figure

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

December 19th, December 12th, December 5th, November 28th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black