
Our Very First High-Def DVD Review

The Last Samurai (HD-DVD)
Peter M. Bracke

Well, here I sit. After years of hype, hope, anticipation, numerous delays and endless public bickering between the two warring high-def DVD camps, with the first next-gen DVD player and a quartet of discs in front of me, fired and ready for review. It's a big moment in the life of any unabashed hi-tech geek -- the launch of a new format is equivalent to Christmas, New Year's, Easter and Fourth of July all rolled into one. And since this is high-definition we're talking about, multiply that euphoria by ten -- because the ability to own affordable, pre-recorded HD content has been the holy grail of home theater enthusiasts for the past two decades. So the fact that HD-DVD is finally here and a genuine, tangible reality, well... it is hard to believe this day has finally come.

Which makes the pressure of what to watch first almost unbearable. If I'm being honest, I can't say that if I had my druthers, the first HD-DVD discs consumers would have to choose from would be 'Million Dollar Baby,' 'Serenity' and 'Phantom of the Opera' -- though they are all good movies, to be sure. And I definitely would never have picked a Tom Cruise movie to kick off a new HD format, let alone one as big-budget, bloated and self-important as 'The Last Samurai' (geesh, even 'Top Gun' would have been better -- at least that has flying jets and Kenny Loggins on the soundtrack). But here I am, picking 'Samurai' as my first-ever high-def DVD review title, if only because it allows me to pay the biggest compliment I can think of to the new HD-DVD format: despite the fact that I would normally hate this movie, I loved every last second of it. It looked and sounded so damn good on the format that it actually made me excited to watch a Tom Cruise movie again. Now, that is some kind of technological miracle.

But before we get on to the good stuff, a quick recap of 'Last Samurai' for those of you who actually want to know what the movie is about. It is the late 19th century in Japan, and American war captain Nathan Algren (Cruise) must lead a group of Japanese soldiers to put down a rebellion by the country's remaining Samurai. But before you can say "Dances with Wolves," Algren is captured and imprisoned by the Samurai, and soon finds himself spiritually transformed by the forces he once swore to destroy. Soon enough, though, the Japanese forces will resume their hunt for Algren and the Samurai -- ready to destroy the warrior culture and anyone who stands in the way of its eradication.

Truth be told, 'Last Samurai' is an entertaining movie. It's filled with tons of loud sword fights, panoramic vistas, Very Important Dialogue and lots of slo-mo shots of Tom flashing his sullen, toothy grin. And it certainly is a handsomely-mounted epic, with top-notch production and costume design, strong performances by its largely Japanese cast (particularly Ken Watanabe, who earned an Oscar nom for his role as Algren's mentor Katsumoto) and polished direction by Ed Zwick. But aside from feeling a bit too close to 'Shogun' meets 'Dances with Wolves' for my comfort, I just never once believed Cruise as Algren. But perhaps I'm biased -- years of Cruise's Scientology rants and couch-jumping antics have for me long overshadowed his skills as an actor. But in terms of slick Hollywood bombast with a dash of cultural clash thrown in, 'Last Samurai' is not a bad night of popcorn entertainment. And it makes a perfect disc to show off your new next-gen HD gear. So, forget the plot -- how does this sucker look and sound?

The Video: Sizing Up the Picture

Unfortunately, assessing the quality of an HD-DVD disc isn't as easy as it is with standard DVD. While connecting the nearly decade-old DVD format we know and love to today's TVs is fairly straightforward (even with its multitude of connection options), and you always know the basic resolution output you're going to get, HD-DVD is a far more complicated proposition. With the HD format's multiple resolution formats (720p, 1080i, 1080p, etc.), the quality of the picture you receive is much more dependent on the equipment you have, and how you connect it all up. In the HD world, one size definitely does not fit all. So there is a little good news/bad news when it comes to how much picture bang for your buck these first HD-DVD discs can deliver.

First, the good news. Warner has not skimped on its first HD-DVD releases. 'The Last Samurai,' like Warner's other two initial HD-DVD offerings ('Million Dollar Baby' and 'Phantom of the Opera'), showcases the movie by encoding it at 1080p -- meaning you're going to get every last pixel of the HD format's maximum 1,920 x 1,080 resolution. Hook it up properly to an HDTV that can accept a 1080p input, and you're getting a professional-grade image that can only be bettered by today's multi-million dollar digital cinema formats (which would be pretty useless for the screen sizes in the majority of home theaters anyway).
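For a rough sense of what those resolution numbers mean, here's some back-of-the-envelope pixel arithmetic (a quick illustrative Python sketch, nothing format-specific):

```python
# Pixel counts for the formats discussed above.
formats = {
    "480p (standard DVD)": (720, 480),
    "720p":                (1280, 720),
    "1080i/1080p":         (1920, 1080),
}
for name, (w, h) in formats.items():
    print(f"{name}: {w}x{h} = {w * h:,} pixels")

# 1080-line HD carries six times the pixels of a standard DVD frame:
print(1920 * 1080 / (720 * 480))  # -> 6.0
```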

Now, the bad news. There are very few HD display devices on the market today that can accept a 1080p input (we're using one of them, the HP Pavilion 65" DLP RPTV, as the centerpiece of our reference system), but then Toshiba's two first-gen HD-DVD players (the HD-A1 and its slightly snazzier cousin, the HD-XA1) can only output 1080i anyway. That means the performance you can get today out of these first HD-DVD discs is a slight notch down from full 1080p, but rest assured, it can still deliver one hell of a picture, and is certainly a major leap forward in image quality over even the best standard DVD image.

So that caveat aside, I watched 'Last Samurai' via the Toshiba HD-XA1's HDMI 1080i output, and I also did comparisons via the player's component output (which is still the best input the majority of HDTV sets on the market can accept). Then I fired up a showing of 'Last Samurai' I recorded off of HBO-HD recently on my Dish Network HD DVR, to make a comparison between HD-DVD and satellite HD broadcasts. Finally, I did a quick HD-DVD versus standard DVD comparison, just for giggles. (I would have liked to have also done a comparison between HD-DVD and the fledgling D-VHS HD videotape format, but alas 'Last Samurai' has not been released on D-VHS.)

Now, at last, the results. Watching 'The Last Samurai' at 1080i via HDMI on the HP was certainly an impressive experience. Quite simply, it delivered the best video I've ever seen on a pre-recorded consumer format. Some shots were breathtaking -- the kind of three-dimensional images you rarely see outside of the cinema. Close-ups predictably had the biggest wow factor, though even some of the widest panoramic shots boasted noticeable fine detail that exceeded anything seen on standard DVD. Colors also "popped" incredibly well, with a few shots containing deep oranges and lush greens that were incredibly striking. The transfer's blacks were also rock solid, and contrast excellent. Plus, I noticed none of the ringing or halos still frequently seen on standard DVD transfers, which gave 'Last Samurai' a very natural, film-like appearance.

After watching the full film via HDMI, I shuttled back and forth between the HDMI outs and the component outs on the Toshiba HD-XA1. First, great news for those who do not currently own an HDTV set with HDMI inputs -- Warner has elected not to enforce ICT, the controversial "down-res" copy protection scheme built into the HD-DVD spec, which allows a content provider to downgrade the quality of an HD-DVD disc to standard DVD quality via the component outs. That means you can watch all of the studios' initial HD-DVD offerings in full 1080i without HDMI -- at least for now, until Warner decides to implement ICT (which it has publicly stated it intends to do). Note, however, that component connectors are not capable of delivering full 1080p, so if you eventually want to upgrade your home theater to the full 1080p experience, you will still have to buy a new set down the road.

As for picture quality via the HDMI versus component outs, I'd give the edge slightly to HDMI. I've always found it to deliver an ever-so-slightly crisper picture, with more solid color reproduction and a tad richer blacks. However, 'Last Samurai' still looked great via the component outs, and it certainly rivaled any film-based satellite HD broadcasts I've seen.

Comparing the HD-DVD and the HBO-HD broadcast showing of 'Samurai' also produced some interesting results. I'd give the edge to HD-DVD, because of one considerable advantage over satellite and terrestrial HD broadcasts -- no macroblocking. I've subscribed to DirecTV, Voom and now Dish Network, and even the best of their HD broadcasts suffer from some sort of pixelization, mostly on fast motion. I noticed absolutely no macroblocking on the HD-DVD 'Samurai,' even in the most fast-cut, complex scenes. That's a real boon for those who want the best quality money can buy, especially when it comes to today's whiplash-paced big-budget action blockbusters. Aside from that issue, the HD-DVD and HBO-HD versions of 'Samurai' were pretty comparable, though I thought the HD-DVD also boasted more visible detail in the darkest areas of the picture, and slightly improved sharpness and contrast overall. Purely in terms of picture clarity and resolution (setting the macroblocking issue aside), I'd rate the HD-DVD as about a 5 to 10 percent improvement over the satellite HD broadcast.

Finally, comparing the HD-DVD versus standard DVD, the victor was clear. HD-DVD is simply sharper, clearer, more vibrant and more real. I was also a bit surprised that the HD-DVD of 'Samurai' sported stronger colors and better blacks than the standard DVD version, even though they appear minted from the same master. The considerably increased detail of the HD-DVD format also gives the image a better sense of contrast, as distinct picture elements like the glint on a blade or fine clothing textures now "pop" off the screen more, as opposing areas of light and dark are now more pronounced. (I will take a closer look at HD-DVD quality versus upconverted SD DVD in my review of Universal's 'Serenity.')

However, the difference between the HD-DVD and standard DVD does narrow a bit if you output the standard DVD via the Toshiba's HDMI out, which can upconvert standard DVD's 480p native resolution to 720p or all the way up to 1080i (there is no 1080p option on these first-gen players). Watching the standard DVD of 'Samurai' upconverted to 1080i, I'd say it was about 15 percent better than the 480p out, with a sharper picture and surprisingly richer blacks (which I wasn't expecting). Though it still can't compare to the HD-DVD of 'Samurai,' I'd venture to guess that on smaller screen sizes, average consumers might not be wowed as much by the HD-DVD. Certainly, if these new next-gen high-def DVD formats are going to win over the mainstream, they need to be seen on larger screen sizes. When compared to upconverted standard DVD on a small screen, they just don't deliver the same quantum leap in quality that standard DVD delivered over VHS.

The Audio: Rating the Sound

Here's where the HD-DVD format hasn't quite reached its full potential. Though Warner's 'Phantom of the Opera' release contains the first soundtrack encoded in Dolby's proprietary TrueHD lossless compression format -- which promises to deliver up to 7.1 channels of surround sound equivalent in quality to the original studio master -- 'Samurai' boasts a more standard 5.1 surround track, albeit one encoded in Dolby Digital Plus.

In brief, Dolby Digital Plus is a new, improved stopgap between TrueHD and the 5.1 Dolby tracks typical of SD DVD. Built on Dolby Digital, DD+ is the multichannel audio standard for the next-gen HD disc formats and HD broadcasts, delivering higher bitrates and sampling rates, and it can "matrix in" an additional two channels of surround information into an existing 5.1-channel encode. In other words, play a Dolby Digital Plus soundtrack via your existing Dolby Digital 5.1 receiver, and you get a standard 5.1 soundtrack. Run it through a Dolby Digital Plus receiver, and you can extract the full 7.1 experience.

Unfortunately, no DD+ or TrueHD receivers are on the market yet. So the only way to get the benefits of the DD+ format's higher bitrate and improved sound quality on the first-gen Toshiba decks is a bit complicated. The players will not output DD+ via the HDMI, coaxial or optical audio outputs; they simply extract the core Dolby Digital 5.1 audio, which should be compatible with any receiver. You can, however, run the player's dedicated multichannel analog outs -- that's one cable for each channel -- to your receiver, which will allow you to extract the extra two surround channels (provided your receiver supports a 7.1 speaker setup).
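To untangle that, here is a toy model of the signal paths just described (the function and labels are our own, purely for illustration -- not anything from Toshiba's documentation):

```python
# Toy model of the first-gen players' audio behavior described above.
def audio_delivered(output, receiver_supports_7_1=False):
    if output in ("hdmi", "coaxial", "optical"):
        # Digital outs fall back to the core Dolby Digital track.
        return "Dolby Digital 5.1 core"
    if output == "analog_multichannel":
        # Only the dedicated analog outs carry the decoded DD+ track.
        if receiver_supports_7_1:
            return "Dolby Digital Plus, full 7.1"
        return "Dolby Digital Plus, limited to 5.1"
    raise ValueError(f"unknown output: {output!r}")

print(audio_delivered("optical"))                    # -> 5.1 core
print(audio_delivered("analog_multichannel", True))  # -> full 7.1
```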

What this all means for you non-tech types is that, for the most part, until TrueHD and Dolby Digital Plus receivers hit the market, you're pretty much going to get the same Dolby Digital 5.1 soundtracks you're getting on SD DVD. Which is not a bad thing, at least in the case of 'Last Samurai.' It's a great, sometimes stupendous mix. There is strong dynamic range throughout the entire film, enlivened by very powerful, deep bass, particularly during combat sequences. Dialogue and the fine score have a pleasing tonal quality, still sounding warm and human despite the often bombastic sound effects. And the surround channels are used to often powerful effect from the very beginning, from discrete sounds and atmospheric effects to the gothic rain, thunder and gunshots that grow in prominence in the second half of the movie.

Audio is still the one area where HD-DVD cannot really shine just yet. But if the picture quality is any indication, once receiver technology catches up and more studios begin encoding discs with TrueHD, HD-DVD should deliver the best audio yet heard in home theater.

The Supplements: Digging Into the Good Stuff

As expected, Warner has ported over all of the supplements from the standard DVD release of 'The Last Samurai,' which means none of the extras here will surprise anyone who already has the previous DVD. Still, it's better than the alternative -- the studio could easily have gone no-frills for its first HD-DVD offerings, and charged a premium for the high-quality video and audio, sans extras.

Before I dive into the goodies, a note on how they are presented. As these are the same extras as on the SD DVD release, they were not created with HD in mind. All the video-based supplements are encoded at 480p and output as pillarboxed 4:3 video. The quality is quite good, though the film clips, which are letterboxed within the 4:3 pillarboxed frame, certainly look crappy compared to the film in HD-DVD. Also notable: thanks to HD-DVD's considerable storage capacity, both the complete feature and all the SD DVD extras -- which required two discs on the old release -- are now easy to access on this single dual-layer HD-30 disc. The number of these mega-disc special editions, if not eliminated completely, will certainly be greatly reduced.

Now, the extras. Once again we have an audio commentary with director Ed Zwick. Certainly, he is an articulate, knowledgeable guy, though, to be honest, I have not been a fan of most of his commentaries. He can come off dry and humorless, and this track is no exception. But if you have 132 minutes to spare and want to know every last production detail -- from staging the fight scenes to working with a star of Cruise's magnitude, who, as Zwick makes clear, has his own input into every aspect of the film -- this commentary won't disappoint.

The video-based supplements kick off with the 21-minute History Channel documentary "History vs. Hollywood: The Last Samurai," which delves into the historical fact behind 'Samurai's fiction. Unfortunately, I didn't think it was as strong as most of the History Channel's other movie-inspired specials, coming off as largely promotional with little in the way of historical grit. Maybe Cruise edited this one, too.

Next up are three longer featurettes that focus on the filmmaking duo of Zwick and Cruise. "Edward Zwick: A Director's Video Journal" runs 26 minutes and is an assemblage of on-set diary footage narrated by Zwick and Cruise; the 12-minute "Tom Cruise: A Warrior's Journey" tracks Cruise as he undergoes sometimes grueling preparation to convincingly portray a Samurai warrior; and the 17-minute "Making an Epic: A Conversation with Edward Zwick" is just that -- a one-on-one interview with the director, which is largely redundant with the audio commentary.

Ironically, it is the series of three short 5- to 7-minute featurettes that round out the set that I enjoyed the most, if only because they are free of the Zwick-Cruise grandstanding and back-patting. "A World of Detail: Production Design with Lilly Kilvert" shows us how the sets were built; "Silk and Armor: Costume Design with Ngila Dickson" examines the costuming; and "Imperial Army Basic Training: From Soldier to Samurai: The Weapons" dissects how the film's battalion of extras were drilled in combat and tactical maneuvers.

Finally, a couple of deleted scenes run about 5 minutes with optional Zwick commentary, but they are nothing thrilling. There are also a couple of minutes of promo footage from the film's twin premieres in Tokyo and Kyoto, plus the theatrical trailer, which unfortunately is not presented in full 1080p HD (only 480p SD video).

HD Bonus Content: Any Exclusive Goodies in There?

Sadly, no. Aside from a couple of just-announced upcoming Universal HD-DVD titles, all of the studios' initial HD-DVD offerings lack any extras beyond what you'll find on their SD DVD siblings. I'm sure that will change in the future: if both the HD-DVD and Blu-ray formats are going to succeed with the mainstream, they have to deliver more than just better video and audio -- they need something new on the extras front, too.

However, even HD bonus content-free discs like 'Samurai' have a few nifty tricks up their sleeve -- mainly the menu system, which, though more sluggish than I expected, allows you to perform previously static SD DVD functions like chapter search and special feature access "live" over the movie. Meaning, if you hit menu while the movie is playing, you don't have to jump over to the menu and then back; instead, an overlay appears over the film itself, and you can make selections dynamically. Kinda cool, and virtually seamless. I sense this is only the tip of the iceberg in terms of interactivity on HD-DVD, so I hope more studios tinker with these kinds of functions on future releases.

Funnily enough, the best "HD bonus feature" on 'Last Samurai' is Warner's new HD-DVD teaser promo, which shows clips from such upcoming HD-DVD titles as 'Batman Begins,' 'The Matrix' and 'Harry Potter and the Goblet of Fire,' all in glorious 1080p. They look great, and only get me salivating for the next round of Warner releases. Bring 'em on!

Final Thoughts

So that wraps up my first impressions of HD-DVD -- it looks great, but is it great enough? Admittedly, I've been living with HD for three years now. Being in L.A., I've been able to enjoy HD terrestrial and satellite broadcasts for quite a while, and I also own a D-VHS deck. So HD-DVD is not so much something new as an extension of what I've already been enjoying. Certainly, it delivers on the bottom line -- it matches and, at times, exceeds the best HD I've seen. But will it be enough to blow people away after all the hype? Purely on video quality, I hate to say it, but probably not. Unless you're coming from standard DVD on a 32-inch or smaller TV and seeing HD-DVD on a big screen, you may wonder what all the fuss is about. But if you have a dedicated home theater that can take full advantage of HD-DVD's capabilities, you will certainly be getting the best pre-recorded consumer video money can buy.
http://hddvd.highdefdigest.com/lastsamurai.html





The Phantom of the Opera (HD-DVD)
Peter M. Bracke

The 'Phantom' Menace

They say timing is everything in Hollywood, and that was certainly true for the 2004 film adaptation of the Broadway smash 'The Phantom of the Opera.' First produced for the stage in 1986, it took nearly twenty years for the Andrew Lloyd Webber musical extravaganza to hit the big screen, long after the play had first captured the cultural zeitgeist. Produced on a lavish budget of over $70 million, the film barely scraped up $50 million in domestic box office receipts, disappearing to video stores as quickly as it faded from the public's consciousness.

Perhaps things would have been different had 'Phantom' hit theaters a decade ago, when its sensibilities were a bit more in vogue. On stage, it was one of Broadway's hottest tickets in the '80s and early '90s, but its appeal has since been diminished by weak touring company productions, and its sales usurped by far hipper, post-modern stage smashes such as 'Hairspray,' 'Avenue Q' and the unstoppable kitsch of 'Mamma Mia.' 'Phantom' was already a dated anachronism by the dawn of the new millennium, which made a big-screen version about as commercially appealing as a hip-hop 'Cats.'

For those unfamiliar with the story of 'The Phantom of the Opera,' it has been told and retold so many times that it almost seems like a fairy tale rather than a story based on the famous book by Gaston Leroux. The Phantom (here played by Gerard Butler) is a disfigured musical genius who lives hidden deep within the bowels of the Paris Opera House. When a young musical sensation, Christine (Emmy Rossum), becomes the Opera House's new star, the Phantom is bewitched. She becomes his unwitting protege, and he soon terrorizes the opera company to woo the love of his life. Needless to say, romantic tragedy ensues.

Sticking more or less faithfully to both the original source material and the Lloyd Webber stage musical, the movie version of 'Phantom' is a handsome, earnest, lively film. Director Joel ('Lost Boys,' 'St. Elmo's Fire') Schumacher would not seem the most likely candidate to helm a big-screen version of 'Phantom,' but Schumacher has never been an ironic filmmaker. He plays the material absolutely straight, which probably doomed the film commercially, but it gives the picture a timeless feel lacking in far more clever, if instantly dated, modern musicals like 'Moulin Rouge' and 'Chicago.'

Indeed, as I watched 'Phantom,' I had to constantly remind myself what decade I was in. Thirty minutes into the film, when Christine takes her famous descent into the bowels of the Opera House with the Phantom by way of gondola, I felt like I had stepped into some weird musical mishmash of a big-hair '80s Heart video and the Pirates of the Caribbean ride at Disneyland. Funny, campy, cringe-inducing yet strangely endearing all at once, this 'Phantom' absolutely refuses to so much as bat an eyelash in the direction of hip irony. That makes it cheesy in the extreme, but oddly captivating -- though perhaps not in the way intended.

Admittedly, I am probably not the target audience for the sappy sentiment of 'Phantom.' Nor am I much of a fan of Webber's musical style, a sort of pop-opera mash-up that occasionally produces a nice tune (the title theme of 'Phantom' has a great, devilish bass line it is impossible not to tap your foot to), but more often than not revels in Disney-esque blandness. But despite all that, even I found myself roped in by the end of the film's 142 minutes. Perhaps that is more a tribute to the power of Leroux's original creation than to Schumacher's penchant for overblown theatrics, but I genuinely cared what happened to Christine and the Phantom. I can't say I shed any real tears by the time of the film's predictable, tragic climax (really, do I need to tell you what happens?), but it did make me kinda glad that earnest, sincere romantic films are still being made in Hollywood. Even if no one is going to see them anymore.

The Video: Sizing Up the Picture

My surprise enjoyment of 'Phantom of the Opera' continued with the film's impressive picture. I would even venture to say that of all the initial HD-DVD titles I have reviewed thus far, 'Phantom' has produced some of the most striking images. I don't know if I've ever described a video transfer as "delicate" before, but that is exactly the trick 'Phantom' pulls off here, perfectly straddling the line between technical razzle-dazzle and a realistic video image.

A lavish, sumptuously-mounted film, 'Phantom' is certainly overflowing with color, texture and subtle lighting, which quite frankly got lost even on the fine-looking standard DVD released last year. But not here. My direct comparison between the HD-DVD and standard DVDs of 'Phantom' was no contest -- the HD-DVD blew it out of the water. On a good home theater setup, it would be hard for anyone to say HD-DVD doesn't offer a considerable improvement over standard DVD, at least with 'Phantom' as your demo material.

Based on the usual approach to transfers of films such as this, I expected 'Phantom's vibrant reds, oranges and midnight blues to be pumped up to oblivion, with all the characters looking not so much lit as painted with day-glo colors. Instead, I was pleasantly surprised by how much detail there is on the HD-DVD transfer. From the fine textures of the actors' skin in close-up to the most minute costume design details, I was often blown away by how terrific the image looked. Depth is incredibly three-dimensional in just about every scene, so much so that I'd say there are select shots here that rival the best video I've ever seen on any consumer format.

What also pushes 'Phantom' into the realm of true HD demo material is that, unlike Warner's other initial HD-DVD launch title, 'The Last Samurai,' it is almost completely lacking in film grain. As good as 'Samurai' looks, it was shot using the Super 35 process, which introduces a bit of grain that often comes across as noise on video. But 'Phantom's images are so smooth and free of apparent imperfections that I almost couldn't believe it wasn't some new sort of CGI enhancement. (Maybe it is?) However they did it, this film looks absolutely smashing, and is certainly worth watching just to see how good an HD-DVD disc can look.
http://hddvd.highdefdigest.com/phant...opera2004.html





1080P — Time for a Reality Check!
Peter Putman

Thinking about buying a new 1080p rear-projection TV, front projector, or LCD TV? You might want to put your credit card back in your wallet after you read this.

It’s obvious that the buzzword in consumer TV technology this year is “1080p”. Several manufacturers are showing and shipping 1080p DLP and LCoS rear-projection TVs. We’ve seen RPTVs and front projectors with 1920x1080 polysilicon LCD panels at CES, NAB, and InfoComm. And the trickle of large LCD TVs and monitors with 1920x1080 resolution is turning into a flood.

To get your attention, marketers are referring to 1080p as “full spec” HD or “true” HD, a phrase also used by more than one HD veteran in the broadcast industry. We’re hearing about “1080p content” coming out of Hollywood, from broadcasters, from cable systems, and from direct broadcast satellite services.

The budding format war between Blu-ray and HD DVD for the next generation of high definition DVD players promises the same thing — 1080p content at high bit rates, finally realizing the full potential of HDTV.

STOP!

Enough of this nonsense. It’s time to set the record straight and clear the air about what 1080p is and isn’t.

First off, there is no 1080p HDTV transmission format. There is a 1080p/24 production format in wide use for prime time TV shows and some feature films. But these programs must be converted to 1080i/30 (that’s interlaced, not progressive scan) before airing on any terrestrial, satellite, or cable TV network.

What’s that, you say? That 1080p/24 content could be broadcast as a digital signal? True, except that none of the consumer HDTV sets out there would support the non-standard horizontal scan rate required. And you sure wouldn’t want to watch 24Hz video for any length of time; the flicker would drive you crazy after a few seconds.

No, you’d need to have your TV refresh images at either a 2x (48Hz) or 3x (72Hz) frame rate, neither of which is supported by most HDTVs. If the HDTV has a computer (PC) input, that might work. But if you are receiving the signals off-air or using a DVI HDCP or HDMI connection, you’ll be outta luck.

What about live HDTV? That is captured, edited, and broadcast as 1080i/30. No exceptions. At present, there are no off-the-shelf broadcast cameras that can handle 1080p/60, a true progressive format with fast picture refresh rates. It’s just too much digital data to handle and requires way too much bandwidth or severe MPEG compression. (Consider that uncompressed 1920x1080i requires about 1.3 gigabits per second to move around. 1080p/60 would double that data rate.)
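That 1.3-gigabit figure checks out with simple arithmetic. Assuming 10-bit 4:2:2 sampling (an average of 20 bits per pixel) and counting active video only, a quick sketch:

```python
# Uncompressed HD bandwidth, active video only, assuming 10-bit 4:2:2
# sampling (20 bits per pixel on average). The full serial interface
# rate is a bit higher once blanking intervals are included.
width, height, bits_per_pixel = 1920, 1080, 20
for label, fps in [("1080i/30", 30), ("1080p/60", 60)]:
    gbps = width * height * fps * bits_per_pixel / 1e9
    print(f"{label}: ~{gbps:.2f} Gbit/s")
# 1080i/30: ~1.24 Gbit/s    1080p/60: ~2.49 Gbit/s
```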

How about Blu-ray and HD-DVD? If either format is used to store and play back live HD content, it will have to be 1920x1080i (interlaced again) to be compatible with the bulk of consumer TVs. And any progressive-scan content will also have to be interlaced for viewing on the majority of HDTV sets.

Here’s why. To cut manufacturing costs, most HDTV sets run their horizontal scan at a constant 33.8 kHz, which is what’s needed for 1080i (or 540p). 1080p scans pictures twice as fast, at 67.6 kHz. But most of today’s HDTVs don’t even support external 720p signal sources, which require a higher 44.9 kHz scan rate.
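Those scan rates fall straight out of the line counts: total lines per frame (active plus blanking) times the frame rate. A small illustrative calculation (the quoted figures differ slightly because real-world rates use the NTSC-related 59.94 Hz timings):

```python
# Horizontal scan rate = total lines per frame (active + blanking)
# times frames per second.
formats = [
    ("1080i/30", 1125, 30),   # SMPTE 274M: 1125 total lines
    ("720p/60",  750,  60),   # SMPTE 296M: 750 total lines
    ("1080p/60", 1125, 60),
]
for name, total_lines, fps in formats:
    print(f"{name}: {total_lines * fps / 1000:.1f} kHz")
# 1080i/30: 33.8 kHz   720p/60: 45.0 kHz   1080p/60: 67.5 kHz
```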

In the consumer TV business today, it’s all about cutting prices and moving as many sets as possible through big distribution channels. So, I ask you: Why would HDTV manufacturers want to add to the price of their sets by supporting 1080p/60, a format that no HDTV network uses?

Here’s something else to think about. The leading manufacturer of LCD TVs does not support the playback of 1080p content on its own 1920x1080 products, whether the signal is in the YPbPr component or RGB format. Only the industrial monitor version of this same LCD HDTV can accept a 1920x1080p RGB signal.

Now, don’t blame HDTV manufacturers for this oversight. They are only supporting the 1080 format in actual use, 1920x1080i, a legacy digital format that has its roots in the older Japanese MUSE analog HDTV format of the 1980s. That’s one big reason that 1080i has remained as a production and transmission format.

It gets worse. All kinds of compromises are made in the acquisition, production, and transmission of 1080i content, from cameras with less than full resolution in their sensors and reduced sampling of luminance and chrominance to excessive MPEG compression of the signal as it travels from antenna, dish, or cable to your TV.

But that’s not all. To show a 1080i signal, many consumer HDTVs do the conversion from interlaced to progressive scan using an economical, “quickie” approach that throws away half the vertical resolution in the 1080i image. The resulting 540p image is fine for CRT HDTV sets, which can’t show all that much detail to begin with. And 540p is not too difficult to scale up to 720p.

But a 540p signal played back on a 1080p display doesn’t cut the mustard. You will quickly see the loss in resolution, not to mention motion and interline picture artifacts. Add to that other garbage such as mosquito noise and macroblocking, and you’ve got a pretty sorry-looking signal on your new big screen 1080p TV.

Oops! Almost forgot: that same 1080p TV may not have full horizontal pixel resolution if it uses 1080p DLP technology. The digital micromirror devices used in these TVs have 960x1080 native resolution, using a technique known as “wobbulation” to refresh two sets of 960 horizontal pixels at high speed to provide the 1920x1080 image. It’s a “cost thing” again. (Let’s hope these sets don’t employ the 540p conversion trick as well!)

To summarize: There are no fast refresh (30Hz or 60Hz) 1080p production or transmission formats in use, nor are there any looming in the near future — even on the new HD-DVD and Blu-ray formats. The bandwidth is barely there for 1080i channels, and it’s probably just as well, because most TVs wouldn’t support 1080p/60 anyway — they’d just convert those signals to 1080i or 540p before you saw them.

The 1280x720 progressive-scan HDTV format, which can be captured at full resolution using existing broadcast cameras and survives MPEG-2 compression better than 1080i, doesn’t make it to most HDTV screens without first being altered to 1080i or 540p in a set-top box or in the HDTV set itself. So what chance would a 1080p signal have?

Still think you’ve just gotta have that new 1080p RPTV? Wait until you see what standard definition analog TV and digital cable look like on it…
http://www.hdtvexpert.com/pages_b/reality.html





Why You Should Boycott Blu-Ray And HD-DVD

This page details all the things that are wrong with the next generation DVD players, and why you don't want any part of it. If you purchase a Blu-ray or HD-DVD player to watch high definition movies, you are essentially saying that you are perfectly ok with everything on this page, and that's no good. Therefore, I ask you to vote with your wallet and boycott Blu-ray and HD-DVD.

If you've ever watched HDTV, you know what a treat it is. At 5 times the resolution of normal television, it looks fantastic. I would love to be able to purchase or rent HD movies to watch at home. But I just can't bring myself to do it, for the reasons listed below. This is all very unfortunate. They have lost me as a customer. I hope to persuade you as well.

There are a lot of acronyms on this page, so here are some quick definitions for you:

DRM - Digital Restrictions Management - technology to restrict what you can do with media you purchase
AACS - Advanced Access Content System - the DRM infection used for both Blu-ray and HD-DVD
BD+ - an addition to AACS for Blu-ray discs, that provides additional restrictions to what you can do
MMC - Mandatory Managed Copy - a theoretical way for you to make a legal copy of a movie
HDCP - High-bandwidth Digital Content Protection - Encryption of data over digital connections
HDMI - High Definition Multimedia Interface - A digital connection found on most new HDTV's, all HDCP compliant
DVI - Digital Visual Interface - Precursor to HDMI, found on many older HDTV's. However, many DVI connections are not HDCP compliant, making them worthless for Blu-ray and HD-DVD.
MPAA - Motion Picture Ass. of America - trade organization representing the major movie companies
RIAA - Recording Industry Ass. of America - trade organization representing the major music companies

Reasons to be outraged by Hollywood
If your HDTV does not have an HDMI port, or an HDCP-compliant DVI port, you won't be able to watch HD movies in high definition. Bad news for the 3 million people in the US who don't have digital HDTV's and will only be able to connect over analog (component) cables - your movies will be downsampled to 1/4 their resolution, making them essentially the same as a standard DVD. The studios are understandably scared of an open, high quality, digital video interface, so they are insisting that your TV supports digital encryption to fully enjoy its new movies. This helps them to sleep better at night, but realistically only the honest people will be inconvenienced. Someone will likely figure out a way around it, given enough time. Some studios have said they won't enable this restriction for their initial movie launches, but remember they can enable it at any time in the future.
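For what that downsampling actually amounts to in numbers: the commonly cited ICT down-res target is 960x540 (each dimension halved), which works out as follows. A quick illustrative check:

```python
# ICT-style down-res halves each dimension, quartering the pixel count.
full = 1920 * 1080                 # 2,073,600 pixels
down = (1920 // 2) * (1080 // 2)   # 960x540 = 518,400 pixels
print(full / down)                 # -> 4.0, i.e. "1/4 their resolution"
print(down / (720 * 480))          # -> 1.5, i.e. roughly DVD territory
```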

On a similar note, you will also have problems playing these movies on your computer with an internal Blu-ray or HD-DVD drive. If you don't upgrade to an HDCP-compliant video card and monitor, you're screwed. An HDCP-compatible video card is different from a compliant one, and will not work.

AACS means that Blu-ray and HD-DVD will never be compatible with free software, affecting nearly everyone that wants to view these movies on their computer but isn't running Windows or Mac OS X. While this is a minority of computer users, they should not be ignored. Some might say history is doomed to repeat itself.

Mandatory Managed Copy (MMC) theoretically allows things such as making legal backups and streaming content from one part of your house to another, but the studios have the option of charging you money to do that. The first batch of HD players won't even support MMC. As well, all aspects of MMC will require your player to be connected to the internet, which isn't inherently bad, but is certainly open for abuse. Besides, what if you don't have an available internet connection close to your home theater? What if you don't have broadband? Answer: Too bad. More details re: MMC can be had in this insightful interview with an HD-DVD rep.

It's amazing that MMC even exists, considering this. Choice quote: "Even if CDs do become damaged, replacements are readily available at affordable prices". Translation: please purchase another copy of content you have already paid for, thank you.

The MPAA and RIAA think that DRM is more important than human life. Wow.

"Hacking" your player, for example to remove the region coding, or playing a bootlegged disc, may lead your player to self destruct. (Only applies to Blu-ray and BD+ from what I can gather).

More about internet connections: the MPAA originally wanted that to be a requirement just to play these movies. They have since changed their mind.

They also originally considered having each disc being playable by only one player, meaning that if you played a new movie in your player, your friend couldn't watch the same disc in his player. Again they changed their mind, but that it was even considered is pretty shocking.

Other reasons you don't need HD-DVD or Blu-ray
The jump from VHS to DVD was dramatic and obvious -- superior video quality, digital surround sound, a non-degrading storage format, multiple audio tracks, bonus features, etc. The jump from DVD to the next generation provides no benefit other than higher resolution -- which, to be fair, is a great reason to want the upgrade, but that's it! Plus, DVDs still look pretty damn nice to most people. Don't fall for the "better sound" hype either. 5.1-channel Dolby or DTS is pretty much the best it's going to get. Do you really want more speakers behind you than in front of you?

Blu-ray vs. HD-DVD will be a format war, leaving both consumers and retailers very frustrated. Do you want to gamble by investing thousands of dollars in a technology that may not be around in a few years? Some studios will only release their movies on one format or the other (Sony Pictures obviously will only do Blu-ray), which means that if you want access to all possible movies, you will either have to buy both players or get a dual-format player. Chances are both formats will not be very successful, because of the insane costs and the fact that most people do not own HDTVs. Besides, the future is probably video on demand, not video on disc. Even Bill Gates agrees.

The players and the media are going to be expensive. HD-DVD players will run $500, Blu-ray will be $1,000, and those are minimum prices. Most of the movies will retail for over $30. For computer storage, blank media will also cost around $30 minimum. Surely these costs will drop over time, but that combined with the format war makes it obvious that you should wait a bit before jumping on.

The biggest lie of all is that we even need these new technologies to have HD video on a disc. DVD video has been around for almost 10 years now, and since then vastly superior video compression technologies have been introduced, namely MPEG-4 and all its variants (H.264, DivX, XviD, etc.). These compression formats are absolutely amazing in terms of size versus quality. A hi-def movie in any of these formats could easily fit onto a normal DVD, let alone a dual-layer one (see the back-of-the-envelope numbers below). The only problem is that you can't really 'update' your existing player. In the consumers' best interest, manufacturers would release new DVD players that not only supported these newer formats, but also had the ability to be upgraded for future technologies. We wouldn't need these expensive blue lasers to fit more data on a disc. Unfortunately, this solution doesn't line the pockets of shareholders and executives, so it is unlikely to happen.
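As a sanity check on that claim, here is some rough arithmetic for a two-hour movie (the bitrates are illustrative assumptions, not measurements):

```python
# Stream size for a 2-hour movie at an assumed H.264 video bitrate
# plus a typical 640 kbps Dolby Digital 5.1 audio track.
def size_gb(video_kbps, audio_kbps=640, minutes=120):
    return (video_kbps + audio_kbps) * 1000 * minutes * 60 / 8 / 1e9

print(f"{size_gb(4000):.1f} GB")  # ~4.2 GB: fits a single-layer DVD (4.7 GB)
print(f"{size_gb(7000):.1f} GB")  # ~6.9 GB: fits a dual-layer DVD (8.5 GB)
```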

The public is not ready for a new format already. A lot of people have spent a lot of money building their DVD collections, a format that only became mainstream about 5 years ago. Do you really want to go out and replace all of those movies? These new players will be backwards compatible with your old movies for sure, but if you just blew a grand on a shiny new player, you're going to want to watch your favorite movies in all their HD glory, right? Haven't you ever heard someone say, "Well, looks like now I have to buy another copy of the White Album"?
http://fuckbluray.com/boycott





Making and Breaking HDCP Handshakes
Ed Felten

I wrote yesterday about the HDCP/HDMI technology that Hollywood wants to use to restrict the availability of very high-def TV content. Today I want to go under the hood, explaining how the key part of HDCP, the handshake, works. I’ll leave out some mathematical niceties to simplify the explanation; full details are in a 2001 paper by Crosby et al.

Suppose you connect an HDMI-compliant next-gen DVD player to an HDMI-compliant TV, and you try to play a disc. Before sending its highest-res digital video to the TV, the player will insist on doing an HDCP handshake. The purpose of the handshake is for the two devices to authenticate each other, that is, to verify that the other device is an authorized HDCP device, and to compute a secret key, known to both devices, that can be used to encrypt the video as it is passed across the HDMI cable.

Every new HDCP device is given two things: a secret vector, and an addition rule. The secret vector is a sequence of 40 secret numbers that the device is not supposed to reveal to anybody. The addition rule, which is not a secret, describes a way of adding up numbers selected from a vector. Both the secret vector and the addition rule are assigned by HDCP’s central authority. (I like to imagine that the central authority occupies an undersea command center worthy of Doctor Evil, but it’s probably just a nondescript office suite in Burbank.)

An example will help to make this clear. In the example, we’ll save space by pretending that the vectors have four secret numbers rather than forty, but the idea will be the same. Let’s say the central authority issues the following values:
          secret vector       addition rule
Alice     (26, 19, 12, 7)     [1]+[2]
Bob       (13, 13, 22, 5)     [2]+[4]
Charlie   (22, 16, 5, 19)     [1]+[3]
Diane     (10, 21, 11, 14)    [2]+[3]

Suppose Alice and Bob want to do a handshake. Here’s how it works. First, Alice and Bob send each other their addition rules. Then, Alice applies Bob’s addition rule to her vector. Bob’s addition rule is “[2]+[4]”, which means that Alice should take the second and fourth elements of her secret vector and add them together. Alice adds 19+7, and gets 26. In the same way, Bob applies Alice’s addition rule to his secret vector — he adds 13+13, and gets 26. (In real life, the numbers are much bigger — about 17 digits.)

There are two things to notice about this process. First, in order to do it, you need to know either Alice’s or Bob’s secret vector. This means that Alice and Bob are the only ones who will know the result. Second, Alice and Bob both got the same answer: 26. This wasn’t a coincidence. There’s a special mathematical recipe that the central authority uses in generating the secret vectors to ensure that the two parties to any legitimate handshake will always get the same answer.

Now both Alice and Bob have a secret value — a secret key — that only they know. They can use the key to authenticate each other, and to encrypt messages to each other.
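The whole exchange is simple enough to mimic in a few lines of code. Here is a toy re-creation of the example above (our own sketch: real HDCP uses 40-element vectors, much larger numbers, and modular arithmetic, all omitted here):

```python
# Toy HDCP-style handshake using the example vectors from the article.
DEVICES = {
    "Alice":   {"secret": (26, 19, 12, 7),  "rule": (1, 2)},
    "Bob":     {"secret": (13, 13, 22, 5),  "rule": (2, 4)},
    "Charlie": {"secret": (22, 16, 5, 19),  "rule": (1, 3)},
    "Diane":   {"secret": (10, 21, 11, 14), "rule": (2, 3)},
}

def apply_rule(secret, rule):
    """Add the elements of `secret` selected by a (1-based) addition rule."""
    return sum(secret[i - 1] for i in rule)

def handshake(a, b):
    """Each side applies the *other* side's public rule to its own secret
    vector; properly issued vectors make the two results agree."""
    key_a = apply_rule(DEVICES[a]["secret"], DEVICES[b]["rule"])
    key_b = apply_rule(DEVICES[b]["secret"], DEVICES[a]["rule"])
    assert key_a == key_b, "central authority issued inconsistent vectors"
    return key_a

print(handshake("Alice", "Bob"))  # -> 26, as in the article
```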

This sounds pretty cool. But it has a very large problem: if any four devices conspire, they can break the security of the system.

To see how, let’s do an example. Suppose that Alice, Bob, Charlie, and Diane conspire, and that the conspiracy wants to figure out the secret vector of some innocent victim, Ed. Ed’s addition rule is “[1]+[4]”, and his secret vector is, of course, a secret.

The conspirators start out by saying that Ed’s secret vector is (x1, x2, x3, x4), where all of the x’s are unknown. They want to figure out the values of the x’s — then they’ll know Ed’s secret vector. Alice starts out by imagining a handshake with Ed. In this imaginary handshake, Ed will apply Alice’s addition rule ([1]+[2]) to his own secret vector, yielding x1+x2. Alice will apply Ed’s addition rule to her own secret vector, yielding 26+7, or 33. She knows that the two results will be equal, as in any handshake, which gives her the following equation:

x1 + x2 = 33

Bob, Charlie, and Diane each do the same thing, imagining a handshake with Ed, and computing Ed’s result (a sum of some of the x’s), and their own result (a definite number), then setting the two results equal to each other. This yields three more equations:

x2 + x4 = 18
x1 + x3 = 41
x2 + x3 = 24

That makes four equations in four unknowns. Whipping out their algebra textbooks, the conspiracy solves the four equations, to determine that

x1 = 25
x2 = 8
x3 = 16
x4 = 10

Now they know Ed’s secret vector, and can proceed to impersonate him at will. They can do this to any person (or device) they like. And of course Ed doesn’t have to be a real person. They can dream up an imaginary person (or device) and cook up a workable secret vector for it. In short, they can use this basic method to do absolutely anything that the central authority can do.
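In code, the attack is nothing more than solving a small linear system. A sketch of the conspiracy-of-four example, using the four equations above:

```python
# The conspiracy-of-four attack is just linear algebra: one equation
# per conspirator, one unknown per element of Ed's secret vector.
import numpy as np

# Rows select which of Ed's unknowns appear (Alice [1]+[2], Bob [2]+[4],
# Charlie [1]+[3], Diane [2]+[3]); the right-hand side holds each
# conspirator's own sum under Ed's rule.
A = np.array([[1, 1, 0, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 1, 0]], dtype=float)
b = np.array([33, 18, 41, 24], dtype=float)

x = np.linalg.solve(A, b)
print(x)  # -> [25. 8. 16. 10.], Ed's secret vector, matching the article
```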

In the real system, where the secret vectors have forty entries, not four, it takes a conspiracy of about forty devices, with known private vectors, to break HDCP completely. But that is eminently doable, and it’s only a matter of time before someone does it. I’ll talk next time about the implications of that fact.

[Correction (April 15): I changed Diane’s secret vector and addition rule to fix an error in the conspiracy-of-four example. Thanks to Matt Mastracci for pointing out the problem.]
http://www.freedom-to-tinker.com/?p=1005





Sun DRM Finds A Home In Korean IPTV Pilot
Faultline

Sun Microsystems may already have found the first customer for its DReaM (DRM Everywhere Available) open source DRM in a Korean IPTV system -- even though the technology is not due to be completed for at least another 12 months.

This was revealed by the director of conditional access at Korean company Alticast, speaking at a Sun Microsystems event at the end of March. Alticast revealed plans to build the DReaM conditional access system into an IPTV pilot, and also to build a commercial product based on it for implementation throughout the Far East. Sun says it is still between nine and 15 months away from a product, but since this is an open source process, code already exists for most of the system.

This week Sun released the source code for two components of DReaM: DReaM-CAS (Conditional Access System) and DReaM-MMI (Mother May I), the underlying mechanism of always asking a central resource for permission to access content. In papers put out this week, Sun describes both of these processes. DReaM-CAS, or D-CAS, currently manages access only to content in the MPEG-2 format.

Sun told us in October that it plans to create a royalty-free, interoperable DRM technology, independent of any specific hardware or operating system, which focuses on the concept of a user being given access to content, rather than one specific device being authenticated. This is something that may come more easily to Sun, since it can rely on the Liberty Alliance initiative, which it was also behind, to allow a single copy of a person's identity to act as a trust source for other services, without having to reveal that identity to those services.

Sun kicked off into DRM with a European Eurescom project started in 2001 and reported on in 2003, funnily enough called OPERA, where it worked with DMD Secure, Exavio, SDC AG, T-Systems and some European operators. The inappropriate name (Opera is a browser company) came from InterOPERAbility of DRM technologies.

SDC and Sun built a system that was based on the Java SIM card found inside a mobile phone, and had Windows Media DRM authenticated with RealNetworks, and RealNetworks authenticated with SDC’s client and all of it using the Java based SIM for identity.

The iCOD TV system (internet Contents on Demand) is being built now, with phase one lasting until February 2007, by a combination of Korea Telecom, which is handling the network design including the QoS services; ETRI, which is handling MHP/OCAP-compatible standardized middleware; Sitec, making the set-top box; and Alticast, designing a downloadable conditional access system based on D-CAS, along with the EPG. It has been funded by the consortium and some Korean government money, with the aim of Korea developing its own stack of IPTV components.

The network design is supposed to offer fast channel zapping by using a new version of the Internet Group Management Protocol (IGMP), although that is really only responsible for a small part of the delay in a channel change. The system will use H.264 compression over an MPEG-2 transport stream.

Billing and purchasing of content is expected to work directly through the downloadable open source conditional access system. At present DReaM will only protect content through the network and is not yet ready to operate on stored video programming, although Sun is likely to address that prior to releasing products of its own.

The system uses AES encryption, requires a constantly open two-way IP connection, and sends encrypted keys for the content along with the content itself; these have to be decrypted using an existing public key. Entitlement messages are delivered out of band in a separate communication using the Mother May I protocols. D-CAS applications will generate the entitlement messages, and a Java smart card will be used for authentication, storing and managing viewers' rights and viewing history.
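Sun has not published D-CAS internals beyond those papers, but the pattern described -- content scrambled with a symmetric cipher, with the content key itself delivered encrypted -- is classic envelope encryption. A minimal sketch of that general pattern, using the pyca/cryptography library (our illustration, not Sun's actual code):

```python
# Envelope encryption sketch: AES-scrambled content, with the content
# key wrapped under the receiving device's public key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The set-top box holds a key pair (in D-CAS, tied to a Java smart card).
box_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
box_public = box_private.public_key()

# Head-end side: scramble a chunk of the transport stream with AES...
content_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
scrambled = AESGCM(content_key).encrypt(nonce, b"one MPEG-2 TS packet", None)

# ...and ship the content key wrapped under the box's public key.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = box_public.encrypt(content_key, oaep)

# Set-top side: unwrap the key, then descramble the content.
key = box_private.decrypt(wrapped_key, oaep)
print(AESGCM(key).decrypt(nonce, scrambled, None))  # b'one MPEG-2 TS packet'
```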

The EPG that Alticast is designing looks similar to the successful Microsoft-demonstrated Mosaic system, and will show multiple live TV streams on one screen for selecting programming, as well as offering picture in picture capability for watching two views of the same program simultaneously.

Currently in Korea NDS dominates the conditional access market, with Nagravision coming a distant third, while Gemstar dominates EPG systems.
http://www.theregister.co.uk/2006/04..._korean_pilot/





Reasons to Love Open-Source DRM
Eliot Van Buskirk

I acknowledge that the title of this column is strange. Aside from the fact that most savvy music listeners (justifiably) hate DRM, the very idea of using open-source software to enforce digital rights management runs counter to everything commonly assumed about the technology: that it needs to be secret, obscure, proprietary.

But open-source DRM is exactly what Sun Microsystems has proposed, with its DReaM initiative. Its goal is to promulgate an open-source architecture for digital rights management that would cut across devices, regardless of the manufacturer, and assign rights to individuals rather than gadgets.

Assuming it catches on, this would create a bizarro-world version of the copy-protection landscape. Today, consumers largely scorn DRM schemes in favor of unprotected MP3s ripped from CDs or downloaded off P2P networks. One reason is that iTunes-purchased music plays only on iPods, and subscription files from services such as Rhapsody play only on Microsoft Janus-compatible MP3 players. If DReaM works, consumers will be able to access their purchased songs through a number of providers, using a wide variety of devices.

Sun is talking about a sea change on the scale of the switch from the barter system to paper money. Like money, this standardized DRM system would have to be acknowledged universally, and its rules would have to be easily converted to other systems (the way U.S. dollars are officially used only in America but can be easily converted into other currency). Consumers would no longer have to negotiate separate deals with each provider in order to access the same catalog (more or less). Instead, you -- the person, not your device -- would have the right to listen to songs, and those rights would follow you around, as long as you're using an approved device (more on that later).

The idea of Lawrence Lessig endorsing any DRM scheme is enough to make certain heads explode. But the "fair use" champion approved Sun's plan, because Sun worked with the Creative Commons "pretty much from the outset, to support their license definitions," according to Tom Jacobs, director of engineering at Sun Labs and the project lead of the Open Media Commons.

Lessig's statement read, "In a world where DRM has become ubiquitous, we need to ensure that the ecology for creativity is bolstered, not stifled, by technology. We applaud Sun's efforts to rally the community around the development of open-source, royalty-free DRM standards that support 'fair use' and that don't block the development of Creative Commons ideals."

Assuming Sun supports fair use by including the means for copyright works to be duplicated for educational purposes, parody and criticism, our current concept of fair use should survive this new DRM. That said, the EFF has its doubts. I have my own doubts, too. Sun's DReaM "Usage Scenarios" document says that its fair-use mechanism is purely optional for rights holders. I'd like to see Sun make this a mandatory part of the DReaM licensing rules.

Another potential objection to Sun's plan is that it sounds a lot like existing Microsoft or Apple DRM, in which secure content only plays on certified devices. But there's one major difference in that area: The certification process would be run by a standards body, rather than by individual companies.

I asked Jacobs to explain who would certify the players, and what would block the non-certified players from playing DReaM-protected content. "There will be an independent legal entity whose sole job it would be to take submissions of devices or players and do certification and testing of the device," he said. He expects that group will be in place by the summer.

Any manufacturer in the world would be able to add support for DReaM files at a negligible expense (remember, this is open source) and submit its device to the standards body for certification, similar to the way CSS worked with DVD players. Players and programs that aren't certified cannot legally use the DReaM scheme to play protected content.

Assuming companies such as Samsung and Panasonic (both of which attended Sun's recent Open Media Community Workshop) jump at the chance to include DReaM in their devices, the next piece of the puzzle would be for online music distributors to adopt the system. With internet investment on the rise and record labels growing increasingly frustrated with Apple's stranglehold on the digital music market, I don't doubt that any number of upstart services would be willing to gamble on selling DReaM-protected files, assuming the device support is there.

Jacobs expects the fiercest resistance to come from backers of existing, closed-source DRM. "If you happen to be one of those handful of winners -- there are probably two winners at the moment -- you want to make sure there's a lot of FUD out there about how hard it is for the whole world to switch over to anything other than what they've already got. But in reality, everyone else is on the outside, looking with great envy at the potential for success that's been shown by this first generation of digital distribution solutions. And so all these other suppliers on the outside are looking at how … they (can) get in."

"They're looking for a solution just like what we're describing here," Jacobs says.

Maybe this is the free market finally waking up to do something about a system that everyone (except for Apple and Microsoft) seems to agree is broken. As Jacobs pointed out to me, web developers had difficulty coding pages to display properly on different browsers before standards were put in place. If Sun's DReaM comes true, the same could happen for protected music.
http://www.wired.com/news/technology/0,70548-0.html





Lessig, Stallman on 'Open Source' DRM
Andrew Orlowski

When Sun trumpeted its 'open source DRM' last month, no one at first noticed an unusual name amongst the canned quotes. Lending his support to the rights enforcement technology was Professor Lawrence Lessig -- Free Software Foundation and Electronic Frontier Foundation board member, and Software Freedom Law Center director -- a name usually associated with the unrestricted exchange of digital media.

Debian activist and copyright campaigner Benjamin Mako Hill noticed, and thought this was odd. "The fact that the software is 'open source' is hardly good enough," he wrote, "if the purpose of the software is to take away users' freedom - in precisely the way that DRM does."

Was DRM less bad because it was 'open source'? Professor Lessig tells us that he should have reviewed the Sun Microsystems press release before it went out. It doesn't fully reflect his position, he says, and he's emphatic that this blessing doesn't constitute an endorsement.

"Rockstars and newspapers endorse things," he told us by email. "I don't."

Richard M Stallman, who founded the Free Software movement and devised the original GNU General Public License, diplomatically didn't dwell on the support Lessig gave Sun's DRM.

"Anyone can mis-speak," he said. "But I hope people can learn from this."

He warns that if DRM is open source, it might actually be worse than proprietary DRM, and he issues a rallying cry to free software campaigners: DRM is incompatible with freedom.

(How 'open source' Sun's DRM really is remains questionable, says Mako, since it allows proprietary implementations - something Lessig may not have been aware of at the time of the Sun press release.)

What follows, then, is excerpted from interviews with all three over the past week. Unprompted, both Stallman and Lessig spent considerable time with us discussing the wider context. Take away the need for DRM, they both point out, and the discussion becomes moot. The two differ on how that can be achieved, but it merits an article in itself, so we'll follow up with a Part Two in the very near future.
Stallman on freedom

As he so often does, Stallman began by drawing a sharp distinction between "open source" and the free software movement. This is more than mere semantics, as becomes apparent when he turns to DRM, because it's a distinction that reflects very different philosophical and moral approaches to writing software.

"The values of the Free Software Movement are the freedom to cooperate, and the freedom to have control over your own life. You should be free to control the software in your computer, and you should be free to share it," he sums up.

"The weakness of the 'Open Source' approach, is that it has been designed as another way to talk about the issues, one that cites only practical values. It agrees with the conventional attitude that what matters about software is what job it does, and how much money it costs. That's exactly the same attitude Microsoft wants you to take."

"Both 'open source' and proprietary developers are saying that convenience matters - but we're saying freedom and community matter more. We're not saying convenience doesn't matter, but there's more than just having a reliable and powerful program."

"I'm willing to undergo the tremendous inconvenience to create a free program that's a replacement for a proprietary program. That's why we have the GNU/Linux system, because a lot of people were prepared to make practical sacrifices so we can have that freedom."

Now here's where this underpins the DRM discussion.

Stallman says that if you accept the proposition that 'open source' is good because it results in more powerful and reliable software, this makes 'open source DRM' worse than proprietary DRM:

"If you think that the important thing is for the software to be powerful and reliable, you might think that applying the OS development model to DRM software is a way to make DRM powerful and reliable," he explains.

"But as far as I'm concerned, that makes it worse - because it's job is restricting you. And if it restricts you reliably, that means you've been thoroughly shafted.

"If you look at the issue from the perspective of the FSM, you come to a completely opposite conclusion, which is: the whole point of DRM is to deny your freedom and prevent you from having control over the software you use to access certain data. That's the direct opposite of our goal. So our goal is not served by having a free program that implements DRM. It doesn't make anything any better for our freedom. So from the point of the Free Software movement in general, a TiVoized program is not good at all, because it doesn't deliver the freedom that Free Software stands for."

Stallman offers a neat encapsulation of this approach:

"We're not very concerned with how a program was developed, we're concerned with what people are allowed to do with it now."
TiVo-ization

Stallman then explains the perils of TiVo-ization. This was the case that prompted the writing of Clause 3 of the revised GPL 3.0 [draft (http://gplv3.fsf.org/draft); rationale behind the change (http://gplv3.fsf.org/rationale#SECTI...00000000000000)], which is currently in a lengthy consultation process.

What's TiVo-ization, Richard?

TiVo uses a lot of free software, he explains. It's a stripped down GNU/Linux system, containing portions including the kernel which are under a GPL license and have the source code available.

"Released under GPL, this would be great except for one thing," says Stallman. "If you install a modified version in it, it won't run. The hardware has been set up to detect modified versions and not run them."

Stallman then reiterates the four freedoms that he says underpin Free Software. Real programmers count from zero, so freedom Zero is the freedom to run the program as you wish; One is the freedom to study and change the software; Two is the freedom to redistribute copies as you wish; Three is the freedom to distribute modified versions as you wish.

"TiVo nominally gives you Freedom One, but practically it does not; it turns it into a sham," he says.

Stallman says the specific example is important - and he implicitly rejects the idea that the market will meet the demand for these freedoms in this case.

"If the TiVo was one amongst a spectrum of products that run that software, then it might not matter. You might be OK, and no one would get TiVos anymore. But in fact, often there is no other alternative, and we know there are conspiracies amongst large companies to ensure there is no alternative. So we can't just count on competition to make this problem unimportant," he says.

"We're trying to make sure Freedom One will never be turned into a sham."

Stallman sums up the position that DRM will never be 'free'.

"The crucial thing in Free Software is a moral principal - after all, Free Software didn't begin in 1982. Licenses had existed before 1982. So Free Software is not about a 'model', it's about an ethical stand. Users must have these Four Freedoms."

So what was Professor Lessig's rationale for supporting an 'open source' DRM? He explained more on his weblog the following day, and went into even more detail with us this week.

"If all one says is (a) 'Sun's openDRM is great,' that's praising DRM," says Lessig . "But if one says (b) 'we should live in a world without DRM, and we should be building infrastructure and laws that render DRM unnecessary, but if we have DRM, then Sun's is better than Hollywood's,' then that's not 'praising DRM' but identifying a lesser evil. Again, what I did was give a speech at Sun conference where I said (b).

He expanded -

"There's no disagreement about where we should end up - No DRM."

"The only real disagreement is about the dynamic consequences - how this new kind of DRM affects the ecology for DRM generally. About this, I think honest people have to say no one knows, but we each have our own hunch. My view is openDRM pollutes the control freaks' plan so significantly that it can't achieve what they want - a general infrastructure of control built into the technology. Of course, I could be wrong about that."

Lessig stresses he hasn't endorsed the Sun technology.

"How do you say free on the Apple platform? How do you even have the argument? There is no doubt some version of DRM is with us over the next 5 years at a minimum. I want it to be possible to wage the war for free culture in that space as easily as it can be waged in this world."

"We can win the battle against it without eradicating DRM from every corner of cyberspace. Instead, I view 'the battle' about DRM much like I view 'the battle' over free software. Free software (in the Stallman sense of that term) 'wins the battle' when it is the major platform upon which software development is done. In that sense, free software has already won in certain important fields of battle, and in that sense, I certainly think free software will 'win the battle.' But when it wins, it won't trouble me that there are machines out there that are running Windows. To close the loop on the analogy, once 'the battle' against proprietary software is lost, Windows will have lost its virulence."

To Lessig, it's simply a pragmatic solution. Mako Hill can understand the pragmatism, but he isn't impressed.

"His answer is that where DRM exists – where we have already lost, it’s better to beg for scraps from the table."

"I think what Lessig is seeing is that everybody who buys an iPod buys a machine with DRM, and there's a billion songs out there that have DRM on them, and he’s saying there are all these hundreds of millions of devices that use DRM, so do we want it to be an open source, friendly DRM? It can only take certain fair use rights into account if it’s going to be effective at all."

"I think what he got was a promise the system could be used in a way that protects fair use. But media producers have the right of choosing which implementations they want. Do you think Time Warner will allow their media to play on machines that allow people to copy things?"
http://www.theregister.co.uk/2006/04..._stallman_drm/





Circumvent PDF DRM… with Gmail!
Andreas

Apparently, Gmail’s built-in “View as HTML” functionality, which allows you to view the content of PDF files (and other types of documents) as if they were classic webpages, works regardless of the files’ usage restrictions (= DRM). I don’t think this is a bad thing ;-) but I just wonder how Google can back up this design decision — or is it a mistake?

You can try this out yourself:

1. Create or download a DRMed PDF file. I used this file from the Adobe site.
2. Open the file with Adobe Reader and click on the “Lock” icon in the lower-left corner to see which restrictions apply. Our example file can be printed, but editing and text extraction are not allowed.
3. Send the file as an attachment to your Gmail account.
4. Open the mail you’ve sent yourself and choose the “View as HTML” option below the (empty) mail body.
5. An HTML version of the encrypted file is displayed — part of the layout might be gone, but the text can be extracted with a simple copy/paste command. If the original PDF file had printing restrictions, those are stripped as well.

Note 1: a quick test with a PDF file with print restrictions yielded similar results — Gmail converted the file in question without any problem.

Note 2: Google Search on the other hand does comply with the copy restrictions inside PDF files — this search query for instance gives only non-encrypted PDF files a “View as HTML” option.
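
For the technically curious: the restrictions Adobe Reader displays live in a single integer, the /P entry of the PDF's encryption dictionary, and viewers are merely trusted to honor its bits. Here is a minimal C sketch (our illustration, not Google's or Adobe's code) that decodes the standard bits defined by the PDF 1.7 spec, given a /P value pulled out of a file:

/* pdfperms.c -- decode a PDF /P permissions bitmask.
   Bit numbers follow the PDF 1.7 spec (1-based, low-order first);
   a cleared bit means the operation is denied by the document. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <P value, e.g. -1804>\n", argv[0]);
        return 1;
    }
    long p = strtol(argv[1], NULL, 10);

    printf("print          (bit 3):  %s\n", (p & (1L << 2))  ? "allowed" : "denied");
    printf("modify         (bit 4):  %s\n", (p & (1L << 3))  ? "allowed" : "denied");
    printf("copy/extract   (bit 5):  %s\n", (p & (1L << 4))  ? "allowed" : "denied");
    printf("annotate       (bit 6):  %s\n", (p & (1L << 5))  ? "allowed" : "denied");
    printf("high-res print (bit 12): %s\n", (p & (1L << 11)) ? "allowed" : "denied");
    return 0;
}

The point is that these bits are purely advisory: Adobe Reader chooses to enforce them, and Gmail's converter evidently does not.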
http://akira.arts.kuleuven.ac.be/and...ith-gmail.html





FCC Commissioner Wants To Push For DRM Just 'Cause She Likes It
Mike

The FCC's purpose is supposed to be regulating the scarce resource (yes, this is up for debate, according to some) that is wireless spectrum. However, over the years, the agency has repeatedly gone beyond that basic charter, sometimes getting slapped down by those who remember what its purpose is. That won't stop some of the commissioners from pushing the envelope on pet projects, apparently. The newest commissioner, Deborah Tate, has announced that while she knows it's outside the FCC's authority, she's a huge fan of copy protection and hopes to use her new position as a "bully pulpit" on the topic. Apparently, her love of country music has brought her to this studied position -- despite increasing evidence that copy protection tends to shrink the overall market and create fewer opportunities for musicians.
http://techdirt.com/articles/20060419/0210252.shtml





Eff Files Brief In Free Speech Case
Ryan Paul

The Electronic Frontier Foundation (EFF) has filed a friend of the court brief with the Center for Democracy and Technology (CDT) and the American Civil Liberties Union (ACLU) arguing that free speech rights on the Internet must take precedence over foreign intellectual property law. The case in question, Sarl Louis Feraud International v. Viewfinder Inc., could potentially have a significant impact on freedom of speech rights, Internet regulation, and international law.

Viewfinder, the defendant, is an American company that produces and maintains a web site called Firstview.com which features information about recent fashion industry trends and events. The plaintiff, Sarl Louis Feraud International, is a fashion design company. Viewfinder took numerous photographs at a fashion show and published those photographs on the Firstview web site. Sarl Louis Feraud argues that its intellectual property was infringed when Viewfinder published photographs of various Sarl Louis Feraud fashions. Under French law, fashion designs are considered to be protected intellectual property, but this is not the case under American law. Since the web site was viewable in France, Sarl Louis Feraud decided to bring a suit against Viewfinder. After being awarded a default judgement in French court, the plaintiff attempted to enforce that judgement in America.

The United States adheres to a principle of comity, which means that it will typically enforce the rulings of foreign courts when applicable, and when doing so does not violate ordre public, the fundamental principles of local law. By nature, the concept of ordre public is extraordinarily vague, so cases where it comes into play tend to get very convoluted very quickly. In this particular case, Sarl Louis Feraud International v. Viewfinder Inc., the defendant argues that enforcing the judgement of the French court would violate ordre public because it infringes upon First Amendment rights. Judge Lynch ruled in favor of the defendant, pointing out that freedom of speech is an essential and inseparable facet of American legal tradition:

Viewfinder's last, and sole persuasive, argument is that the French judgment is "repugnant to fundamental notions of what is decent and just" because Viewfinder's conduct is protected by the First Amendment. The freedoms of speech and of the press protected by the First Amendment are not mere vagaries of legal policy, matters of legal detail that might as easily have been resolved differently by our legislatures or courts. Freedom of speech is a matter of constitutional command, binding even on the will of the majority as expressed in legislation. The very Congress of the United States "shall make no law abridging the freedom of speech, or of the press." Even among the basic human rights protected by the United States Constitution, the First Amendment occupies a special place. As Justice Cardozo put it, the American legal tradition "reflects a pervasive recognition of th[e] truth" that freedom of speech is "the matrix, the indispensable condition of nearly every other freedom."

The plaintiff protested the ruling, arguing that publication of the photographs doesn't "possess sufficient communicative elements to bring the First Amendment into play." There are precedents in First Amendment case law (particularly Spence v. Washington) establishing that protection can be rescinded if the form of speech in question doesn't "convey a particularized message," or if the message conveyed isn't likely to be understood by the audience. The plaintiff has appealed the ruling, and plans to continue the legal battle.

In their brief, the EFF, ACLU, and CDT, insist that enforcement of the French court's ruling would have a severely detrimental impact on American freedom:

To open the door to foreign restrictions on U.S. speakers even the slightest crack would allow foreign courts to impose restrictions on speech that even Congress could not validly enact, fundamentally undermining First Amendment protections for Internet speech. This door must be kept closed, and closed tightly.

The part about the ability of Congress to enact the restrictions is very important. The text of the brief echoes Judge Lynch's allusion to the fact that the First Amendment clearly forbids Congress from making laws that infringe on free speech. This is relevant because it implies that the abridgement of free speech constituted by the French judgement is clearly and overtly "repugnant to the fundamental notions of what is decent and just." In essence, if an action is so repugnant that our own elected representatives aren't permitted to do it with our consent, then it makes absolutely no sense to allow a foreign court to do it on the basis of comity.

The brief also argues that enforcing the French ruling would create a dangerous slippery slope in which foreign governments could use their own highly restrictive legislation to impose limitations on free speech conducted over the Internet by American citizens:

Recognizing the essential character of the Internet as a global medium, American courts overwhelmingly have rejected attempts to censor it. ... when nations seek to control content on the Internet by applying their domestic laws extraterritorially to speech originating in the United States, the broader threat to freedom of expression is palpable. ... The conclusion that the French Order may be enforced in the United States despite its conflict with our constitutional freedoms would establish an international regime in which any nation would be able to enforce its legal and cultural "local community standards" on speakers in all other nations. In such a regime, ISPs and content providers would have no practical choice but to restrict their speech to the lowest common denominator in order to avoid potentially crushing liability.

The implications of such a regime would be wide-sweeping given the range of speech-restrictive laws in foreign nations that U.S. courts would be required to enforce. In addition to China and France, a host of nations impose restrictions on speech that would be deemed unconstitutional in the United States.

The brief is really great reading for anyone who's following the intersection of the Internet and international law. The brief's authors present many excellent arguments and examine in great detail the detrimental consequences of Internet censorship as well as the importance of free speech rights. They also illuminate the significance of this particular case, showing how the future of free speech on the Internet could be threatened not just by traditional censorship from countries that place tighter limits on freedom of speech, but by the steady, international expansion of intellectual property laws.
http://arstechnica.com/news.ars/post/20060417-6609.html





Supreme Court Won't Hear Falwell's Appeal
Gina Holland

Evangelist Jerry Falwell on Monday lost a Supreme Court appeal of a case that sought to shut down a Web site with a similar name but opposite views on gays.

Falwell claims that a gay New York City man improperly draws people to a site by using a common misspelling of the reverend's name as the site's domain name.

A federal judge sided with Falwell, who runs a Virginia-based ministry, on grounds that Christopher Lamparello's domain name was nearly identical to the trademark bearing Falwell's name and could confuse Web surfers.

But last year, the 4th U.S. Circuit Court of Appeals disagreed and said that Lamparello was free to operate his "gripe site" about Falwell's views on gays at http://www.fallwell.com . Lamparello "clearly created his Web site intending only to provide a forum to criticize ideas, not to steal customers," the court said.

The Jerry Falwell Ministries site is: http://www.falwell.com .

Falwell's Web site is more high-tech, with pictures of the minister, and sales material for books and videos.

Lamparello's Web site is mainly in black and white, with no photographs or items for sale. He says that Falwell is wrong in preaching that gay people are sinners who can change. At the top of the site a disclaimer reads: "This Web site is NOT affiliated with Rev. Dr. Jerry Falwell or his ministry."

Falwell's attorneys have fought over domain names in the past. Three years ago, an Illinois man surrendered the domain names jerryfalwell.com and jerryfallwell.com after Falwell threatened to sue for trademark infringement.
http://www.washingtonpost.com/wp-dyn...700465_pf.html





F.B.I. Is Seeking to Search Papers of Dead Reporter
Scott Shane

The F.B.I. is seeking to go through the files of the late newspaper columnist Jack Anderson to remove classified material he may have accumulated in four decades of muckraking Washington journalism.

Mr. Anderson's family has refused to allow a search of 188 boxes, the files of a well-known reporter who had long feuded with the Federal Bureau of Investigation and had exposed plans by the Central Intelligence Agency to kill Fidel Castro, the machinations of the Iran-contra affair and the misdemeanors of generations of congressmen.

Mr. Anderson's son Kevin said that to allow government agents to rifle through the papers would betray his father's principles and intimidate other journalists, and that family members were willing to go to jail to protect the collection.

"It's my father's legacy," said Kevin N. Anderson, a Salt Lake City lawyer and one of the columnist's nine children. "The government has always and continues to this day to abuse the secrecy stamp. My father's view was that the public is the employer of these government employees and has the right to know what they're up to."

The F.B.I. says the dispute over the papers, which await cataloging at George Washington University here, is a simple matter of law.

"It's been determined that among the papers there are a number of classified U.S. government documents," said Bill Carter, an F.B.I. spokesman. "Under the law, no private person may possess classified documents that were illegally provided to them. These documents remain the property of the government."

The standoff, which appears to have begun with an F.B.I. effort to find evidence for the criminal case against two pro-Israel lobbyists, has quickly hardened into a new test of the Bush administration's protection of government secrets and journalists' ability to report on them.

F.B.I. agents are investigating several leaks of classified information, including details of domestic eavesdropping by the National Security Agency and the secret overseas jails for terror suspects run by the C.I.A.

In addition, the two lobbyists, former employees of the American Israel Public Affairs Committee, or Aipac, face trial next month for receiving classified information, in a case criticized by civil liberties advocates as criminalizing the routine exchange of inside information.

The National Archives recently suspended a program in which intelligence agencies had pulled thousands of historical documents from public access on the ground that they should still be classified.

But the F.B.I.'s quest for secret material leaked years ago to a now-dead journalist, first reported Tuesday in the Chronicle of Higher Education, seems unprecedented, said several people with long experience in First Amendment law.

"I'm not aware of any previous government attempt to retrieve such material," said Lucy Dalglish, executive director of the Reporters Committee for Freedom of the Press. "Librarians and historians are having a fit, and I can't imagine a bigger chill to journalists."

The George Washington University librarian, Jack Siggins, said the university strongly objected to the F.B.I.'s removing anything from the Anderson archive.

"We certainly don't want anyone going through this material, let alone the F.B.I., if they're going to pull documents out," Mr. Siggins said. "We think Jack Anderson represents something important in American culture — answers to the question, How does our government work?"

Mr. Anderson was hired as a reporter in 1947 by Drew Pearson, who bequeathed to him a popular column called Washington Merry-Go-Round.

Mr. Anderson developed Parkinson's disease and did little reporting for the column in the 15 years before his death in December at 83, said Mark Feldstein, director of the journalism program at George Washington, who is writing a book about him.

His files were stored for years at Brigham Young University before being transferred to George Washington at Mr. Anderson's request last year, but the F.B.I. apparently made no effort to search them.

Kevin Anderson said F.B.I. agents first approached his mother, Olivia, early this year.

"They talked about the Aipac case and that they thought Dad had some classified documents and they wanted to take fingerprints from them" to identify possible sources, he recalled. "But they said they wanted to look at all 200 boxes and if they found anything classified they'd be duty-bound to take them."

Both Kevin Anderson and Mr. Feldstein, the journalism professor, said they did not think the columnist ever wrote about Aipac.

Mr. Anderson said he thought the Aipac case was a pretext for a broader search, a conclusion shared by others, including Thomas S. Blanton, who oversees the National Security Archive, a collection of historic documents at George Washington.

"Recovery of leaked C.I.A. and White House documents that Jack Anderson got back in the 70's has been on the F.B.I.'s wanted list for decades," Mr. Blanton said.

Mr. Carter of the F.B.I. declined to comment on any connection to the Aipac case or to say how the bureau learned that classified documents were in the Anderson files.
http://www.nytimes.com/2006/04/19/wa...rtner=homepage





New RFID Travel Cards Could Pose Privacy Threat
Anne Broache and Declan McCullagh

Future government-issued travel documents may feature embedded computer chips that can be read at a distance of up to 30 feet, a top Homeland Security official said Tuesday, creating what some fear would be a threat to privacy.

Jim Williams, director of the Department of Homeland Security's US-VISIT program, told a smart card conference here that such tracking chips could be inserted into the new generation of wallet-size identity cards used to ease travel by Americans to Canada and Mexico starting in 2008. Those chips use radio frequency identification technology, or RFID.

"If you haven't been to some of our busiest land crossings, I always refer to them as economic choke points...We ought to use technology to improve that," said Williams, whose office operates the biometric program used to verify that the fingerprint of a person using a U.S. visa to cross a U.S. border matches that of the person who was issued the visa.

Williams' remarks at an industry conference are likely to heighten privacy concerns about RFID technology, which has drawn fire from activists and prompted hearings before the U.S. Congress and the Federal Trade Commission. One California politician has even introduced anti-RFID legislation.

Many of the privacy worries center on whether RFID tags--typically minuscule chips with an antenna a few inches long that can transmit a unique ID number--can be read from afar. If the range is a few inches, the privacy concerns are reduced. But at ranges of 30 feet, the tags could theoretically be read by hidden sensors alongside the road, in the mall or in the hands of criminals hoping to identify someone on the street by his or her ID number.

Williams defended a remotely readable RFID'ed identity card to audience members who suggested selecting one that could be scanned from only a few inches away. Border police oppose that idea because "they're concerned about people dropping cards, about people sticking their hands out the window," he said. "They don't think that meets their mission needs"--that is, speeding up the border-crossing process.

Those forthcoming cards, called "PASS" (for People Access Security Service), are part of a federal requirement that, starting Jan. 1, 2008, anyone entering the United States from Mexico or Canada must carry a passport or "alternative" travel document. Homeland Security envisions that document will take the form of a "vicinity-read" wallet-size card that will capture information from a distance and automatically display the cardholder's picture and other biographic information on the border agent's computer screen.

Homeland Security has said, in a government procurement notice posted in September, that "read ranges shall extend to a minimum of 25 feet" in RFID-equipped identification cards used for border crossings. For people crossing on a bus, the proposal says, "the solution must sense up to 55 tokens."

The notice, unearthed by an anti-RFID advocacy group, also specifies: "The government requires that IDs be read under circumstances that include the device being carried in a pocket, purse, wallet, in traveler's clothes or elsewhere on the person of the traveler....The traveler should not have to do anything to prepare the device to be read, or to present the device for reading--i.e., passive and automatic use."

An internal agency spat?
But Homeland Security could run into some internal opposition in the form of the State Department, which appears to be leaning toward the "proximity" method instead of remotely readable RFID'ed identity cards.

"We think proximity read offers greater security protections," Frank Moss, deputy assistant secretary of state for passport services, said Tuesday. That method would also have a better chance of getting past the scrutiny of privacy advocates in the requisite rule-making process, added Moss, who joked that he had been labeled the "anti-Christ" by one person who commented on the State Department's e-passport proposals.

RFID chips are already going to appear in U.S. passports starting in October 2006, the Bush administration ruled last October. And the possibility of RFID-implanted drivers' licenses because of the Real ID Act has caused New Hampshire's House of Representatives to disavow the proposal entirely.

Moss ticked off a list of reasons why Americans shouldn't be concerned about the safety of RFID'ed passports any longer. He admitted the State Department was wrong to claim last year that the e-passport chips could be read within only 10 centimeters. He credited the scathing comments from privacy watchdogs for the agency's decision to adopt two safeguards: a cryptographic technique known as basic access control and "antiskimming material" on the passport's front cover, which "greatly complicates" the capture of data when the book is fully or mostly closed, Moss said.

The government agencies said they need to agree within the next month on the RFID technology they'll use, so that they can begin soliciting proposals from private firms for the chip's design. They hope to begin producing the PASS cards no later than nine months from now, Moss said.

"What we're putting in the card is possibly nothing but a 96-digit serial number that is random and would do nothing but point back to a database...someone would have to hack into our database at the same time," Homeland Security's Williams said, adding that the agency is considering delivering the cards in a "Mylar sleeve that would block the technology when people aren't using it." They're also exploring using a card that would have to be activated by the user, through a fingerprint or some other biometric method, before any information could be read remotely.
http://www.zdnetasia.com/news/softwa...9352807,00.htm





Study Fuels a Growing Debate Over Police Lineups
Kate Zernike

The police lineup is a time-honored staple of crime solving, not to mention of countless cop movies and television shows like "Law and Order." Each year, experts estimate, 77,000 people nationwide are put on trial because witnesses picked them out of one.

In recent years many states and cities have moved to overhaul lineups, as DNA evidence has exposed nearly 200 wrongful convictions, three-quarters of them resulting primarily from bad eyewitness identification.

In the new method, the police show witnesses one person at a time, instead of several at once, and the lineup is overseen by someone not connected to the case, to avoid anything that could steer the witness to the suspect the police believe is guilty.

But now, the long-awaited results of an experiment in Illinois have raised serious questions about the changes. The study, the first to do a real-life comparison of the old and new methods, found that the new lineups made witnesses less likely to choose anyone. When they did pick a suspect, they were more likely to choose an innocent person.

Witnesses in traditional lineups, by contrast, were more likely to identify a suspect and less likely to choose a face put in the lineup as filler.

Advocates of the new method said the Illinois study, conducted by the Chicago Police Department, was flawed, because officers supervised the traditional lineups and could have swayed witnesses.

But the results have empowered many critics who had worried that states and cities were caving in to advocacy groups in adopting the new lineups without solid evidence that they improved on the old ones.

"There are people who'd say it's better to let 10 guilty persons free to protect against one innocent person being wrongfully convicted," said Roy S. Malpass, a professor at the University of Texas at El Paso and an analyst for the Illinois study, who served on a research group on eyewitness identification for the National Institute of Justice in 1999.

"I'm fine with that when we're dealing with juvenile shoplifters," Dr. Malpass said. "I'm not fine with that for terrorists. We haven't figured out the risk there."

The new lineups lack some of the drama of the old. In some places, witnesses view lineups on laptop computers to make them completely "blind" to influence from someone administering the process.

Psychologists who favor these so-called sequential double-blind lineups say that showing witnesses people one at a time makes lineups more difficult for the witness, and therefore better. Witnesses have to compare the person in front of them against their memory of the crime, rather than simply against the other faces in the lineup.

"It turns a lineup into a much more objective, science-based procedure," said Gary L. Wells, a professor of psychology at Iowa State University and a prominent proponent of blind sequential lineups. "The double-blind is a staple of science; it makes as much sense to do it in a lineup as it does in an experiment or drug trial."

In classroom studies by Dr. Wells and others, the sequential method was found to reduce the number of times witnesses chose an innocent person, without reducing the number of times they chose the right one.

The movement to change lineups took off in the 1990's after a growing number of DNA exonerations. New Jersey was the first state to adopt the sequential method, in 2001. The Wisconsin Legislature recently recommended the same approach, as did commissions in North Carolina, Virginia and, last week, California. Boston and Hennepin County, which includes Minneapolis, use sequential lineups, and Washington, D.C., is studying them in one district.

Still, lineup methods remain an open debate: law enforcement officials in California and New York have resisted changes, arguing that the evidence in favor of the sequential approach is not firm.

A guide for prosecutors produced in 1999 by the National Institute of Justice study group said "there is not a consensus" and declined to recommend sequential lineups as a "preferred" method.

But before the Illinois study, released last month, no one had compared the two methods in the field.

The experiment was part of an overhaul package recommended in 2002 by the Governor's Commission on Capital Punishment, set up by former Gov. George Ryan of Illinois after DNA evidence exonerated several death row inmates. It tested the two methods for a year in three dissimilar cities; half the lineups were conducted sequentially and half were done simultaneously.

"Surprisingly," the study said, the sequential lineups proved less reliable than the simultaneous ones.

Out of 700 lineups, witnesses in those using the simultaneous method chose the correct suspect 60 percent of the time, compared with 45 percent of the time for the sequential lineups. Witnesses in the sequential lineups were more likely to pick the wrong person — someone brought in as filler — choosing incorrectly 9 percent of the time, versus just 3 percent in the simultaneous lineups.

And witnesses declined to make a pick in 47 percent of the sequential lineups, compared with 38 percent of the simultaneous ones. (Percentages were rounded.)

"If you are going to take officers outside their comfort zone, you have to be able to sell them on the reasons you are doing it," said Sheri Mecklenburg, general counsel to the superintendent of the Chicago Police Department and director of the experiment. "Based on this study, I think we'd have a difficult time having them believe this is a way to get more reliable eyewitness identifications."

Prosecutors elsewhere say the results make them less inclined to move to sequential lineups.

"This is very powerful because it's real," said Patricia Bailey, an assistant district attorney in Manhattan who has considered lineup changes for New York City. "This isn't a classroom study where people are watching a 30-second video of a crime that happened to someone else."

Paul A. Logli, president of the National District Attorneys Association, said that his group would discuss lineups at its convention this fall, but that many prosecutors were doubters.

"I think many prosecutors think doing it sequentially runs contrary to human nature," Mr. Logli said. "Human nature tells me that having the ability to compare is more helpful than destructive. Doing it sequentially is almost like this is a trick question."

Dr. Wells of Iowa State said the Illinois study had not validly compared the two lineup methods because simultaneous lineups had not been done "blind."

But Dr. Malpass of the University of Texas and Ms. Mecklenburg said the point was to study the new method against the status quo.

The new study will be the focus of a conference Friday at the Loyola University Chicago School of Law. Thomas P. Sullivan, a former United States attorney in Chicago and the co-chairman of the governor's commission that recommended the study, said that already, the results had "changed the debate."

"It has put a cloud over the sequential system," Mr. Sullivan said. "I think it will retard the system throughout the country until this gets sorted out."

But others say changes to lineups should focus on other elements that studies have shown produce more reliable picks: reducing pressure on witnesses by advising them that they do not have to pick someone; making sure that "fillers" strongly resemble the suspect; and recording what the witness says upon choosing a suspect, so juries can hear how certain they were about a pick.

"I don't understand why the rest of these reforms shouldn't be adopted immediately," said Barry C. Scheck, a co-director of the Innocence Project, a legal clinic that uses DNA evidence to try to overturn wrongful convictions. "The controversy over sequential blind has obscured the fact that all the other reforms are not in dispute."

Ms. Mecklenburg, in Chicago, said, "There are no sides in this debate."

"We all want the same thing," she said. "Whether you are a prosecutor or police or defense counsel, we all want reliable eyewitness identifications."
http://www.nytimes.com/2006/04/19/us...rtner=homepage





Microsoft To Pull Plug On Windows 98, ME In July
Gregg Keizer

Microsoft has begun reminding users of Windows 98, Windows 98 SE, and Windows Millennium that it will cut off all support for the aging operating systems in July.

"Microsoft is ending support for these products because they are outdated and these older operating systems can expose customers to security risks," the company said in a notice posted to its Web site. "We recommend that customers who are still running Windows 98 or Windows Me upgrade to a newer, more secure Microsoft operating system, such as Windows XP, as soon as possible."

After July 11 -- the scheduled release date for that month's security bulletins -- Microsoft will no longer provide security updates for the OSes. Only online self-help support will be available, and that's guaranteed only through July 11, 2007.

On Tuesday, Microsoft provided three critical security patches for Windows 98 and Millennium, part of its lifecycle pledge to fix serious bugs until all support is dropped.

Previously, Microsoft had extended the final support stage for Windows 98 and Millennium from January 2004 to June 30, 2006. In January of this year, the Redmond, Wash. developer announced that it would move the end date to July 11 to account for one final security bulletin release.

No-charge and extended hotfix support for Millennium ended December 31, 2003, and for Windows 98, on June 30, 2003.

Although Windows 98 may be found on numerous aged consumer machines, it's little-used in corporations. In a 2005 survey, for instance, Canadian asset-tracking firm AssetMetrix found that Windows 98 was running on only about 5 percent of enterprise PCs.
http://www.informationweek.com/share...leID=185300782





Hands-On Testing Of The New Linux Virus
Joe Barr and Joe Brockmeier

Thanks to one of our readers, NewsForge has obtained a copy of the widely reported Windows/Linux cross-platform "proof of concept" virus. News reports on the code have thus far contradicted one another: some reported that the virus can replicate itself on both Windows and Linux, while others said it has a viral nature only on Windows. Testing by NewsForge staff and by Hans-Werner Hilse may explain the confusion.

Our tests show the code's viral nature is sometimes -- but not always -- effective on both platforms, depending on the kernel being used. Of course, it's impossible for us to test every version of the kernel out there, but thus far, it looks like kernels prior to version 2.6.16 are susceptible, and at least some of those after that release are not. Here's how we tested at NewsForge.

Our first test was run on an AMD64 box with a fresh install/update of Ubuntu Dapper Flight 5 386 with the 2.6.15-20-386 kernel, with the WINE and GHex -- a binary viewer/editor -- packages also installed. After unzipping the viral package (clt.zip) into an empty directory, we tested CLT.EXE by executing it under WINE in a subdirectory containing only a small executable and linkable format (ELF) file, called hello, written in assembler, which we created for the test. We ran CLT.EXE, and a small window popped up saying that the "dropper" -- as the code calls itself -- had executed successfully.

When we examined the hello ELF file with GHex, however, it showed no signs of contagion -- not even the lines of text which were supposedly installed in lieu of the virus itself when run on Linux. We soon learned that the reason hello remained uninfected in the first test was that the hello executable file is too small, not because the viral code could not replicate on Linux. Another NewsForge staffer testing CLT.EXE under VMWare found that it did infect larger ELF files.

Next, we copied the programs more, date, and ls from /bin into the test directory. When we ran CLT.EXE again, all three of those ELFs were infected. Each was 4,096 bytes larger than it had been before the test. But did those 4,096 additional bytes actually contain the viral code? Would the ELF files still execute? Those questions became the basis for our next test scenario.
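
Rather than eyeballing each binary in GHex, this before-and-after comparison is easy to script. A throwaway C sketch of the kind of check we ran (ours, written for the test, and not part of the viral package):

/* elfsize.c -- print the size of each file named on the command line.
   Capture a listing before and after executing CLT.EXE; an infected
   ELF shows up exactly 4,096 bytes larger than before. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++) {
        struct stat st;
        if (stat(argv[i], &st) == 0)
            printf("%10lld  %s\n", (long long)st.st_size, argv[i]);
        else
            perror(argv[i]);
    }
    return 0;
}

Diffing the two listings (./elfsize more date ls hello, run before and after) flags the grown files instantly.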

Instead of running CLT.EXE under WINE, we repeated the tests in a different directory, using uninfected copies of the same target programs, and then executing an infected version of ls in that directory. The only difference we could detect was that the pop-up window no longer appeared: more, ls, and date were all infected and hello remained untouched.

Meanwhile, on the other side of the Atlantic

Hans-Werner Hilse has been digging deeper into the viral code than we did. Hilse described his initial testing as follows:
With 2.4.26 running in qemu (I took an ISO of Damn Small Linux; qemu simulates a Pentium II), I was also able to see the infection taking place.

I found a difference in the program flow when I compared the strace logs from the 2.6 kernels mentioned above (2.6.16.2 and 2.6.15.4). Indeed the mmapping fails on the newer kernel:

(command is "strace -viqo strace.log ./echo x", "echo" is /bin/echo but an infected version, another copy of echo -- not infected -- is waiting as "E" to be infected).

--->2.6.15.4:
[08047416] readdir(3, {d_ino=10933298, d_name=""}) = 1
[0804742d] stat("E", {st_dev=makedev(3, 7), st_ino=10933298, st_mode=S_IFREG|0755, st_nlink=1, st_uid=1000, st_gid=100, st_blksize=4096, st_blocks=40, st_size=16600, st_atime=2006/04/14-21:31:09, st_mtime=2006/04/14-21:31:09, st_ctime=2006/04/14-21:31:09}) = 0
[0804744d] open("E", O_RDWR) = 4
[08047463] ftruncate(4, 24792) = 0
[0804747e] old_mmap(NULL, 28672, PROT_READ|PROT_WRITE, MAP_SHARED, 4, 0) = 0xb7fca000
[0804749c] munmap(0xb7fca000, 24792) = 0
[080474a7] ftruncate(4, 20696) = 0
--->2.6.16.2:
[08047416] readdir(3, {d_ino=722045, d_name=""}) = 1
[0804742d] stat("E", {st_dev=makedev(3, 1), st_ino=722045, st_mode=S_IFREG|0755, st_nlink=1, st_uid=1000, st_gid=100, st_blksize=4096, st_blocks=48, st_size=20696, st_atime=2006/04/14-21:31:09, st_mtime=2006/04/14-21:31:31, st_ctime=2006/04/14-21:35:16}) = 0
[0804744d] open("E", O_RDWR) = 4
[08047463] ftruncate(4, 28888) = 0
[0804747e] old_mmap(NULL, 32768, PROT_READ|PROT_WRITE, MAP_SHARED, 1, 0) = -1 ENODEV (No such device)
[080474a7] ftruncate(4, 20696) = 0

so obviously the mmap in 0804747e fails on the newer kernel version. What makes me wonder now is that the syscall is named "old_mmap". Sounds like a deprecated interface...
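
For readers following the traces, the sequence is a standard file-patching idiom, reconstructed here in plain C (our benign reading of the strace above, not the virus's source; the file name "E" and the sizes come from the 2.6.15.4 log):

/* patchfile.c -- the grow/map/write/trim sequence from the log:
   ftruncate the target larger, map it shared so writes land in the
   disk file, drop data into the new space, then trim to final size. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("E", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (ftruncate(fd, 24792) < 0) { perror("ftruncate"); return 1; }

    /* on the 2.6.16.2 test kernel, this is the call that fails (ENODEV) */
    char *map = mmap(NULL, 28672, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(map + 16600, "appended code would go here", 27);
    munmap(map, 24792);

    ftruncate(fd, 20696);   /* final size: original 16600 + 4096 */
    close(fd);
    return 0;
}

Notice, too, that in the failing 2.6.16.2 trace the old_mmap call is issued against descriptor 1 rather than the 4 returned by open -- an early hint that a register holding the file descriptor was trashed between system calls, which the follow-up story below confirms.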

Hilse continues to test, looking to discover why the virus works on some kernels and not on others. The last we've heard from him was this:
My earlier suggestion seems to be coming true. The ebx register is actually reset to "1" after the "ftruncate" call. Compare both register dumps to see this. Before the ftruncate syscall, ebx is 7. After that call, when interrupted at breakpoint #3, it is 1.

I don't know for sure if this is normal behaviour. But many sources on the Web seem to suggest that for Linux syscalls the registers except eax should stay unmodified (i.e., be restored when returning from the kernel into userspace). This is what many assembler examples seem to suggest. So I really think the kernel might be wrong here.

We sent an email to Linus Torvalds to let him know about our testing. He replied:
That said, it sounds like it's a regular program that just happens to work on both Windows and Linux, and that happens to do things that are perfectly OK per se (i.e. writing to files that are owned by the user). So it's interesting just because of the "works on both Linux and Windows" angle, not because of any viral nature.

A post we sent to the Linux Kernel Mailing List, offering the "proof of concept" code to any kernel hackers interested in trying to figure out why it works on some versions of the kernel but not on others, has been ignored. Obviously, the people concerned with the inner workings of the kernel are not as worried about this threat as those selling anti-virus software.

Based on our examination of the code, Linux users need to be aware of two things. First, given the right permissions, a virus can replicate itself on Linux. This, however, has never been in doubt -- it remains to be seen whether malware authors can create a virus that spreads as easily on Linux as viruses do on Windows. Second, this "proof of concept" is an excellent example of why running as root is a very bad idea.
http://os.newsforge.com/article.pl?sid=06/04/17/1752213


Torvalds Creates Patch For Cross-Platform Virus
Joe Barr

Linus Torvalds has had an opportunity to examine the testing and analysis by Hans-Werner Hilse, which we reported on yesterday, and has confirmed that it is correct. The virus fails to propagate on the latest kernel versions because of a bug in how GCC handles specific registers in a particular system call. Torvalds has coded a kernel patch that allows the virus to work even on the latest Linux kernel.

That may sound terribly complex, so let's break it down. A system call is made when an application, in this case, the virus, wants the kernel to perform a task for it: perhaps to read some data, or write it to a file, or so on.

As part of the housekeeping done by an application before such a call, specific registers -- a register is a small storage location that the CPU can access faster than main memory -- are loaded with the additional information required to perform whatever task the call is asking for.

If you wanted to move a string of data like "CAPZLOQ TEKNIQ 1.0" from one place in memory to another, you would need to load the address where the string begins in one specific register, the address where you want it moved to in another register, and the number of bytes to move in yet another.

By convention, applications assume that certain registers will not be changed during the call. The reason the virus did not work in the latest kernel is that one register, the ebx register, which the virus expects to remain unchanged, is being overwritten.
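
To make the convention concrete, here is a minimal C-with-inline-assembly sketch (our illustration, not the virus's actual code) of a raw int 0x80 ftruncate call on i386: eax carries the syscall number, ebx and ecx carry the arguments, and hand-rolled code that reuses ebx afterwards is exactly what breaks if the kernel hands it back modified:

/* rawcall.c -- build with: gcc -m32 rawcall.c
   Calls ftruncate via int 0x80 directly, bypassing glibc, then reports
   what is left in ebx. glibc saves and restores ebx around system
   calls, which is why ordinary programs never noticed this bug. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define NR_FTRUNCATE 93   /* __NR_ftruncate on i386 */

int main(void)
{
    int fd = open("testfile", O_RDWR | O_CREAT, 0644);
    long ret, ebx_after;

    __asm__ volatile(
        "movl %[fd],  %%ebx\n\t"   /* arg 1: file descriptor      */
        "movl %[len], %%ecx\n\t"   /* arg 2: new length            */
        "movl %[nr],  %%eax\n\t"   /* syscall number               */
        "int  $0x80\n\t"           /* enter the kernel             */
        "movl %%ebx, %[out]"       /* did ebx survive the call?    */
        : "=a"(ret), [out] "=r"(ebx_after)
        : [fd] "g"((long)fd), [len] "g"(4096L), [nr] "i"(NR_FTRUNCATE)
        : "ebx", "ecx", "memory");

    printf("ftruncate returned %ld; ebx after: %ld (was loaded with %d)\n",
           ret, ebx_after, fd);
    close(fd);
    return 0;
}

On an affected 2.6.16 kernel, the final line reports ebx as 1 rather than the file descriptor -- the same corruption Hilse caught at his breakpoint.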

The bug, which seems to me to be more a bug in GCC than in the kernel, doesn't appear in most code. It takes the rare combination of hand-crafted assembler code and the use of old, now-deprecated system calls to trigger it. This lends support to the speculation that this virus is not new code at all, in spite of how Kaspersky Lab is trying to use it to drum up new business.

I wrote Torvalds with Hilse's suspicion that the problem is caused by the ftruncate system call and manifests itself in the failing old_mmap call. According to Torvalds:

This is exactly right. "sys_ftruncate()" seems to corrupt %ebx due to a compiler issue. We've had that issue before: the kernel uses some special calling conventions where the system call stack is both the saved register area _and_ the argument area to the system calls.

That speeds up system call entry, since we avoid any unnecessary argument setup costs, but sadly gcc then thinks the callee function owns the argument stack, and can overwrite it. We've had hacks to avoid it in the past, but the ftruncate case has gone unnoticed (see later on why it doesn't matter for any normal apps).

So, for sys_ftruncate(), gcc compiles it to


sys_ftruncate:
movl 4(%esp), %eax # fd, fd
xorl %ecx, %ecx # length
movl 8(%esp), %edx # length, length
movl $1, 4(%esp) #,
jmp do_sys_ftruncate #

where that "movl $1, 4(%esp)" overwrites the original argument stack (the first argument, which is the save-area for %ebx).

Sad, sad. This particular case only happens with "-mregparm=3", which has been around for a long time, but only became default in 2.6.16. Which is probably why Hans-Werner didn't see the problem with older binaries. He just compiled with a different configuration.

Now, the reason normal programs don't care is that glibc saves and restores the %ebx register in the system call path. So if you use the regular C library, you'd never care. The virus has probably been written by hand in assembly, and because it didn't save/restore %ebx, it was hit by the fact that the system call modified it.

(To make it even harder to hit - it probably also only happens with the old "int 0x80" system call mechanism, not with the modern "syscall" entrypoint. Again, you'd probably only see this on old hardware _or_ if you wrote your system call entry routines by hand).

So the virus did a number of strange things to make this show up, but on the other hand the kernel does try to avoid touching user registers, even if we've never really _guaranteed_ that. So the 2.6.16 effect is a mis-feature, even if a _normal_ app would never care. It just happened to bite the infection logic of your virus thing.

Hilse has tested the patch provided by Torvalds as a workaround, and reports:

Indeed, this worked. With a recompiled kernel, everything is running as expected. And yes, it is using the int 0x80 interface from assembly code. As it's viral code, it is trying to avoid any overhead and reuses registers as much as it can (from what I can tell).

Leave it to open source hackers to debug and fix aging viral code so that it works correctly. And shame on the anti-viral industry, Kaspersky Lab in particular, for its attempts to deceive the public by passing off old code as something new.
http://software.newsforge.com/articl.../04/18/1941251





Howto Install The Base GNU/Linux System Onto a USB Thumbdrive With The Root Partition Encrypted





Invention: The TV-Advert Enforcer
Barry Fox

For over 30 years, Barry Fox has trawled through the world's weird and wonderful patent applications, uncovering the most exciting, bizarre or even terrifying new ideas. His column, Invention, is exclusively online. Scroll down for a roundup of previous Invention articles.

The advert enforcer

If a new idea from Philips catches on, the company may not be very popular with TV viewers. The company's labs in Eindhoven, the Netherlands, have been cooking up a way to stop people changing channels to avoid adverts, or fast-forwarding through ads they have recorded along with their target programme.

The secret, according to a new patent filing, is to take advantage of Multimedia Home Platform - the technology behind interactive television in many countries around the world. MHP software now comes built into most modern digital TV receivers and recorders. It looks for digital flags buried in a broadcast, and displays messages on screen that let the viewer call up extra features, such as additional footage or information about a programme.

Philips suggests adding flags to commercial breaks to stop a viewer from changing channels until the adverts are over. The flags could also be recognised by digital video recorders, which would then disable the fast forward control while the ads are playing.

Philips' patent acknowledges that this may be "greatly resented by viewers" who could initially think their equipment has gone wrong. So it suggests the new system could throw up a warning on screen when it is enforcing advert viewing. The patent also suggests that the system could offer viewers the chance to pay a fee interactively to go back to skipping adverts.
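
The patent describes behaviour rather than code, but the mechanism it implies is simple. A toy C sketch of the idea (entirely our illustration; real MHP receivers are programmed against Java APIs, and the flag and function names below are hypothetical):

/* adlock.c -- toy model of the Philips proposal: while the broadcast
   carries an advert flag, channel-change and fast-forward input is
   ignored, unless the viewer has paid for the right to skip. */
#include <stdbool.h>
#include <stdio.h>

struct stream_state {
    bool ad_flag;           /* hypothetical flag buried in the broadcast */
    bool viewer_has_paid;   /* the patent's interactive pay-to-skip fee  */
};

static void handle_key(char key, const struct stream_state *s)
{
    bool locked = s->ad_flag && !s->viewer_has_paid;
    if (locked && (key == 'c' || key == 'f')) {   /* channel / fast-fwd */
        printf("key '%c' ignored: adverts in progress\n", key);
        return;
    }
    printf("key '%c' accepted\n", key);
}

int main(void)
{
    struct stream_state in_ad   = { .ad_flag = true,  .viewer_has_paid = false };
    struct stream_state in_show = { .ad_flag = false, .viewer_has_paid = false };

    handle_key('f', &in_ad);    /* blocked, as the patent intends     */
    handle_key('f', &in_show);  /* accepted once the programme is on  */
    return 0;
}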

Micro Electrical Generator

There is little point in building tiny micro-electro-mechanical devices (MEMS) if they need big batteries to work. So Washington State University, US, has been working on a radical solution - a microscopic generator that burns hydrocarbon fuel to generate electricity.

Within the device, droplets of fuel are deposited onto a flat metal plate (about 1 millimetre to a side) and then ignited. As the plate heats up, drops of liquid mercury travel along a connected tube to a strip of piezoelectric material. Heat from the mercury causes the piezoelectric strip to flex, generating a small pulse of electric power.

Some of this power is used to create an electrostatic charge which moves the mercury droplets back towards the hot plate to pick up another dose of heat. This lets the system generate a continuous series of electric pulses.

Each micro-generator can only produce about 1 milliwatt of power but an array of several thousand could produce several watts - enough to let MEMS do plenty of useful work.
http://www.newscientisttech.com/arti...-enforcer.html





Seagate Ships First Perpendicular 3.5" Hard Drive
Wolfgang Gruener

Scotts Valley (CA) - Seagate will announce on Tuesday the industry's first 3.5" hard drive that is based on perpendicular recording technology. Targeting enterprise applications, the new Cheetah 15K.5 doubles the capacity of its predecessor and promises 30% more performance.

With the 2.5" perpendicular drive out the door, Seagate has begun transferring the new recording technology, which promises higher capacities and improved performance, into the next form factor. The new Cheetah 15K.5 is positioned as the firm's new flagship hard drive and provides enterprise users significantly more storage space and bandwidth, the manufacturer said.

Compared to the preceding non-perpendicular 15K.4 generation, which has been offered in 36 GB, 73 GB and 146 GB versions, the new 15K.5 is available with 300 GB (four platters), 147 GB (two platters) and 73 GB (one platter). The sustained data transfer rate is up about 30% from 58-96 MB/s in the 15K.4 to about 73-125 MB/s in the 15K.5. Seagate claims that the new Cheetah is the first hard drive to break the 100 MB/s data transfer barrier.

No changes are announced to the drive's reliability, which is still rated at 1.4 million hours Mean Time Between Failures (MTBF). Also, the platters of the enterprise drive continue to spin at 15,000 rpm; average seek time remains at 3.5-4.0 ms.

Seagate said that the 15K.5 is currently shipping to OEMs with 3 Gb/s serial attached SCSI (SAS), Ultra320 SCSI, and 4 Gb/s fibre channel interfaces. Channel shipments are scheduled for June of this year. Prices of the Cheetahs have not been announced.
http://www.tgdaily.com/2006/04/17/se...perpendicular/





Review: external storage with ESATA

Thecus Brings SATA to External Storage
Patrick Schmid

Most users do one of two things when their PC runs out of hard disk space: they either add an additional drive or rely on external storage with a USB 2.0 or Firewire connection. However, both options have their disadvantages, since installing a new drive can be a cumbersome process, while external hard drives do not offer the same level of performance as that of directly attached Serial ATA (SATA) or UltraATA. However, external SATA (eSATA) is a worthy alternative.

eSATA falls under the DAS (Direct Attached Storage) category, which comprises storage products that are hooked up externally at high speed using fast interfaces such as SATA, SAS or UltraSCSI.

Taiwanese storage firm Thecus' eSATA device accommodates two SATA drives, with a maximum bandwidth of 300 MB/s. Still, eSATA is not going to replace USB 2.0 or Firewire any time soon: eSATA is a comparatively nascent technology and most motherboards do not yet offer it, while USB or Firewire can be found on almost any computer today. For this reason, Thecus decided to equip the N2050 with both eSATA and USB ports.

[...]

Conclusion

The data transfer rate of 30 MB/s that USB 2.0 offers does indeed pale in comparison to 100 MB/s for eSATA, while the WD1500 drives are capable of delivering even better performance in RAID 0. It is also good to see that Thecus did not throw the USB 2.0 interface away, because it is a nice backup interface when you want to use the device with other computers via USB 2.0.

However, its setup hinders the N2050's performance. The RAID controller seems to cap the data-transfer rate: we proved faster rates were possible by running a RAID 0 setup with exactly the same drives, but on a controller from Promise. With our reference test system, we achieved up to 170 MB/s transfer rates. But this shortcoming only matters in high-performance scenarios such as video editing.

We also hope Thecus will offer JBOD support (not everybody really needs RAID) as well as a way of securing the RAID mode selector switch so that an accidental reset will not cause you to lose your data.
http://www.tomshardware.com/2006/04/...ernal_storage/





We like to watch

Most Web Users Are 'Silent Surfers'
Quentin Reade

The majority of web users don't participate online, preferring just to passively read information presented to them, according to new research.

Although 92 per cent of European websites surveyed prompt visitors to participate, the majority (53 per cent) of European internet users are passive, silent surfers.

Just 23 per cent of web users in the UK, France, Germany, Sweden, Spain and Italy respond to prompted participation, such as polls and competitions, and a further 24 per cent are unprompted contributors that maintain blogs, websites or post in forums.

The study by JupiterResearch also warns that the growth of consumer-generated content comes from a sizeable and vocal minority - mainly young and male - which has a disproportionately wide influence.

Julian Smith, analyst at JupiterResearch, said he expected the influence of this vocal minority to grow:

“With the adoption of content creation tools democratising the publishing of information, consumers will increasingly be exposed to informal, peer produced content alongside formal, professionally created content.”
http://www.webuser.co.uk/news/83383.html?aff





Pays to be popular

Social Bookmarking: Vote Early, Vote Often
Rafe Needleman

"Do you find this page useful?" That's a question that used to appear on the bottom of Web pages. You could click yes or no, or maybe even send a note to the publisher, and it was good. For the publisher. Ultimately, your feedback might have influenced the site owner to improve the site, but it was a slow process, and the feedback loop was not transparent, so you, the user, couldn't see what impact you were making.

The new thing in feedback, at least for tech sites, is to flag a story on Digg.com. If you don't know it, Digg is a fantastic site that collects pointers to Web links (stories, blog entries, and so on) from its users. Links that are popular bubble up to the top of the list. Users of Digg then see these links, and if they also like them, they click a little Digg It button to add another vote. It appears under a square box--called a chiclet by some (like me)--that lists the number of votes the link got. Del.icio.us is similar in some ways. (For a cool mashup of Digg and Del.icio.us, see DiggLicious.com, which displays new Digg and Del.icio.us links in real time, as they are posted.)

There are dozens of other social bookmark sites. Some focus on different topics (such as video) or have interesting user interface fillips. But they are all about the same thing: enabling members of a community to share the content they like the best.

The Digg effect
Digg now rivals Slashdot as the most important nonsearch driver of links to technology sites, which is leading many tech publishers to consider two things: first, how they can get users to Digg their stories and, second, how they can add Digg-like functionality to their own sites.

The easy way for publishers to hasten their entry into the Digg ecosystem is to add a Digg It button or link to pages on their sites. Surprisingly, considering the importance of Digg, few publishers have done this. Our sister site, ZDNet, has some buttons on its blog entries. It's more common to see Add To Del.icio.us links on stories, perhaps because the coding required for that is slightly simpler than the unsupported code it takes to add a Digg It link.

Digg-like voting systems are taking off on other sites. The community search tool Wink, for example, allows users to vote on results, using Digg-like chiclets. The RSS readers Rojo and Newsvine also have voting. Like Digg, these services collect stories from around the Internet, and the voting system helps them rank relevance so that they can present users with better results.

Coming to a site near you
I will not be surprised when content sites begin adding local chiclet voting to their stories. By local, I mean sites may simply use a Digg-like interface to allow users to flag stories they like. The sites can then present these stories on their front page as "most popular." Even if the stories are not submitted to any social or community link sites, such as Digg, it's a good thing for improving relevance on a site. More clicks on chiclets mean a more popular story, and the popular stories get promoted highly, which makes the site seem better.

At some point, chiclet voting may work for both local sites and roll-up sites such as Digg. For example, we may soon be able to vote on a story and have it flagged on the story's own site, as well as on Digg, Del.icio.us, Reddit, or any other combination of social bookmark sites.

Ultimately, all these voting and bookmarking tools help publishers, by providing information on which stories are popular and which are not. When publishers, bloggers, or site managers see massive traffic coming to a story from Digg, they know they've hit on the right formula, and chances are they'll be thinking about ways to do more stories like that one. It's a virtuous circle. So when you read a story you like and you see an option to vote on it by submitting your vote to a bookmarking service, go ahead and do it. You'll be doing good work for the publisher, and in return, you'll be improving the sites you like the most.

And yes, if you like this story, you can Digg it and add it to Del.icio.us.
http://reviews.cnet.com/4520-3000_7-...xt&tag=nl.e501





Ad-Aware PR
roy_batty

[Abstract]
Ad-Aware is a poorly written anti-spyware program from Lavasoft. Running it gives you a false sense of safety, and numerous attacks can be mounted against it. I'll show some of the problems and attacks in this write-up. Here's a summary of the most visible problems I've run into.

1. Definition file
1.1. "Encrypted" with xor
1.2. Packed with ZIP with a simple password
1.3. No checksum in the def file
     (1.1-1.3 together make it trivial to intercept def updates and change
     the defs to make the malware invisible)
1.4. Big redundancy in the def file
1.5. !!! Multiplying the number of entries in the def file by the constant
     1.46 to make it look like it has more definitions !!!

2. Program
2.1. Poorly written checksum algo
2.2. Poorly written scanning algo (slow as hell)
2.3. CSI works only for in-memory images and is useless

You want the proofs? Read the following text ...



---------------------------------------------------------------------------

1. [Intro]
"Lavasoft is the industry leader and most respected provider of anti-spyware solutions. Lavasoft develops and delivers the highest quality antispyware solutions to keep your computer or network free of compromising and intrusive threats to your privacy."
--

This write-up reviews the industry leading antispyware solution from the most respected provider of anti-spyware solutions - Ad-Aware from Lavasoft. I will show that this software is just a piece of crap, nothing more, nothing less. PR sells, right?

2. [Ad-Aware SE]
"Ad-Aware SE is the latest version of our award winning and industry leading line of antispyware solutions and represents the next generation in Spyware detection and removal. It is quite simply the most advanced solution available to protect your privacy. With the all new Code Sequence Identification (CSI) technology that we have developed, you will not only be protected from known content, but will also have advanced protection against many of their unknown variants. "
--

2.1. "Encrypted" with xor
The reference file defs.ref is just a plain ZIP file that is then "encrypted" using the following algo.

void decode_mem(char *b, unsigned int b_s)
{
    /* 31-byte xor key, stored inside ad-aware.exe */
    static char decode_string[] = "\x00\x50\x50\x50\x50\x50\x50\x50\x68\x69\x73\x20\x70\x67\x67\x67\x67\x67\x67\x20\x6d\x75\x73\x74\xe0\xe0\xe0\xe0\xe0\xe0\x6e";

    unsigned int y = 0;
    for(unsigned int x = 0; x < b_s; x++)
    {
        b[x] ^= decode_string[y];               /* xor each byte with the key ... */
        if(++y == (sizeof(decode_string) - 1))
            y = 0;                              /* ... cycling through the key    */
    }
}


Pointer b points to memory holding the content of defs.ref, and b_s is just the size of the buffer.

2.2. Packed with ZIP with simple password
After "decrypting" there is a ZIP file with the file 29388543757543549 inside. The file name is visible in ad-aware.exe. The ZIP file is password protected and the password is "This program ^u@_LSstreams145681902". The first part of the password [This program ^u@_LSstreams] is in plaintext inside ad-aware.exe, the second part is created at runtime.

2.3. No checksum in the def file
After "decrypting" and decompressing there is a definition file with
following structure.

[header]
[family names]
[www names]
[family descriptions]
[obj_stream]

offset  size   description
32h     WORD?  internal build
80h     ????   version of ref file, ends with 0ffh
100h    ????   family names, separated by 0ffh, ends with 0ffffh
????    ????   www names, separated by 0ffh, ends with 0ffffh;
               the content of the ini file (comments for family names)
               gets stored to description.ini

????    ????   stream of objects; starts with the word OBJ_STREAM(n)[x],
               where n is prolly the number of the stream (1 for now)
               - IMHO a preparation for incremental updates -
               and x is the number of objects in the stream;
               at the end of the stream there is 0ffffh again

[Example of a reference file; this info is given by Ad-Aware directly]
Definitions File Loaded:
Reference Number : SE1R47 24.05.2005   offset 80h
Internal build : 55                    offset 32h
File location : G:\nada\nada\defs.ref  ...
File size : 435074 Bytes               file size before decompression
Total size : 1439523 Bytes             file size after decompression
Signature data size : 1408291 Bytes    sizeof(family descriptions) + sizeof(obj_stream)
Reference data size : 30720 Bytes      family names size
Signatures total : 40174               [x] * 1.46 + www names
CSI Fingerprints total : 886           entries in OBJ_STREAM with type 0f0h
CSI data size : 30371 Bytes            sizeof entries with type 0f0h
Target categories : 15                 known before
Target families : 679                  count of family names



There is no checksum of the content of the definition file, nor is the file signed. It is trivial to modify the content of the file; for example, modifying the checksums of malware binaries - say, by malware that wants to hide itself from Ad-Aware - is really _very_ easy.
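To illustrate just how trivial: the xor "encryption" above is its own inverse, so the very same decode_mem() routine both strips and restores the outer layer. A minimal sketch (my own illustration; the helper name and the comment-elided ZIP step are hypothetical):

void tamper_with_defs(char *buf, unsigned int size)
{
    decode_mem(buf, size);   /* xor off the outer layer -> plain ZIP file   */

    /* ... unzip with the password from 2.2, delete or alter the checksum
       entries for the binaries you want hidden, re-zip ...                 */

    decode_mem(buf, size);   /* xor is symmetric: the file looks untouched  */
}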

2.4. Big redundancy in the def file
The definitions consist of registry keys, www sites, file names and - the most visible part - checksums of malware binaries.
[Snippet from defs.ref]
...
3830397280 10842529196097280 97280
3657194622 106199918742094622 94622
3830994208 1059056701129094208 94208
3697194208 105862934264094208 94208
3697194210 1058568963132094210 94210
...


Every checksum entry consists of a header, a reference to the family name and three ASCII numbers. Two of the numbers are checksums concatenated with the file size, and the third one is the file size.

...
38303[97280]
108425291960[97280]
[97280]
...


2.5. Poorly written checksum algo
Computation of the first level checksum:


unsigned int compute_first_level_fingerprint(unsigned char *b)
{
    unsigned int checksum = 0;

    /* sample one byte out of every 0x20 in the first 0x600 bytes */
    for(unsigned int x = 0; x < 0x600; x += 0x20)
    {
        checksum += b[x];
        checksum += x;
    }

    return checksum;
}


Computation of the second level checksum:


unsigned int compute_second_level_fingerprint(unsigned char *b, int l)
{
    unsigned int checksum = 0;
    unsigned int x = 0;

    /* sample every other byte of the first 0x2000 bytes */
    for(; x < 0x2000; x += 0x2)
    {
        checksum += b[x];
        checksum += x;

        if(x >= (l - 2))
            break;
    }

    /* sample every other byte of a 0x7ffc-byte window
       starting at the middle of the file */
    for(x = (l >> 1); x < (l >> 1) + 0x7ffc; x += 0x2)
    {
        checksum += b[x];
        checksum += x;

        if(x >= (l - 2))
            break;
    }

    return checksum;
}


Pointer b points to the buffer holding the content of the file; l is the buffer/file size.


...
sprintf(size, "%d", x);
sprintf(first_level, "%d%d", compute_first_level_fingerprint(b), x);
sprintf(second_level, "%d%d%d%d", compute_second_level_fingerprint(b, x), (unsigned char) b[x >> 1], (unsigned char) b[x - 4], x);
...

first_level now holds the first level checksum
second_level now holds the second level checksum
size now holds the file size



Now we can just do a string compare against the checksum entries in the data file. If a match is found, the fourth word is an index into the family names string list. There are also entries that have a description incorporated, but the entry structure is very easy to guess - feel free to explore it on your own.
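A minimal sketch of that lookup (my own illustration - the def_entry struct is a simplification of the entry layout described above, not Lavasoft's actual structure):

#include <string.h>

/* simplified view of one parsed checksum entry from the def file */
struct def_entry {
    const char *first_level_str;   /* first checksum + file size, ASCII   */
    const char *second_level_str;  /* second checksum + extras + size     */
    const char *size_str;          /* file size, ASCII                    */
    unsigned short family_index;   /* index into the family names list    */
};

/* returns non-zero if the three strings computed above match this entry */
int match_entry(const struct def_entry *e, const char *first_level,
                const char *second_level, const char *size)
{
    return !strcmp(first_level,  e->first_level_str) &&
           !strcmp(second_level, e->second_level_str) &&
           !strcmp(size,         e->size_str);
}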

As you can see, the checksum is a really very basic one and can easily be spoofed. Collisions are easy to find. Another oddity is the ASCII format of the checksums and the concatenation with the file size.
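To see how little of a file the fingerprints actually cover, here is a hedged sketch (my own code, mirroring the sampling loops above plus the two extra bytes mixed in by the sprintf calls) that hunts for an offset neither fingerprint ever reads - patching such a byte changes the binary without changing any of the three strings:

#include <stdio.h>

/* does the byte at offset off influence either fingerprint
   of a file of size l? */
static int is_sampled(unsigned int off, unsigned int l)
{
    if (off < 0x600 && (off % 0x20) == 0)           /* first level samples    */
        return 1;
    if (off < 0x2000 && (off % 2) == 0)             /* second level, 1st loop */
        return 1;
    if (off >= (l >> 1) && off < (l >> 1) + 0x7ffc  /* second level, 2nd loop */
        && ((off - (l >> 1)) % 2) == 0)
        return 1;
    if (off == (l >> 1) || off == l - 4)            /* b[x >> 1] and b[x - 4] */
        return 1;
    return 0;
}

int main(void)
{
    unsigned int l = 97280;   /* file size taken from the defs.ref snippet above */
    unsigned int off;

    for (off = 0; off < l; off++)
        if (!is_sampled(off, l))
        {
            printf("offset %u is never sampled - patch it at will\n", off);
            break;
        }
    return 0;
}

For the 97280-byte example it reports an "invisible" offset almost immediately (the very first odd offset below 0x2000 already qualifies): the first level reads only 48 bytes and the second level a few tens of kilobytes, so most of the file is never looked at.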

Lavasoft claims "Now Ad-Aware and Ad-Watch Use much smaller reference files" and I just have to say: you really want me to believe that?

2.6. !!! Multiplying the number of entries in the def file by the constant 1.46 to make it look like it has more definitions !!!
Do the math on the example above: a reported "Signatures total" of 40174 works out to at most about 27,500 real object entries (40174 / 1.46 ~ 27516), even before subtracting the www names.

2.7. Poorly written scanning algo (slow as hell)
"Scanning speed increased" is written in PR blablas that come with Ad-Aware SE. I must laugh when I hear this. I must laugh _very_ loudly.

The pseudo-C code of the Ad-Aware file scan algo follows.


for entry from entries
{
    alloc_mem(file_size);

    read_file_to_memory();    // no memory mapped files, just ReadFile();
    count_checksums();        // the whole file is re-read and re-checksummed
                              // for every single entry
    if(does_match_entry(entry, checksums))
        break;

    free_mem();
}


The real "Scanning speed increased" algo follows.


map_file_to_memory();
count_checksums();            // checksums computed once, up front

for entry from entries
{
    if(does_match_entry(entry, checksums))
        break;
}
unmap_file_from_memory();


So if you run the Ad-Aware file scan and you hear the disk making noisy sounds, it's not that Ad-Aware is doing a good job finding the malware on your drive. It's just that it uses a very poorly written algo that makes a lot of unnecessary disk reads, thus wasting your computer's resources.

2.8. CSI works only for in-memory images and is useless

"Uses our all new CSI (Code Sequence Identification) technology to identify new and unknown variants of known targets"
Oh. What a technology! I wondered how they're doing this - I was thinking about some emulation engine, code shrinker, advanced pattern matching ... I also thought (everyone must think that) that CSI is used on a file-scanning basis. It's not. CSI scanning is used only when scanning memory and thus ... is useless. Another PR blabla.

3. [Outro]
"Lavasoft's Ad-Aware SE, the world's leading brand in antispyware solutions, has been acknowledged and awarded in variety of distunguished magazines and publications all over the world." --

Acknowledged!

btw it's not just a coincidence that another PR crap firm, F-Secure, uses the Ad-Aware engine in its products for fighting spyware. Nice symbiosis.
This text was written in the city of Sofia.

(C) 1999-06 Roy Batty, who is a stranger in the world he was made to live in. roy.batty@phreaker.net

Eddie lives...somewhere in time

---------------------------------------------------------------------------


[decode.cpp]
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

void decode_mem(char *b, unsigned int b_s)
{
    /* 31-byte xor key, stored inside ad-aware.exe */
    static char decode_string[] = "\x00\x50\x50\x50\x50\x50\x50\x50\x68\x69\x73\x20\x70\x67\x67\x67\x67\x67\x67\x20\x6d\x75\x73\x74\xe0\xe0\xe0\xe0\xe0\xe0\x6e";

    unsigned int y = 0;
    for(unsigned int x = 0; x < b_s; x++)
    {
        b[x] ^= decode_string[y];
        if(++y == (sizeof(decode_string) - 1))
            y = 0;
    }
}

int main(int argc, char *argv[])
{
    if(argc < 2)
    {
        printf("Syntax: decrypt_def_file.exe <def_file>\n");
        return 1;
    }

    HANDLE h = CreateFile(argv[1], GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, 0);
    if(h == INVALID_HANDLE_VALUE)
        return 1;

    /* get the file size */
    DWORD s = SetFilePointer(h, 0, 0, FILE_END);
    SetFilePointer(h, 0, 0, FILE_BEGIN);

    /* map the file so we can decode it in place */
    HANDLE m = CreateFileMapping(h, NULL, PAGE_READWRITE, 0, 0, NULL);
    CloseHandle(h);

    if(!m)              /* CreateFileMapping returns NULL on failure */
        return 1;

    char *b = (char *) MapViewOfFile(m, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    CloseHandle(m);
    if(!b)
        return 1;

    decode_mem(b, s);   /* xor off the outer layer; what remains is
                           the password-protected ZIP */

    UnmapViewOfFile(b);

    return 0;
}
[decode.cpp]

http://rootkit.com/newsread.php?newsid=471





Yahoo Profit Falls 22%; Ad Sales Up
Laurie J. Flynn

Yahoo reported a 22 percent decline in first-quarter profit on Tuesday but met Wall Street's earnings forecast, helped by solid growth in advertising sales.

As a result, Yahoo's shares rose $1.92, or 6.1 percent, in after-hours trading, after closing up 33 cents, at $31.30, in the regular session.

Investors were relieved to see Yahoo avoid a repeat of last quarter, when it issued a cautious forecast that set off a 13 percent decline in its share price.

Yahoo, which runs the largest Internet portal, said profit fell to 11 cents a share from 14 cents a share a year earlier. Revenue in the period, which ended March 31, rose 34 percent, to $1.57 billion from $1.17 billion.

Yahoo said revenue in its branded and search advertising businesses rose 35 percent, to $1.38 billion, as more businesses paid to have their ads displayed next to search results. Sales in Yahoo's fee-based services business, including its deal with Internet service providers like AT&T and Verizon Communications to sell digital subscriber line services, increased 25 percent, to $186 million.

"Our overall advertising business saw solid growth and our user numbers continued to climb," Terry S. Semel, the chief executive, said.

Yahoo continues to face vigorous competition in advertising from Google, which leads the category, and Microsoft, which holds a distant third place. But Yahoo executives dismissed reports that it was rapidly losing to Google in share of overall searches. In a departure from the company's usual practice of not commenting on such issues, Susan L. Decker, the chief financial officer, told analysts that Yahoo had a 15 to 20 percent gain in search queries during the quarter.

Earlier Tuesday, comScore, a market research company, released the results of a survey that showed Yahoo had lost considerable market share to Google in the last year. In March, Google's share of the online searches in the United States rose 19 percent, to 43 percent of all searches from 36 percent in March 2005, while Yahoo's share declined 8 percent, to 28 percent of all searches, during the same period. Microsoft's share of searches declined to 13 percent from 17 percent in the period, comScore said.

Yahoo executives said Tuesday that new technology the company was developing to increase its advertising search revenue would soon be ready for a phased roll-out.

"It's a great opportunity for us," Mr. Semel said. The new technology, he said, would improve Yahoo's ability to attach paid ads to searches, an area in which Google is considered to have superior technology today. "It will allow us to do a better job every time we do a search," Mr. Semel said.

The company first described the new system last year. Executives said they would describe it more fully next month at a meeting with analysts. Safa Rashtchy, an analyst with Piper Jaffray, said the system was crucial to Yahoo's growth, and the company would have benefited by introducing it sooner.

Net revenue, or sales excluding the fees Yahoo shares with its advertising partners, rose 33 percent, to $1.09 billion from $821 million.

"This was a very good quarter," Mr. Rashtchy said. "Expectations were realistic, in fact low, but the results were solid and consistent."

Yahoo said its net income fell to $159.9 million from $204.6 million a year earlier, partly because of higher stock option expenses.

Excluding the stock expenses, Yahoo's earnings were $231 million, or 15 cents a share, up from $195 million, or 13 cents a share, a year earlier.

Looking ahead, Yahoo's outlook for the second quarter was also in line with expectations. The company forecast second-quarter revenue of $1.08 billion to $1.16 billion, and full-year net revenue of $4.6 billion to $4.85 billion.

Analysts surveyed by Thomson First Call expect Yahoo to post second-quarter sales of $1.08 billion.
http://www.nytimes.com/2006/04/19/business/19yahoo.html





AT&T and Verizon: We Own Your Congress
Preston Gralla

Wonder why AT&T and other telcos are winning the net neutrality debate, and just about every other issue that comes before government? It's simple: money talks. Telcos spent a whopping $60 million in lobbying money at the federal level alone, second only to the health care industry, Business Week reports.

That's bad enough, but there's worse as well. The Center for Public Integrity compiled a list of the top 100 money-givers to Congress between 1998 and 2005, and telcos dominate the list.

Here are a few of its findings:

* Verizon Communications Inc. $81,870,000
* SBC Communications Inc. $58,035,037
* AT&T Corp. $53,349,499
* Sprint Corp. $47,276,585
* BellSouth Corp. $33,732,827
* Qwest Communications International Inc. $24,523,480

So next time you wonder why telcos always seem to get their way with the feds, just look at those numbers. They're buying the best Congress money can buy.
http://techsearch.cmp.com/blog/archi...verizon_1.html





The RIAA vs. the EFF: Who Will Redefine Copyright for the Digital Age
Hannibal

In a recent editorial, an attorney representing a defendant in one of the RIAA's 19,000 lawsuits over P2P technology made the case that the RIAA's arguments in Elektra v. Barker, if accepted by a judge, have the potential to undermine the very nature of the Internet. Here at Ars, we've previously touched on the RIAA's radical notion, first introduced in this case, that simply making files available on a shared folder constitutes infringement (regardless of whether the files were actually accessed by another party). The new editorial reiterates the dangerous absurdity of parts of the RIAA's arguments, but in favorably citing a recent EFF amicus brief in the case it also raises the question of who, in fact, is really the party arguing for the pre-digital status quo.

The EFF's most recent brief, summarized here and available here, has been covered here at Ars before. But it's worth taking another good look at, because of the odd and ironic way that it places the EFF on the side of arguing for a pre-Internet model of distribution, while the RIAA, in its own twisted fashion, tries to drag copyright law into the digital age.

To recap briefly, the EFF has caught the RIAA and their allies (the MPAA and the US Attorney General's Office) trying to sneak through the courts a complete overhaul of existing copyright legislation. The change in the definition of a copyright owner's exclusive right of distribution that the RIAA seeks to have the court acknowledge is at once troubling and fascinating—troubling because, hey, it's the RIAA that's pushing this, and we all know they're Pure Evil(TM), and fascinating because in its own odd way the attempted alteration would "update" copyright law to take account of the reality of digital distribution in a manner that it currently does not.
Exclusive rights, and the RIAA's arguments against P2P

US copyright law gives rightsholders the following exclusive rights over their work:
The right to reproduce the copyrighted work;
The right to prepare derivative works based upon the work;
The right to distribute copies of the work to the public;
The right to perform the copyrighted work publicly; and
The right to display the copyrighted work publicly.

(Bitlaw has a good, quick rundown of these rights. For even more practical, layperson-friendly discussion of these issues, I like Nolo's The Copyright Handbook: How to Protect and Use Written Works, by Stephen Fishman.)

In its lawsuits against P2P users, the RIAA wants to argue two things. First, it wants to argue that illegal downloaders are infringing on the copyright owners' exclusive right to reproduce their copyrighted work. As the very word "copyright" suggests, the rightsholders' exclusive right to make copies of a work is at the very heart of copyright law, and the EFF in fact concurs with the RIAA on this issue, holding that P2P downloading is technically copyright infringement from a legal standpoint.

More controversially, though, the RIAA would like to argue that those who upload copyrighted works onto P2P networks are infringing on rightsholders' exclusive right to distribute copies of the work to the public. This argument that a digital transmission of a work that results in a copy on the other end equals distribution is where things start to get really, really interesting.
Defining "distribution"

The letter of US copyright law specifically and clearly stipulates that in order to "distribute" a copyrighted work, an actual, physical exchange of a material object (a book, a CD, a tape, etc.) must take place. In this sense, the Copyright Act isn't really cognizant of the concept of digital distribution of copyrighted works—it just isn't on the law's radar at all. Apparently, the DMCA didn't really fix things in this regard, either.

As far as the courts are concerned, the issue of distribution and digital transmission is significantly murkier. The EFF's brief illuminates just how murky the issue is, with some courts taking for granted that digital transmission does constitute infringing distribution and at least one lower court hewing to the letter of the law and ruling that it does not. The EFF therefore urges the US District Court to be the first court to explicitly tackle the issue of digital transmission and distribution, and to define "distribution" so that it requires the exchange of a physical object.

The part that's fascinating and somewhat ironic, at least to me, is that the EFF is now in the position of arguing in favor of an outmoded, pre-Internet concept of "distribution," and one that runs directly counter to the plain sense of the way that both the language and the concept of "digital content distribution" are currently employed in just about any online venue where the topic is discussed. Would anyone argue that Apple is not, in fact, in the business of distributing music now? (Actually, that's a bad example, because Apple may be arguing exactly that in their ongoing dispute with Apple Records. Still, you get the point.)

Anybody who moves music, video, and text online is in the "distribution" business, more or less. That's how we think about it, and that's how we talk about it. So does it really make sense to continue to define distribution strictly in terms of the transfer of material objects, and then to adopt the fiction that consumers of digital content are somehow licensed "reproducers" and that the distributors of it are not actually doing any "distributing"?

I'm not actually sure what's to be done here. On the one hand, the RIAA can't be allowed to single-handedly remake copyright law so as to favor its own business model and possibly wreck the Internet. But on the other hand, copyright law needs to be fundamentally rethought for the digital age. I just don't trust anyone in Washington—neither those in Congress who'll vote on any overhaul legislation nor those on K Street who'll write it—to do anything but stick it to consumers and innovators. The last time we tried something even remotely like this, we got the DMCA.
Conclusions

Ultimately, the Elektra v. Barker case is about case law and legal precedent. On one side is the RIAA and their allies, who'd like to have a high court "clarify" in a ruling that will bind the lower courts that digital transmission is indeed "distribution" per the Copyright Act. On the other is the EFF, who would also like a binding, clarificatory ruling to the opposite effect.

However this case turns out, it's going to have major repercussions for everyone in the digital content business. And however it turns out, I have no doubt that it's only the opening salvo in a battle that will take at least a decade to sort itself out. This case will be appealed, regardless of who wins, and I'm certain that the issue of digital transmission and distribution will eventually be dealt with in Washington, either by the US Supreme Court or by Congress.
http://arstechnica.com/news.ars/post/20060418-6626.html





Online Music Hitting False Notes
Troy Wolverton

The online music business is booming, but is anybody making real money at it?

If you're doing the actual selling of music -- or music subscriptions -- to consumers, the short-term answer appears to be not much. Competition, marketing expenses, a faulty business model, slow-changing consumer attitudes and the use by some companies of music as a lure to sell other goods and services have all conspired to keep the business largely in the red.

And that's not likely to change anytime soon, no matter -- or arguably because of -- how many millions of songs Apple Computer (AAPL:Nasdaq) sells through its iTunes music store, analysts say.

"It's a long-haul business right now," says Aram Sinnreich, managing partner of Radar Research, a Los Angeles-based consulting firm. "It will be at least three years before anyone can make a serious profit selling digital music," largely because of the hold that Apple has on the market.

And even then, Sinnreich says, making money will require some serious changes in the way consumers are buying online music.

"Nobody will ever make money from selling 99-cent downloads" as Apple does through iTunes, he says. "There's not a margin in it."

Regardless of what's happening on the bottom line, though, companies are certainly seeing huge growth in digital music sales. The Recording Industry Association of America, for instance, estimates that the U.S. retail market for digital downloaded songs and albums grew to $503.6 million in 2005 from $183.4 million in 2004. The RIAA estimated that U.S. consumers spent another $421.6 million on ring tones and other mobile music content last year and some $149.2 million on music subscription services.

(The organization did not have estimates for prior-year ring tone or subscription sales, meaning that according to the RIAA's data, they effectively grew from a base of $0).

This boom has contrasted sharply with the overall music industry, where retail sales fell about 0.6% last year to $12.27 billion.

At Apple in particular, iTunes revenue is becoming increasingly important to the company's overall sales. The company groups iTunes song sales with sales of iPod accessories. In the first quarter, that unit tallied $491 million in revenue, or 8.5% of the company's overall sales, up from $177 million, or 5% of the company's sales, in the same period a year earlier. Apple has sold more than 1 billion songs through iTunes; citing data from Nielsen SoundScan, Apple CEO Steve Jobs has said that iTunes accounts for more than 80% of the total U.S. market for digital music sales.

That success to date has had some investors and financial analysts salivating at the company's prospects, giving yet one more reason to be bullish on Apple's shares.

But they may be getting a bit ahead of themselves. While Apple says iTunes is operating in the black, financial analysts generally estimate that the store is marginally profitable at best.

And even that performance seems to be a rare exception. Napster (NAPS:Nasdaq), which operates a rival online music service, is losing considerable money each quarter. Another rival, RealNetworks (RNWK:Nasdaq), has been consistently losing money on its actual operations, which exclude gains from investments or legal settlements, the company's real money makers of late. And few analysts think Yahoo!'s (YHOO:Nasdaq) music service, which is charging a bargain basement price of $5 a month for a subscription, is operating in the black either.

The problem digital music vendors face is that they haven't figured out a business model that works, analysts say. The industry is reminiscent of the early days of e-commerce, when everyone and his brother rushed in to set up a Web store but nobody had figured out how to turn a profit.

Selling downloads a la carte isn't a profitable business, because for every 99-cent song (like Apple sells), stores have to pay the music labels 65 cents or more, according to analysts' estimates. Add in marketing and technology costs, and the margins start to become very slim.

Many analysts see subscription services as the long-term future for the business, because they offer greater margins and recurring revenue. Indeed, Chris Gorog, Napster's CEO, estimates that the margins for subscription services are "four times" greater than those for individual downloads.

But today's subscription services aren't compatible with the iPod, which is by far the most popular digital music player. And Apple has shown no signs of losing its lead in the digital music player market or any interest in making iTunes subscription-based music -- or opening the iPod to other music services.

The iPod's success is holding back subscription services "to a tremendous degree at this point," Sinnreich says.

But other analysts say that's not all Apple's fault. Even if selling songs one-by-one online isn't great for music vendors, it's the method consumers are most familiar and comfortable with. Apple has made much of the idea that iTunes allows consumers to own, not rent, music, and that message has resonated with consumers who for years have been buying CDs and, before that, LPs and tapes.

For subscription services to take off, it's going to "require far more consumer education and changes in behavior compared to how they have traditionally purchased music," says Ross Rubin, an analyst with industry research firm NPD Group.

Some analysts believe that there's little future in merely selling songs -- or even subscriptions to digital music. Profitable companies will be those that defray costs in other ways, who use music to lure in customers for other services or products, or who see music as an incremental, not primary, revenue source.

Apple, for example, uses iTunes to support sales of iPods -- not the other way around. Cellular network providers are already making considerable incremental profits off ring tones, analysts say.

If so, the online world could end up looking a lot like the offline world, where music store companies have been closing and consolidating in recent years. In their place, most music CDs are now being sold through general or electronics retailers such as Best Buy (BBY:Nasdaq) or Wal-Mart (WMT:NYSE) , companies that often use discounted CDs to lure in customers to buy other products.

MusicNow, the digital music service from Time Warner's (TWX:NYSE) AOL unit, is profitable, says Neil Smith. But that has a lot to do with being able to defray marketing costs across AOL's huge user base, he says. And selling subscriptions to MusicNow is just one piece of a larger strategy of marketing music and music-related content, he says.

At this point, the companies that make money on digital music "won't be the pure-play music companies," says Smith. "If history is any guide, the only people that make money [in music] are the [music] labels."
http://www.thestreet.com/_googlen/te.../10279448.html





Music Labels Jockey for 'American Idol' Exposure
Jeff Leeds

Question: Who's going to get the ax this week on "American Idol"?

The answer for many in the recording industry is: Who cares? More pressing by far, in certain music circles these days, is who wins and loses among record labels trying to land their artists and songs on the blockbuster television show, now in its fifth season.

Most of the show's viewers may obsess over the drama leading to the coronation of a new Idol. And music executives are watching too: the show tends to bestow commercial success on its champions, who have the benefit of starting their music careers with a built-in bloc of voters behind them. Viewers, not to mention new fans, snapped up CD's from past winners, particularly Carrie Underwood and Kelly Clarkson, by the millions.

But the show has become so popular that even the songs performed on the show — either by established artists making guest appearances or by the contestants themselves — receive a sales boost in some cases. While the exposure does not spark sales of every song or artist every week — far from it — the sales spikes so far have been enough to turn "American Idol" into a coveted booking for established artists looking to return to the mainstream or maintain their public appeal. As a result, the on-screen anxiety over who makes the cut each week is now being mirrored by off-screen hand-wringing over how to tap the exposure offered by the show, which is due to close out its season late next month.

For one clear measure of the "Idol" Effect, consider the Canadian singer-songwriter Daniel Powter. In early February Mr. Powter, who had enjoyed a radio smash in Europe last year with his song "Bad Day," had only begun to develop fans in the United States.

But on Feb. 7 "American Idol" producers started using the song as a send-off for departing contestants. Sales of the digital single of "Bad Day" have since exploded, with the song logging multiple weeks at No. 1 as downloads routinely exceed 110,000 copies a week, totaling more than 690,000 copies through April 9, according to Nielsen SoundScan data. Mr. Powter's CD is expected to have its debut on the national sales chart today in the Top 10, a respectable start for a new artist.

Geoff Mayfield, a senior analyst at Billboard magazine, notes that the show has appeared to spur sales for particular acts since its first season. But this year, "as the show gets more established as a ratings giant, there are more people in the consumer base that are reacting to it," Mr. Mayfield said. "There's no longer a variety show like there used to be. Aside from Leno and Letterman, what do we have on television that reminds us of what a variety show used to be? It serves that purpose."

"American Idol," which has been averaging 30 million viewers a night, has crushed every rival on the television schedule, including the Grammy Awards, which has not gone unnoticed by music executives, who had viewed the recording academy's annual telecast as the biggest opportunity of the year to secure television exposure for artists. So as the stature of "Idol" has increased, so has the eagerness of record labels hoping to place their artists on the show.

Nigel Lythgoe, one of the executive producers of the show, said that in choosing the themes and artists that will appear, "we've always said we are going to introduce back to America the greatest songwriters in the world."

Yet the show has also made a habit of picking artists who happen to have new CD's to promote, including Barry Manilow, Kenny Rogers and Shakira.

When it comes to determining which artists will have the chance to bask in the show's exposure, the clout of "Idol" means that the producers can largely choose whom they wish. "They're holding all the cards," said Monte Lipman, president of Universal Records.

The show has a deal under which the winner's CD will be distributed by Sony BMG's RCA Music Group. That label is also home to acts that are appearing this season, including Mr. Manilow and Rod Stewart. But the producers have been willing to spread the exposure a bit; they had an episode in which contestants had to perform Stevie Wonder tunes (he's with Universal). And Mr. Rogers, who records for EMI, has been a guest.

Artists do occasionally decline. Mr. Lythgoe said the producers had extended an invitation to Prince, though it is unclear whether that famously private star would be willing to spend time working up song arrangements with the contestants, as past guests have. "He's a very shy man," said Mr. Lythgoe.

It was the show's obvious commercial power during past seasons that encouraged the producers to make guest appearances a staple of the show this year. "We realized we had this influence as well," Mr. Lythgoe said. "We're giving them something back. They're not just coming on this corny American show."

Major stars, he added, "know longevity in this business comes from riding the waves that come by you. At the moment, we are the wave."

The producers also carry most of the influence over which songs are performed on the show. As Mr. Lythgoe explained it, the producers choose an over-arching theme for each week's show, and then provide contestants with extensive lists of songs to choose from that both fit the theme and have been approved for use by music publishers, who control the song copyrights.

There are still efforts to game the system, as one senior label executive affirmed, agreeing to speak only on condition of anonymity, to avoid angering the show's producers. He said he tried to send packages of his company's music to the contestants in the hope that one might choose to perform one of its tunes. Mr. Lythgoe responded, "We've got a very good security firm."

It is not clear exactly what determines whether a particular contestant's performance will generate a sales spike. The results can be unpredictable. On the March 1 episode, for example, the contestant Chris Daughtry sang "Hemorrhage (in My Hands)," which had been a hit for the rock band Fuel in 2000. Sales of the digital single soared to more than 17,900 copies the following week, up from just over 1,100, according to Nielsen SoundScan.

Mr. Daughtry's timing was fortuitous: Fuel recently parted ways with its lead singer, and is now auditioning for a new one. "It just came out of the blue," said Paul Geary, the band's manager. "Our Web site lit up. This guy gets on the air and nails the band's biggest hit. I think he'd be a contender for the gig."
http://www.nytimes.com/2006/04/19/ar...on/19idol.html





P2P saved the music

New Paradigm Opened Market To New Music
Christopher Hutsul

Crooner Michael Bublé was the toast of the Junos April 2. It didn't matter that the throwback crooner (who's the poor man's Harry Connick Jr., who is a poor man's Dean Martin, who is the poor man's Frank Sinatra) has nothing to do with the real musical revolution that's going on in Canada right now.

If the event were truly about excellence, alternative Canadian acts Arcade Fire, Broken Social Scene, Metric and Stars would have been centre stage. Those bands, and a handful of other performers, have turned on music fans around the world.

The buzz at festivals like SXSW in Texas and CMJ in New York - important gatherings for the makers of new music and the people who listen to it - is all about this family of indie rockers from the north.

Our scene is the darling of the New York music press: it seems like every week there's another story in the New York Times about Canada's pop mastery. Indeed, Canadian rock stars, whether it's reflected in CD sales or not, are on fire. For the first time since Martha and the Muffins, it's cool to be a Canadian rocker.

It only makes sense that the recording industry would showcase an artist like Bublé over an indie act like Arcade Fire, because in the end, Bublé fans pay for music, while many Arcade Fire fans don't. It's not that Arcade Fire fans are bad people - it's just that those who'd seek out new, challenging music are also more likely to have the technological wherewithal to download for free. I suspect that Bublé fans are, in a sense, a throwback too. To them, the place to find music is the record store. If you're in the business of selling CDs, you're going to want to promote musicians that appeal to this kind of audience. The hype at the Junos won't hurt.

But the real story is the renaissance that's happening in Canadian pop music.

It's hard to quantify the success and influence of the aforementioned bands in any traditional sense, because so many of their fans have illicit copies of their material. On paper, it might appear that Bublé has sold many more albums than, say, the new wave Montreal band Wolf Parade. But if you were to count the actual tracks distributed, legally and illegally, you'd see a different picture.

The Canadian Recording Industry Association says the industry's in a slump. That may be true, but music as an art form is being enjoyed more fervently than it has since, well, probably ever.

It's hard to pin down what, exactly, brought about this renaissance. But isn't it curious that Canada has enjoyed this renaissance half a decade after the recording industry said file sharing would kill music?

The advent of Napster - a simple computer program that allowed people to share MP3 files freely - sent the industry into a paranoid tizzy. Big-time performers such as Metallica and Dr. Dre risked their street cred to denounce the software. Dour anti-piracy campaigns were launched to make us feel guilty about illegal file sharing. Keep music coming, they said, as if file sharing were the death blow.

Not only has music survived, it has flourished - at least in a spiritual sense, if not in CRIA's books.

I believe Napster re-engaged us with music, and reminded us how rewarding the discovery of music can be. It exposed us to sounds we wouldn't have heard through traditional channels, and challenged us to expand our repertoire of listening material. For audiences, it provided an opportunity to sample music from genres that we previously had no access to.

Your neighbourhood record store stocked only the tiniest sliver of music being made around the world.

Now, access to all this new music, and to volumes from the pioneers of blues, rock and jazz, is dizzying. It has led, ultimately, to the creation of more complex, informed content.

I chatted with Mark Mothersbaugh of rock group Devo on the subject a few weeks ago. He contrasted our current cultural environment with that of the 1970s. Today's music catalogue is far more diverse than what was available to him in his youth. He says file sharing has helped music mature.

"When I was a kid I went to the record store and there was this one little bin with 30 records in it called `alternative,' " says Mothersbaugh. "Now that bin is gigantic. It's bigger than a record store ? it's the whole Internet."

Mothersbaugh disagrees with the notion file sharing is inherently wrong, and says we need to look at the bigger picture.

"Historically speaking . . . music wasn't always something that you could be charged money for," he says. "Music was something free. (We didn't start charging for music) until recorded music came into place . . . and we were able to say `Hey, if you listen to this music you have to pay me.' That's a relatively new concept in our civilization.''

We hear a lot less about file sharing today - Napster became a pay site a la the iTunes music store. But it still goes on.

For new bands that don't have the backing of major labels, the business model has changed. Instead of recording an album, sitting back, waiting for royalties to come in, whining about how the record label has mishandled your publicity, you hope that young MP3 fans will do your viral marketing for you. So that when you pull into Nowhere USA, there's a venue packed with screaming fans.

It's a more intimate financial relationship between the performer and his or her audience.

While this new model may not benefit the recording industry, it spells good times ahead for the people who make music and love music. It's a better place to be than the pre-Napster era, when alternative Canadian acts could neither sell records nor draw a crowd.

Instead of bringing an end to music, file sharing has, in a sense, saved it.

After last week's Junos, it dawned on me that the only real threat to music in Canada is the ongoing glorification of pablum over art.
http://www.thestar.com/NASApp/cs/Con...d=968350072197





Tech Firms: Don't Fence Us In
AP

Media and technology companies warned Tuesday that new European Union broadcasting rules could restrict the growth of emerging media formats such as video broadcasts through the internet and mobile phones.

An alliance of companies, including ITV, Yahoo, Vodafone, Intel and Cisco Systems, warned that a European Commission proposal to impose rules for traditional broadcasters on new media providers could have "unintended consequences" and hurt investment.

The draft legislation proposed in December aims to level the playing field by applying the same rules to everyone.

However, Intellect, a London-based business lobby representing technology companies, said Tuesday that it would be difficult to enforce strict rules designed for television broadcasters on video transmitted over the internet or on third-generation mobile phones.

The EU proposal could ultimately mean less investment for an area that has enormous growth potential — leading to fewer firms, less innovation and higher prices, the group said in a statement.

"Many services unconnected to scheduled broadcast television will be unintentionally caught," it said.

"Citizen media such as blogs, video-casts and the like are one of the most exciting developments enabled by new technology. This phenomenon has the potential to create new businesses ... but this proposed regulation severely risks stunting its growth," it said.

EU officials were not immediately available Tuesday to respond to the criticism, but the EU executive has insisted that it has no plans to regulate the internet.

The European Internet Services Providers Association is also concerned about the "lack of clarity" in the EU draft law and is unsure what kind of technologies would be governed by the stricter rules, the association's Secretary General Richard Nash said.

"The U.K. government has taken a pro-active line stimulating the debate. In other countries, there's less awareness of it," he said.

The law will need the backing of the European Parliament and 25 European Union governments before it can enter into force. The Parliament is likely to vote on it later this year.
http://www.wired.com/news/technology/0,70681-0.html





Apple to Build New 50-Acre Campus In Cupertino
Katie Marsal

Apple Computer is planning to build a new, 50-acre campus about a mile from its present headquarters in Cupertino, Calif., chief executive Steve Jobs revealed this week.

"What's happened at Apple is that our business has basically tripled in the last five or six years," Jobs said on Tuesday evening at a Cupertino city council meeting. "And what that's meant, is that our headcount in Cupertino has dramatically expanded."

"We are in 30 other buildings now," Jobs continued, "and they keep getting further and further away from the campus."

Having its employees spread over so many locations quickly became "very frustrating" for Apple, Jobs said. So several months ago the company decided that it needed to build a new campus.

"We've rented every scrap of building we could find in Cupertino," Jobs explained.

With a dearth of corporate real estate in Cupertino, Jobs and company initially feared that they would have to shop for expansion space outside the town that has been Apple's home for nearly three decades.

"After looking at a lot of things, we found something in Cupertino that was a possibility.... It was more expensive, a lot more expensive," Jobs told council members.

He explained that Apple had gone ahead and acquired 9 separate properties located next to each other along Interstate 280 near Pruneridge Rd. -- across the street from the 100-acre HP parcel.

Jobs said the company plans to level the buildings that currently reside on the combined 50-acre lot to form what will eventually be the company's second home, about a mile away from its current headquarters.

"We're pretty thrilled," Jobs told the city council. "Since we're your largest taxpayer, I thought you might be happy for us."

Jobs said it would likely take three to four years to design and build the campus that will accommodate 3,000 to 3,500 employees.

"We'll probably get larger still," Jobs said referring to the time period beyond its corporate expansion.
http://www.appleinsider.com/article.php?id=1682





File-Sharing Still Campus Problem
Josh Hirschland

This year, University administrators have considered subscribing to a legal online music service. Through an agreement with a program like Napster, iTunes, or, most likely, Cdigix, Columbia would join more than 100 colleges and universities nationwide in giving students this kind of service.

I spoke to administrators dealing with the potential agreement in November, expecting that they would talk about the deal as a perk that tour guides could talk about while promoting Columbia. Instead, they wrapped the deal in language not of rights, but of liability.

“It’s a way of being proactive because eventually, universities become easy targets,” Robert Taylor, senior associate director at Columbia’s Student Development and Activities office, said. “We want to figure out what proactive steps we can take, so we can at least demonstrate that we’re serving our educational mission giving students opportunities to not engage in illegal behavior.”

However, schools that have subscribed to the legal services have found wildly uneven results. In an interview with the Chronicle for Higher Education, administrators from four schools differed on whether their schools had received fewer file-sharing complaints from the recording industry.

“A key concern is that students can really get into trouble,” Walter Bourne, head of IT Security and Policies for Columbia University Information Technology, noted. “There are a number of students here who are talking to lawyers about paying money.”

In four rounds of lawsuits filed last year against users of i2hub, a file-sharing program shut down in November that allowed those at research institutions to swap songs over a super high speed network, 635 users, including 39 at Columbia, were targeted by the Recording Industry Association of America, and while the RIAA has been quiet on how much these suits actually cost, various reports have put the settlement number at between $3,500 and $4,000. Also, in the last week, there have been reports that RIAA officials have been recommending that students facing fees for sharing music drop out of school.

However, subscribing to a legal service is no guarantee that people will use it. Studies held last spring by several schools that began using legal music services found that many students never switched over to them.

There has been a slowdown as of late. Since i2hub was shut down (an action that founder Wayne Chang has said was due to pressure from the RIAA), no University users have been specifically targeted in lawsuits.

The slowdown appears to be an anomaly. In a statement, the RIAA said, “We have a comprehensive approach to addressing piracy on college campuses: ... [including among other actions,] when necessary, enforcing our rights against individuals who violate the law. We have and will continue to pursue each of these components as part of our overall effort to discourage students from illegal downloading and encourage them to turn instead to legal services.”

It’s not as though the targets have gone away: in total, according to an October release from Internet traffic checker Big Champagne, there are more than 9 million monthly users of p2p programs like KaZaA and eDonkey worldwide, double the number that there were just two years prior. And though i2hub may have shut down, there are still hubs hosted on college campuses that facilitate file sharing. These are all open to the RIAA’s scrutiny and even private hubs have been made the brunt of RIAA action in the past.

The fact remains that the only way to avoid lawsuits is to have individuals stop downloading illegally, and nothing—not the shutdown of i2hub, not schools subscribing to legal services, and not having front page stories about their classmates being sued out of thousands of dollars and potentially having to leave school—has proved effective in doing that.
http://www.columbiaspectator.com/vne.../443de916e5658
















Until next week,

- js.


















Current Week In Review






Recent WiRs -

April 15th, April 8th, April 1st, March 25th

Jack Spratts' Week In Review is published every Friday. Please submit letters, articles, and press releases in plain text English to jackspratts (at) lycos (dot) com. Include contact info. Submission deadlines are Wednesdays @ 1700 UTC.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black