Old 23-05-07, 01:34 PM   #2
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,017

New Internet Radio Royalty Fees Pressure Webcasters

Internet-based radio stations are battling the record industry -- and the clock -- as they face hefty new copyright fees in two months.

Radio listeners rarely pay for music. That burden is usually placed on stations or, in the case of online music, Webcasters such as Live365, National Public Radio, Pandora Internet Radio and AOL Music.

Webcasters pay royalties, as a percentage of their earnings, to collection companies, which then split the royalties between the music industry and musicians.

But with CD music sales decreasing and with audience shifting online to cheaper music alternatives, the Copyright Royalty Board, a Library of Congress panel, decided in March to increase royalty fees for streamed online music.

Under the new rules, Webcasters would no longer pay royalties as a percentage of earnings. Instead, they would pay a fee each time a user listens to a song.

Webcasters are multiplying the number of songs streamed each year to the estimated 70 million Americans who listen to Internet radio by the $0.0008-per-song royalty rate set for 2006 by the federal panel -- and they do not like what they see.

"We don't have the money to pay up," Live365 Chief Executive Mark Lam told the Washington Post, calculating that the fees his radio network would have to pay based on its 4 million listeners per month could rise from $1.4 million in 2006 to between $7 million and $8 million in 2007.

With the royalty rate set to rise to $0.0019 per song per listener in 2010, large Webcasters would see an increase in royalty expenses of about 40 percent to 70 percent of revenues, according to a congressional estimate. For small Webcasters, the royalty increase could be up to 1,200 percent of revenues.
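
The arithmetic behind those projections is simple to sketch. In the Python example below, the per-play rates are the ones quoted above, while the audience size, listening hours and songs-per-hour figure are invented purely for illustration -- a back-of-the-envelope sketch, not any station's actual books.

```python
# Back-of-the-envelope royalty math under the CRB's per-performance rates.
# The per-song rates ($0.0008 for 2006, $0.0019 for 2010) come from the article;
# the audience size, listening hours and songs-per-hour figure are assumptions.

SONGS_PER_HOUR = 15  # assumed average for a music-heavy stream

def annual_royalty(listeners, hours_per_month, rate_per_play):
    """Listeners x monthly hours x songs per hour x 12 months x per-play rate."""
    plays_per_year = listeners * hours_per_month * SONGS_PER_HOUR * 12
    return plays_per_year * rate_per_play

# Hypothetical mid-sized webcaster: 100,000 monthly listeners, 20 hours each.
for year, rate in [(2006, 0.0008), (2010, 0.0019)]:
    fee = annual_royalty(listeners=100_000, hours_per_month=20, rate_per_play=rate)
    print(f"{year}: ${fee:,.0f} per year at ${rate} per song per listener")
```

Every factor in that product is a lever: double the audience or the listening hours and the bill doubles too, which is why per-performance royalties and percentage-of-revenue royalties diverge so sharply for fast-growing stations.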

It's an increase Webcasters say would devastate their industry. They predict that, in the short term, few small online stations will survive the fee change, which includes 18 months of retroactive payment.

Greg Scholl, president of the online music distribution company The Orchard, told National Public Radio the change could also have a long-term effect.

"Higher rates means less diversity of programming; it means slower development of the digital music space; and it means more difficult time for independent artists and labels to take advantage of this incredible new medium ... to build audiences and make and sell music," Scholl said.

With the fee change slated to take effect July 15 -- dubbed "the day the music dies" by Webcaster supporters -- the clock is ticking for Webcasters, who are scrambling to lobby Congress to change the royalty fees as they re-evaluate how they do business online.

Webcasters -- large and small, public and private -- have joined the SaveNetRadio coalition, seeking support from listeners, musicians and politicians to fight the copyright board's new fees.

SaveNetRadio supports the Internet Radio Equality Act, a bipartisan bill introduced in the House and Senate that would set royalty rates comparable to those paid by satellite radio -- about 7.5 percent of revenue.

The bill also would set special rules to limit fees paid by non-commercial online radio stations, such as college stations, and public online stations, such as National Public Radio.

But SoundExchange, the nonprofit company established by the Recording Industry Association of America to collect digital music royalties, called the Webcaster coalition's lobbying move a "money grab."

SoundExchange said the bill would save larger companies that operate online stations -- AOL, Yahoo!, Microsoft, Clear Channel -- from paying up to $100 million in royalties, money that would be kept out of the artists' pockets.

"The fact that [SaveNetRadio] would advance the profit-grinding agenda of big Webcasters without regard to the artists they are hurting speaks to SaveNetRadio's true mission and evident hypocrisy," Rebecca Greenberg, national director of the Recording Artists' Coalition, said in a SoundExchange press statement.

Many independent musicians, however, depend on online radio for exposure, and Laurie Joulie, director of the artists' collective Roots Music Association, said they would side with the Webcasters.

"Artists understand that when we proactively support the overall viability of the industry, we support them," Joulie wrote in a BusinessWeek editorial.

SoundExchange Executive Director John Simson wrote in his BusinessWeek editorial that the new fee is reasonable when broken down by listener -- $0.68 per month for a 40-hour-per-month online radio listener.
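
Simson's per-listener figure can be roughly sanity-checked with the same arithmetic. The songs-per-hour estimate and the intermediate per-play rate below are guesses (the article quotes only the 2006 and 2010 rates), so this is a plausibility check, not his actual calculation.

```python
# Rough check of the "$0.68 per month for a 40-hour-per-month listener" figure.
# ASSUMPTIONS: about 15 songs per hour, and per-play rates bracketing the
# article's quoted 2006 ($0.0008) and 2010 ($0.0019) numbers.
hours_per_month = 40
songs_per_hour = 15
for rate in (0.0008, 0.0011, 0.0019):
    monthly = hours_per_month * songs_per_hour * rate
    print(f"${rate:.4f} per play -> ${monthly:.2f} per listener per month")
```

At rates between the 2006 and 2010 figures, the result lands in the same ballpark as Simson's 68 cents.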

"I think in any new area like the Internet there will be some businesses that survive and some that don't," Simson told the New York Times.

"Whether you're a corner market versus a big supermarket, you both have to pay the same amount for the milk that you sell. It's not like the little guy gets a cheaper price for milk," Simson said, NPR reported.

Tim Westergren of Pandora Internet Radio said in a 10 Zen Monkeys webzine interview that he viewed his popular 6.5-million-listener, personalized radio company as a "wildly promotional service" that shouldn't be subject to the increased fees.

To keep costs down as the July 15 deadline looms, Pandora has closed its service to international audiences because of "legal realities."
http://www.pbs.org/newshour/extra/fe...adio_5-21.html





Artists and Labels Seek Royalties From Radio
Jim Puzzanghera

With CD sales tumbling, record companies and musicians are looking at a new potential pot of money: royalties from broadcast radio stations.

For years, stations have paid royalties to composers and publishers when they played their songs. But they enjoy a federal exemption from paying the performers and record labels because, they argue, the airplay sells music.

Now, the Recording Industry Assn. of America and several artists' groups are getting ready to push Congress to repeal the exemption, a move that could generate hundreds of millions of dollars annually in new royalties.

Mary Wilson, who with Diana Ross and Florence Ballard formed the original Supremes, said the exemption was unfair and forced older musicians to continue touring to pay their bills.

"After so many years of not being compensated, it would be nice now at this late date to at least start," the 63-year-old Las Vegas resident said in Milwaukee, where she was performing at the Potawatomi Bingo Casino. "They've gotten 50-some years of free play. Now maybe it's time to pay up."

The decision to take on the volatile performance royalty issue again highlights the rough times the music industry is facing as listeners abandon compact discs for digital downloads, often listening to music shared with friends or obtained from file-sharing sites.

"The creation of music is suffering because of declining sales," said RIAA Chief Executive Mitch Bainwol. "We clearly have a more difficult time tolerating gaps in revenues that should be there."

It's not the first attempt to kill the exemption. In the past, politically powerful broadcasters beat back those efforts.

But with satellite and Internet radio forced to pay "public performance royalties" and Web broadcasters up in arms about a recent federal decision to boost their performance royalty rate, the record companies and musicians have a strong hand.

Broadcasters are already girding for the fight, expected to last more than a year. In a letter to lawmakers this month, the National Assn. of Broadcasters dubbed the royalties a "performance tax" that would upend the 70-year "mutually beneficial relationship" between radio stations and the recording industry.

"The existing system actually provides the epitome of fairness for all parties: free music for free promotion," wrote NAB President David Rehr.

Performance royalties are collected from traditional radio stations in nearly all major industrialized countries, but U.S. musicians and record companies cannot collect them because there is no similar royalty on the books here.

"The time comes that we really have to do this," said John Simson, executive director of SoundExchange, a group created by the recording industry to collect and distribute Internet and satellite music royalties.

For record labels and musicians, addressing the issue now is crucial because digital radio, now being rolled out, allows broadcasters to split a signal into several digital channels and play even more music exempt from performance royalties.

Groups preparing to push Congress to change the law include the RIAA, the National Academy of Recording Arts and Sciences, the American Federation of Musicians and other organizations. The U.S. Copyright Office has long supported removing the exemption.

The groups have a major ally in Rep. Howard L. Berman (D-Valley Village), who now chairs the House subcommittee dealing with intellectual property law. Berman is "actively contemplating" leading a legislative push to end the exemption.

"Given the many different ways to promote music now that didn't exist as effectively when this original exemption was made," he said, "the logic of that I think is more dubious."

Congress granted composers and publishers of music copyright protection in 1909. But the recording and radio industries were in their infancy, and the actual musical recordings were not covered. Congress extended limited copyright protection to musical performances in the 1970s to guard against an earlier form of piracy: the copying of records and tapes.

But by then, broadcasters were influential enough to snuff out any talk of making them pay musicians and recording companies for playing their music.

"The old saying is the reason broadcasters don't pay a performance royalty is there's a radio station in every congressional district and a record company in three," said Chris Castle, a music industry lawyer.

Broadcasters even successfully fought a group of singers and musicians led by Frank Sinatra in the late 1980s who tried to pressure Congress into changing the law. Broadcasters also prevailed in 1995, when Congress exempted them from new fees for digital recordings that everyone else had to pay.

"Congress has always recognized that broadcasters generate enormous sums of revenue to record companies and artists in terms of airplay," said NAB Executive Vice President Dennis Wharton. Radio stations also have public-interest obligations that satellite and Internet broadcasters don't have to worry about, he said.

Satellite radio, Internet broadcasters and cable television companies offering digital music channels now pay performance royalties. The recording industry and musician groups say it's time for traditional radio stations to pony up.

"Most of the artists in the world are kind of middle-class cats, trying to piece together a living," said Jonatha Brooke, a singer-songwriter who is part of the Recording Artists Coalition advocacy group. "It's important to be recognized and paid for our work."
http://www.latimes.com/entertainment...,1028211.story





SoundExchange Offers Olive Branch to Small Webcasters Over Royalties
Eric Bangeman

Smaller webcasters will not have to pay higher royalties on Internet broadcasts, at least not until 2010. SoundExchange, the licensing authority backed by the Big Four labels, has relented on its desire for higher royalties and announced that it will allow smaller webcasters to continue paying royalties at the same rates they have since 2002. Larger, commercial webcasters aren't getting any love from SoundExchange; they will have to begin paying the higher rates next month, which SoundExchange says will ensure that the "subsidy" is available only to small webcasters who are "forming or strengthening their business."

Those higher rates result from a March Copyright Royalty Board ruling that drastically raised the royalty rates for Internet radio webcasters. Under the CRB's new fee structure, webcasters would have to pay SoundExchange a flat fee for each user on every song that they stream. The fees would double over the next five years and are retroactive to 2006.

NPR and a group of webcasters quickly appealed the Copyright Royalty Board's decision, arguing that the Board's ruling was an "abuse of discretion" that would result in many webcasters going dark. A panel of judges upheld the CRB's new fee structure with one relatively minor modification.

Late last month, Rep. Jay Inslee (D-WA), Sen. Ron Wyden (D-OR), and Sen. Sam Brownback (R-KS) introduced the Internet Radio Equality Act (IREA), which would overturn the CRB's new fee structure. Instead, SoundExchange would continue to receive a percentage of the webcasters' revenue, which would be set at the same 7.5 percent mark paid by satellite radio providers.

The IREA was a major factor in SoundExchange's decision to roll back the clock on the licensing fees, which the licensing group admitted in a press release announcing the extension. "Although the rates revised by the CRB are fair and based on the value of music in the marketplace, there's a sense in the music community and in Congress that small webcasters need more time to develop their businesses," said John Simson, executive director of SoundExchange. "Artists and labels are offering a below-market rate to subsidize small webcasters because Congress has made it clear that this is a policy it desires to advance, at least for the next few years."

SaveNetRadio has rejected SoundExchange's offer. "The proposal made by SoundExchange today would throw 'large webcasters' under the bus and end any 'small' webcaster’s hopes of one day becoming big," said SaveNetRadio spokesperson Jake Ward. "Under government-set revenue caps, webcasters will invest less, innovate less and promote less. Under this proposal, Internet radio would become a lousy long-term business, unable to compete effectively against big broadcast and big satellite radio—artists, webcasters, and listeners be damned."

The organization still supports the passage of IREA, believing it is the best way to ensure a level playing field for all webcasters, because larger webcasters like Live365 are still in trouble. "The new CRB regulations now require Live365 to pay up to $10 million on July 15, much of it for a $500-per-channel minimum for its roughly 10,000 channels to cover SoundExchange 'processing' fees," a company spokesperson recently explained to Ars. "[This is] despite the fact that Live365 handles all station administration, licensing, and submits reports to SoundExchange as a single service." If the IREA doesn't pass, Live365, Pandora, and other large webcasters may find themselves off the air.
http://arstechnica.com/news.ars/post...royalties.html





NAB Board Adopts Resolution On Internet Royalty Rates
FMQB

The opposition continues over the Copyright Royalty Board's (CRB) decision to raise the royalty rate for radio stations that stream on the Internet. On May 10, new legislation was introduced in the U.S. Senate that would wipe out the decision. Senators Ron Wyden (D-OR) and Sam Brownback (R-KS) introduced the Internet Radio Equality Act, which would vacate the CRB decision and set a 2006-2010 royalty rate for Webcasters at the same rate currently paid by satellite radio (7.5 percent of revenue).

Now, the NAB Radio Board has unanimously adopted a resolution regarding the topic. "The radio board of the National Association of Broadcasters recognizes that the new streaming rates established by the Copyright Royalty Board (CRB) will cause significant harm to broadcasters that stream over the Internet," the NAB said in a statement. "The radio board supports a comprehensive approach to addressing the CRB rate determination, including legislation that vacates the CRB decision and establishes an interim royalty rate structure."

Back in March, the CRB voted to drastically increase the royalties paid to musicians and record labels for streaming songs online. Under the board's new rules, the current rate of 0.08 cents ($0.0008) each time a song is played would more than double by 2010. Since then, many operators of Internet-only radio stations and non-commercial stations that stream online have expressed fears that the new costs will put their Web streams out of business.
http://fmqb.com/Article.asp?id=411158





Massive Outage Hits XM Radio
Humphrey Cheung

XM Radio is experiencing massive outages after one of its satellites was disabled. The company says satellite number 1 is down for “performance” reasons and that it is currently performing a “software update”. While the company has a spare satellite, many customers in the United States and Canada are still experiencing problems.

XM subscribers on the xmfan.com website are talking up a storm with an outage message thread that has reached 17 pages as of 2:05 PM PST.

We called up XM technical support, and it was apparent that the customer service representative was reading from a script. He told us that “engineers were aware of this problem” and that they did not have an estimated time of completion. He added that we should keep our antennas pointed toward the southern sky so we would pick up the signal when it came back.

Interestingly enough, we were able to receive XM Radio just a few minutes ago on an XM-equipped car in Culver City, California. XM Radio operates mainly via satellite, but those signals are also relayed to cars by repeater towers in some urban areas.
http://www.tgdaily.com/content/view/32127/118/





What the Copyright Office Thinks About Fair Use
Nate Anderson

Fair is fair... or is it?

The Sony Betamax Supreme Court decision was one of the most important "fair use" decisions of the last 25 years, but it's been a constant source of frustration for Marybeth Peters, the Register of Copyrights in the US since 1994. As head of the Copyright Office, Peters is in charge of the triennial DMCA anticircumvention review process. And every three years, her office sees the Sony case used as the basis for the most popular requested exemption: DVD ripping.

Each time the Copyright Office deals with the issue, consumer groups contend that fair use rights to use the material on DVDs are being violated by access controls, and they want an exemption in order to back up discs or to use video clips in noninfringing ways. After all, didn't the Sony case put an official blessing on all recording equipment that had substantial noninfringing uses? Doesn't this mean that consumers have a right to use DVD rippers and that an anticircumvention exception should therefore be made for all DVDs? The EFF certainly thought so, arguing as much at the first triennial rulemaking back in 2000.

But when I spoke with Peters about fair use, she pointed out that the Sony decision is in fact a narrow one and that fair use itself is often ambiguous unless defined by a judge. The Court's ruling in the Sony case was limited to "free, over-the-air television for time-shifting," she tells Ars. "It is not space-shifting; it's not anything beyond that. It's not off cable, it's not off video-on-demand, and yet if you talk to most consumers, they think that anything they do in the home that comes through their television set is fair use."

"That becomes a consumer expectation that you hear about that they want enabled," she continues, "and I don't disagree with that; that's what the market is demanding, and that's what the market should provide, but don't call it fair use."

"I don't want to say it's a crapshoot"

Her comment points out that fair use in the US can be a vague concept. Section 107 of the Copyright Act allows for the fair use of material "for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research" but speaks in broad terms rather than specific instances. Fair use can extend beyond these listed purposes (note the "such as" statement in the law), but to qualify as "fair," a use has to pass the famous four-part test, which considers the following factors:

• The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
• The nature of the copyrighted work;
• The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
• The effect of the use upon the potential market for or value of the copyrighted work.

What this means in practice is that people cannot know if something is fair use without testing their theory in front of a judge. This has happened on plenty of occasions—like the Sony/Universal case that opened the door to legal VHS recordings from TV broadcasts—but these rulings are generally quite narrow, applying only to the specific circumstances of the case. "Once a court has actually handed down a decision with regard to specific facts," Peters says, "if you fall within those facts, you're safe, but once you start wandering away from those facts then—I don't want to say it's a crapshoot—but it's not clear."

In the minds of many Americans, though, "fair use" means a whole host of things that are not contained in the Copyright Act or outlined in a judicial decision. As Peters puts it, "'fair use' has become a shortcut for what 'I think the balance should be as I look at the copyright law.'"

Take DVD ripping as an example. As noted above, it's an issue that Peters hears about without fail every three years as users seek a DMCA exemption to the anticircumvention protections that extend to DVDs. Why has the Copyright Office rejected the proposed exemption at each triennial rulemaking to date? In her words, it's because the widely-hacked CSS encryption on DVDs does not actually prevent fair use at all, and those who think otherwise don't understand exactly what rights fair use grants them.
http://arstechnica.com/articles/culture/fair-use.ars





Very nice work

Fair(y) Use
Jack

Using borrowed clips from several Disney films, many of whose stories were themselves snatched from the public domain, film history professor Eric Faden of Bucknell University has assembled a short advocacy piece on copyright and fair use.

Opera users who viewed the YouTube stream will find a 29 MB .TMP of the file sitting right in their cache. It’s easily converted to a 49MB .MPG with Batch FLV Converter (or just change the file ending to .FLV if you can play those).
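
For anyone who would rather not rename files by hand, here is a minimal Python sketch of the same idea: scan a cache folder for .TMP files that start with the FLV signature and copy them out with a playable extension. The cache path is only a placeholder, since Opera keeps its cache in different places on different systems.

```python
# Minimal sketch: recover cached Flash video by copying it with a .flv extension.
# CACHE_DIR is a placeholder -- point it at your browser's actual cache folder.
import shutil
from pathlib import Path

CACHE_DIR = Path.home() / "opera_cache"        # placeholder path, adjust as needed
OUT_DIR = Path.home() / "recovered_videos"

if not CACHE_DIR.is_dir():
    raise SystemExit(f"Cache folder not found: {CACHE_DIR}")
OUT_DIR.mkdir(exist_ok=True)

for tmp in CACHE_DIR.glob("*.tmp"):
    with open(tmp, "rb") as f:
        magic = f.read(3)
    if magic == b"FLV":                        # FLV container files begin with "FLV"
        dest = OUT_DIR / (tmp.stem + ".flv")
        shutil.copy2(tmp, dest)
        print(f"Recovered {dest.name} ({tmp.stat().st_size // (1024 * 1024)} MB)")
```

Converting the result to .MPG is still a job for a tool like the converter mentioned above.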

Everyone else can download the 71 MB .MP4 of the primer here.





From last month

In Media, We Distrust
Victoria Shannon

Happy World Intellectual Property Day! Yes, it's that time of year again, April 26, when we all pause to give silent thanks for the rules and regulations over the creative branches of human activity.

You mean you have yet to raise your voice today in praise of copyrights, patents and trademarks? You aren't alone. Professionals in the arts, entertainment and media businesses, whose livelihood intellectual property rights are designed in part to protect, aren't exactly celebrating, either.

They have little to be happy about, with those rules and regulations being flouted around the world - when their music, films, writings, design innovations and original software are being digitized, replicated and poured freely into the open arms of the Internet or sold cheaply on the street.

Recent studies suggest that the media and entertainment industries have only themselves to blame. Asked to rank their level of trust in a dozen industries ranging from insurance to health care, respondents around the world invariably put media and entertainment dead last, according to Edelman, the U.S. public relations and consulting company that conducted the surveys.

The technology industry, meanwhile, comes out consistently at No. 1 or No. 2.

In follow-up studies in Britain and France of consumers under 35 years old, the world's first generation of "digital natives," the company found that many say they will not buy an entertainment company's products because they don't believe they are getting good value for their money. Four out of 10 in Britain said that, while more than half did so in France.

In a world where pirated material can be had free or for next to nothing, "value for money" takes on a new dimension: What value is a record company giving me when I pay full price for a CD, for instance, compared with when it's free?

Gail Becker, head of Edelman's digital entertainment practice in Los Angeles, acknowledged that the message from young adults about value was not a particular surprise. The lesson for companies that want to change that perception is not to emphasize the "money" side but the "value" side, she said.

"They need to communicate the message that their products, bought legitimately, don't give the family computers any viruses," she said, "that the sound quality is better, that you can get extras like the music video with it."

For me, the biggest revelation out of Edelman's "trust" survey was the high ranking of the technology industry around the world, where it is almost without exception ranked as the most trusted on the list.

Becker said Edelman attributed that result to three causes: Technology is a "clean" industry that is not perceived as particularly scandalous or bad for society; it is seen as making our lives better, or at least more productive; and, for investors, tech companies are seen as sources of financial rewards.

What perplexes me is how those factors overcome the fact that technology companies also continuously promise more than they deliver; that their products often seem designed for engineers rather than end-users; and that many of them are practically entertainment companies themselves - Yahoo, Apple, Google, Microsoft, etc. - which should earn them a poor trust rating.

Despite the gloom of this World Intellectual Property Day, Becker sees some signs of encouragement for the entertainment industry. For instance, 59 percent of survey respondents in France and 69 percent in Britain said they trusted entertainment companies to make content widely and legally available online.

"The message about legal availability is coming through," she said. "When I think about how recently the complaint was that they were making it hard to find legal music for sale online, this is really progress."

The next step is making a dent in the industry's trustworthiness problem, she said.

"The industry can build trust by leading, or being seen as leading, the revolution in entertainment distribution by leading the change in business models," Becker said.
http://www.iht.com/articles/2007/04/...ss/ptend26.php





The Market Function of Piracy
Jerry Kirkpatrick

In marketing, the most effective way to introduce new products is the free sample. In 1978 Lever Brothers spent $15 million ($47.55 million in today's currency) delivering a free sample of Signal Mouthwash to two-thirds of all US households. The strategy was a success, and the product remained on the market well into the 1990s.

The significance of the free sample is product trial; it gets the product into consumers' hands. If consumers use the sample and like it, they may go on to buy the product and buy it again and again, that is, become repeat purchasers; they may even spread the good word to others. When repeat purchasing and favorable word of mouth kick in, the product's sales will experience a shift from slow to rapid growth and management will consider the product a success.

Free sampling is the best method of introducing new products, but it is also the most expensive. Not surprisingly, then, Forbes ASAP magazine[1] reports this alternative way to practice free sampling:

One security manager for a major manufacturer, who asked not to be identified, says she is sure some companies actually view being counterfeited as a boon to their efforts to build brand awareness. After all, she says, if some companies give away merchandise to expand market share, what's not to like about having someone else take on the expense of manufacturing and distributing the goods, as long as they're high-quality copies?

Imitation is a universal trait of human behavior, ranging from the use of phrases and mannerisms of admired others to the reuse of hummable themes in music, recognizable images in paintings and well-known plots in literature and Disney movies. Imitation is a normal part of the competitive process in growth markets. As sales of an innovative new product take off, competitors enter the market with their own, often cheaper, versions.

If the innovative product is patented, competitors make minor design or functional changes to secure their own patents. Knock-offs are unauthorized, usually cheaper copies. And, of course, the innovative marketer often produces its own cheap version, sometimes called a fighting brand, to fend off the competition. Over time real prices in the product category decline and quality improves.

Knock-offs are pirated products. Because they are usually cheaper than the original, knock-offs tend to appeal to a more price-conscious segment of the market; that is, the buyers of pirated products are probably not legitimate prospects for the innovative new product, either because they cannot afford, or do not want to pay, the higher price. Message to the innovative marketer? Either drop the price of the new product or produce a cheaper version — or be the first to exploit a new technology, something the movie and recording industries chose not to do.[2] Many, including these two industries, would rather sue than practice good marketing.

One study found that users of pirated software sufficiently influenced — by word-of-mouth communication — eighty percent of the software's prospects to buy the legal product, and another described several scenarios in which piracy can help increase the sales of legal products.[3] The pirated product functions as a free sample that the innovator does not have to fund.

So what about free copies? How do you compete with free, to state the battle cry of the new Luddites who fear digital technology? It's done all the time. One of the most dramatic recent instances of this was the strategy of science fiction writer Cory Doctorow who, over the course of three years, gave away 700,000 electronic copies of Down and Out in the Magic Kingdom. Sales of the hard copy went through six printings and surpassed his publisher's expectations. Many of the downloaders, Doctorow said, did not buy the hard copy and probably would not have regardless, but the giveaway created considerable buzz and a significant minority did buy the hard copy. Compare this to the experience of the Mises Institute with Omnipotent Government.

Free — no matter where it comes from — can help sell.
http://mises.org/story/2590





Fight The Justice Department's Copycrime Proposal!
Press Release

Should ordinary Americans face jail time for attempted copyright infringement? Should the sort of property forfeiture penalties applied in drug busts also threaten P2P users, mixtape makers, and mash-up artists? Of course not, but the Department of Justice (DoJ) has drafted an outrageous legislative proposal that applies these severe penalties and much more. Take action now to stop it using the form below.

Criminal copyright infringement already goes beyond situations involving large-scale commercial piracy. Thanks to laws like the No Electronic Theft (NET) Act and the Family Entertainment and Copyright Act (FECA), the federal government can now criminally charge (i.e., send to prison) people for simply uploading a single "pre-release" song (as two Ryan Adams fans discovered last year when they were brought up on federal charges for uploading tracks from pre-release promotional CDs).

Most of the DoJ's proposed changes to copyright's criminal provisions fall into two categories: (1) making it easier to convict people by eliminating the inconvenient necessity of proving that actual infringement took place, and (2) increasing the financial and confinement punishments. Law enforcement would also be allowed to use wiretaps and to spy on personal communications as part of copyright investigations. That potentially translates into wiretap authority for millions of American homes, since surveys show that 1 in 5 American Internet users downloads music and movies from P2P networks.

This guarantees one result: more costly, unnecessary, and draconian investigations and prosecutions funded by taxpayer dollars. Not only will this end up costing Americans tremendous amounts of time, money, and peace of mind, but it will also give law enforcement yet another opportunity to invade your privacy. All it takes is a single attempt to download the wrong file online.

Law enforcement already has enough tools to go after commercial pirates, and the entertainment industry has the tools to pay its own lawyers to sue infringers. Instead of wasting taxpayer dollars, Congress ought to be focusing on meaningful copyright reform that protects fans' rights to use creative material and supports new technologies.

More info:

The draft bill [PDF]

https://secure.eff.org/site/Advocacy...rAction&id=299





Breaking Network Logjams

An approach called network coding could dramatically enhance the efficiency and reliability of communications networks. At its core is the strange notion that transmitting evidence about messages can be more useful than conveying the messages themselves.
Muriel Médard, Michelle Effros and Ralf Koetter

The history of modern communications systems has been marked by flashes of startling insight.

Claude E. Shannon, mathematician and engineer, launched one such revolution almost 60 years ago by laying the foundation of a new mathematical theory of communications--now known as information theory. Practical outgrowths of his work, which dealt with the compression and reliable transmission of data, can be seen today in the Internet, in landline and wireless telephone systems, and in storage devices, from hard drives to CDs, DVDs and flash memory sticks.

Shannon tackled communications over phone lines dedicated to individual calls. These days, information increasingly travels over shared networks (such as the Internet), in which multiple users simultaneously communicate through the same medium--be it a cable, an optical fiber or, in a wireless system, air. Shared networks can potentially improve the usefulness and efficiency of communications systems, but they also create competition for communal resources. Many people must vie for access to, say, a server offering downloadable songs or to a wireless hot spot.

The challenge, then, is to find ways to make the sharing go smoothly; parents of toddlers will recognize the problem. Network operators frequently try to solve the challenge by increasing resources, but that strategy is often insufficient. Copper wires, cables or fiber optics, for instance, can now provide high bandwidth for commercial and residential users yet are expensive to lay and difficult to modify and expand. Ultrawideband and multiple-antenna transmission systems can expand the number of customers served by wireless networks but may still fail to meet ever increasing demand.

Techniques for improving efficiency are therefore needed as well. On the Internet and other shared networks, information currently gets relayed by routers--switches that operate at nodes where signaling pathways, or links, intersect. The routers shunt incoming messages to links heading toward the messages' final destinations. But if one wants efficiency, are routers the best devices for these intersections? Is switching even the right operation to perform?

Until seven years ago, few thought to ask such questions. But then Rudolf Ahlswede of the University of Bielefeld in Germany, along with Ning Cai, Shuo-Yen Robert Li and Raymond W. Yeung, all then at the Chinese University of Hong Kong, published groundbreaking work that introduced a new approach to distributing information across shared networks. In this approach, called network coding, routers are replaced by coders, which transmit evidence about messages instead of sending the messages themselves. When receivers collect the evidence, they deduce the original information from the assembled clues.

Although this method may sound counterintuitive, network coding, which is still under study, has the potential to dramatically speed up and improve the reliability of all manner of communications systems and may well spark the next revolution in the field. Investigators are, of course, also exploring additional avenues for improving efficiency; as far as we know, though, those other approaches generally extend existing methods.

Bits Are Not Cars

Ahlswede and his colleagues built their proposal in part on the idea, introduced by Shannon, that transmitting evidence about data can actually be more useful than conveying the data directly. They also realized that a receiver would be able to deduce the original data once enough clues had been gathered but that the receiver would not need to obtain all of the evidence emitted. One kind of clue could be replaced by another, and all that was important was receiving some combination of clues that, together, would reveal the original message. (Receivers would be able to make sense of the evidence if they were informed in advance about the rules applied to generate it or if instructions on how to use the evidence were included in the evidence itself.)

Network coding breaks with the classic view that communications channels are analogous to roads and that bits are like the cars that travel those roads. But an understanding of the transportation model of communications is useful for grasping how the new scheme works and why it has such promise.

Shannon proved mathematically that every channel has a capacity--an amount of information it can relay during any given time frame--and that communications can proceed reliably as long as the channel's capacity is not exceeded. In the transportation analogy, a road's capacity is the number of cars per second it can handle safely. If traffic stays below capacity, a car entering the road at one end can generally be guaranteed to exit at the other end unchanged (barring the rare accident). Engineers have built increasingly complex communications systems based on the transportation model. For example, the phone systems Shannon pondered dedicate a distinct "road" to every conversation; two calls over traditional phone lines never share a single line at the same time and frequency.

Computer networks--and the Internet in particular--are essentially a maze of merging, branching and intersecting roads. Information traveling from one computer to another typically traverses several roads en route to its destination. Bits from a single message are grouped into packets (the carpools or buses of the information superhighway), each of which is labeled with its intended destination. Routers sit at the intersections of the roads, examine each packet's header and forward that packet toward its destination.

Ironically, the very transportation model that fueled today's sophisticated communications systems now stands in the way of progress. After all, bits are not cars. When two vehicles converge on the same narrow bridge, they must take turns traversing the bottleneck. When two bits arrive at a bottleneck, however, more options are possible--which is where network coding comes in.

How It Works

A hypothetical six-node digital network can help clarify those options. Recall that in computers, all messages take the form of a string of binary code. Imagine that each link, or road, in this network can carry one bit--be it a 0 or a 1--per second and only in a single designated direction. Amy, a network user at node A, hopes to send information at one bit per second to Dana at node D. Meanwhile Ben at node B hopes to send, at exactly the same time and rate, information to Carl at node C. Can both Amy's and Ben's demands be satisfied simultaneously without exceeding any of the links' capacities?

In a router system, the outlook seems bleak. Both paths, from Amy to Dana and from Ben to Carl, require traversing link 5. This link becomes the equivalent of a narrow, one-lane bridge. The router at node E, where link 5 starts, receives a total of two bits per second (one from link 2 and one from link 3), but because link 5's capacity is one, the router can send only one bit per second along it. In the transportation model, such bottlenecks cause nightmare traffic jams, with more and more bits piling up over time, waiting their turn.

In the new approach, though, the plain router would be replaced by a coder, which would have more options than would be open to a traffic cop. Instead of relaying the actual bit streams collected at the bottleneck, the coder could send quite different information. It could, for example, add up the number of 1s that arrive during any given second and transmit a 0 if that sum is even. If the sum is odd, the device could transmit a 1. So, if link 5 simultaneously receives a 1 and a 0 from links 2 and 3, it carries a 1. If either two 0s or two 1s are received from links 2 and 3, link 5 carries a 0. The result then gets sent by router F down links 6 and 7 to Carl and Dana, respectively.

This approach replaces each pair of bits at node E with a hybrid of the two. Such a bit stream seems ridiculous. Our proposed coder has done the equivalent of combining one phone conversation with another in a way that obscures both. The apparent absurdity of the approach is precisely why it went uninvestigated for so long.

But sometimes apparent madness is true innovation. A hybrid bit stream may describe neither transmission perfectly, yet it can supply evidence about both. Suppose we additionally send Amy's missive to Carl along link 1 and Ben's to Dana along link 4. Sending these two messages uses network resources (links 1 and 4) that the routing system could not usefully employ for meeting Amy's and Ben's demands. Carl's node receives Amy's transmission and knows for each instant (from link 6) whether the number of 1s in the pair of messages issued by Amy and Ben is even or odd. If Carl's node is programmed to also "know" the rule used by the coders at the start of link 5 or if it can infer the rule from the evidence itself, the collected evidence will enable it to decipher the message sent by Ben. And Dana's node will similarly uncover Amy's message.
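
The parity rule described above is just the exclusive-or (XOR) of the two incoming bits, and each receiver undoes it by XORing the coded bit with the stream it received directly. Here is a small Python sketch of that logic for the six-node example; the bit streams are arbitrary, and the node and link names follow the description above.

```python
import random

# Amy (node A) sends to Dana (node D); Ben (node B) sends to Carl (node C).
amy_bits = [random.randint(0, 1) for _ in range(8)]
ben_bits = [random.randint(0, 1) for _ in range(8)]

# Link 1 carries Amy's bits straight to Carl; link 4 carries Ben's bits to Dana.
# The coder at node E replaces the two bits contending for link 5 with their
# parity: 0 if the number of 1s is even, 1 if odd -- in other words, XOR.
link5 = [a ^ b for a, b in zip(amy_bits, ben_bits)]

# Node F copies the coded stream onto links 6 and 7.
# Carl recovers Ben's stream: (Amy XOR Ben) XOR Amy = Ben. Dana mirrors this.
carl_gets_ben = [c ^ a for c, a in zip(link5, amy_bits)]
dana_gets_amy = [c ^ b for c, b in zip(link5, ben_bits)]

assert carl_gets_ben == ben_bits
assert dana_gets_amy == amy_bits
print("Both receivers recovered their intended streams at one bit per second.")
```

The eight time steps that would take sixteen seconds with routing alone (because the two flows must take turns on link 5) go through in eight, which is the doubling of capacity discussed below.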

Clear Benefits

This strategy accomplishes two goals that were unthinkable given the limitations of the transportation model. First, it enables the bit leaving a node to travel two paths simultaneously, something a car cannot do. Second, it allows a pair of bit streams arriving at the head of a bottleneck to combine into a single stream, whereas two cars converging on one narrow bridge cannot become a single entity; one would have to wait for the other to pass before it could proceed across the bridge.

The data-handling approach exemplified by our six-node model (a minor variation on one first given by Ahlswede and his colleagues in 2000) can potentially increase the capacity of a network without requiring the addition of extra conduits because it avoids logjams. Using routing alone, our six-node network could sustain simultaneous transmissions averaging one half of a bit per second. (Because the two competing transmissions would have to share link 5, the effective data rate would be one bit per two seconds, or one half of a bit per second, for each of the competing demands.) With network coding, the same system supports simultaneous transmissions at one bit per second. So, here, network coding doubles capacity.

Sometimes network coding could yield even bigger capacity gains, sometimes none. But the approach would never decrease the capacity of a network because, at worst, it would precisely mimic the actions of router systems. It should also increase reliability and resistance to attacks in relatively substantial networks, because the interchangeable nature of evidence means that some packets of evidence can be lost without creating problems.

Lessons from Multicast Networks

So far much of the research into implementing network coding has focused on multicast networks--in which all receivers need to get the same information. Internet video games rely on multicast systems to update every player each time one makes a move. Webcasts of videos or live sporting events and new software released electronically to a large group of customers also travel over multicast networks. Today such networks still use routers, and a return to the transportation analogy helps to explain why designing them is usually quite difficult.

Imagine the country's highways teeming with cars. Each router is like a police officer directing traffic at a single intersection. Incoming cars join the queue behind vehicles that arrived before them. The officer reads each car's destination in turn and directs it on its way. The goal in system design is for each router to direct traffic in a way that not only speeds each subsequent car to its intended destination but also allows the nation's transportation system as a whole to satisfy as many drivers as possible.

Even a central designer with a complete map of all the nation's roads in hand would be hard put to determine the best possible strategy for every router to follow. The difficulty increases as the network changes over time: rush hours, road repairs, accidents and sporting events mean the roadways and the demands placed on them change constantly.

Intuition might suggest that designing a system reliant on network coding should be even harder, because there are more options to consider. A node could forward data unchanged, thereby mimicking a router. But it might also mix two or more incoming data streams before sending them on, and how it mixes them might also be open to consideration; further, different nodes might use different algorithms.

Luckily, this logic is flawed. Sometimes adding more options actually simplifies things. Without coding, architects of a multicast system would need to enumerate as many paths as possible from the transmitter to each receiver and then determine how many of those paths the network could support simultaneously. Even for simple networks, finding and testing all combinations of paths would be a dizzying task.

In contrast, a multicast system using network coding would be rather easy to design. The startling truth is that addition and multiplication are the only mathematical functions that coded networks need apply. Also, even if the function, or rule, programmed into each coder in a network is chosen independently of the message and the other coding functions and without any knowledge of the network layout, the system as a whole will, with extremely high probability, operate at peak performance. Even if the system changes over time, as can happen in mobile or reconfigurable networks, the network will continue to perform optimally without requiring redesign.
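
That claim is usually illustrated with what researchers call random linear network coding: each coder forwards random linear combinations of the packets it receives, and any receiver that gathers enough independent combinations solves a small system of equations to recover the originals. The toy Python sketch below works over the two-element field, so "addition" is XOR and coefficients are 0 or 1; real systems typically use larger finite fields, and the packet count and sizes here are arbitrary.

```python
import random

# Toy random linear network coding over GF(2): coded packets are random XOR
# combinations of the source packets, and the receiver recovers the originals
# by Gaussian elimination once it has enough independent combinations.

K = 4                                   # number of source packets
PKT_LEN = 8                             # bits per packet
src = [[random.randint(0, 1) for _ in range(PKT_LEN)] for _ in range(K)]

def random_combination():
    """One coded packet: a random 0/1 coefficient vector plus the XOR mix."""
    coeffs = [random.randint(0, 1) for _ in range(K)]
    payload = [0] * PKT_LEN
    for c, pkt in zip(coeffs, src):
        if c:
            payload = [p ^ b for p, b in zip(payload, pkt)]
    return coeffs, payload

def decode(received):
    """Gaussian elimination over GF(2) on [coefficients | payload] rows."""
    rows = [coeffs[:] + payload[:] for coeffs, payload in received]
    pivot_row = 0
    for col in range(K):
        pivot = next((r for r in range(pivot_row, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None                 # not enough independent combinations yet
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return [row[K:] for row in rows[:K]]

received, decoded = [], None
while decoded is None:                  # keep collecting coded packets until solvable
    received.append(random_combination())
    decoded = decode(received)

assert decoded == src
print(f"Recovered {K} packets from {len(received)} random combinations.")
```

Because the coefficients are chosen at random and independently at each node, no coordination or knowledge of the network layout is needed, which is exactly the property described in the paragraph above.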

Tomorrow's Networks

The operation of networks, then, will be very different if coders replace routers. The way our messages traverse networks will change: they will not only share "the road" with other transmissions but may become intimately entangled with traffic from a variety of other sources. Some might fear that such entanglement would compromise the security of the messages. More likely, though, traffic traversing networks would become a locally undecipherable algebraic stream. Users on the network would unwittingly collaborate to one another's mutual advantage, allowing not just higher rates or faster downloads of data but also, in the case of wireless networks, an improvement in energy efficiency. (Because each wireless transmission consumes energy, a node can reduce consumption by mixing together the information intended for several neighbors and sending only a single transmission.)

Moreover, delays in downloading videos and lost cell phone calls will be far less common. On the Internet, routers fail or are taken down for maintenance and data packets are dropped all the time. That is why people must sometimes rerequest Web pages and why a site sometimes comes up slowly. Reliability will increase with network coding, because it does not require every single piece of evidence to get through.

And network managers will provide such benefits without having to add new communications channels, because better use will be made of existing channels. Network coding will thereby complement other communications technologies, allowing users to get as much as possible out of them.

Sometimes users will know that network coding is operating, because it may modify how some common applications, such as peer-to-peer downloads, function. Today someone seeking to download a file searches for a collaborating user on whose machine the file resides. In a system using network coding, the file would no longer be stored as a whole or in recognizable pieces.

But users would not personally have to figure out how to find the evidence needed to obtain the desired files. A request sent into a network from a user's computer or phone would cause either that individual's computer or a local server to scavenge through the network for pieces of evidence related to a file of interest. The gathered evidence, consisting of algebraically mixed pieces of information relating to the desired file, would help recover that file. Instead of putting together a puzzle whose pieces are recognizable fragments of a whole, the server or an individual's computer would solve a collection of algebraic equations. And, all the while, most people would remain blissfully unaware of these operations--just as most of us are ignorant of the complicated error-correction operations in our cell phones.

The military has recognized the robustness of network coding and is now funding research into its use in mobile ad hoc networks, which can form on the fly. Such networks are valuable in highly changeable environments, such as on the battlefield, where reliable communications are essential and establishing and maintaining an infrastructure of fiber-optic cables or cell towers is difficult. In an ad hoc network, every soldier's radio becomes a node in a communications system, and each node seeks out and establishes connections to neighboring nodes; together these connections establish a network's links. Every node can both send and receive messages and serve as an intermediary to pass along messages intended for other receivers. This technique extends communications capabilities far beyond the transmission range of a single node. It also allows enormous flexibility, because the network travels with the users, constantly reconfiguring and reestablishing connections as needed.

By changing how networks function, network coding may influence society in ways we cannot yet imagine. In the meantime, though, those of us who are studying it are considering the obstacles to implementation. Transitioning from our router-based system to a network-coded one will actually be one of the more minor hurdles. That conversion can be handled by a gradual change rather than a sudden overhaul; some routers could just be reprogrammed, and others not built to perform coding operations would be replaced little by little.

A bigger challenge will be coping with issues beyond replacing routers with coders. For instance, mixing information is a good strategy when the receiving node will gather enough evidence to recover what it desires from the mixture. This condition is always met in multicast networks but may not be the case in general. Moreover, in some circumstances, such as when multiple multicasts are transmitted, mixing information can make it difficult or impossible for users to extract the proper output. How, then, can nodes decide which information can and cannot be mixed when multiple connections share the same network? In what ways must network coding in wireless networks differ from its use in wired ones? What are the security advantages and implications of network coding? How will people be charged for communications services when one person's data are necessarily mixed with those of other users? In collaborations that span the globe, we and others are pondering how to unravel such knots even as we strive to enhance the capabilities of the communications networks that have become such an integral part of so many lives.
http://www.sciam.com/article.cfm?cha...738629167B4856





The Man Who Owns the Internet

Kevin Ham is the most powerful dotcom mogul you've never heard of, reports Business 2.0 Magazine. Here's how the master of Web domains built a $300 million empire.
Paul Sloan

Kevin Ham leans forward, sits up tall, closes his eyes, and begins to type -- into the air. He's seated along the rear wall of a packed ballroom in Las Vegas's Venetian Hotel. Up front, an auctioneer is running through a list of Internet domain names, building excitement the same way he might if vintage cars were on the block.

As names come up that interest Ham, he occasionally air-types. It's the ultimate gut check. Is the name one that people might enter directly into their Web browser, bypassing the search engine box entirely, as Ham wants? Is it better in plural or singular form? If it's a typo, is it a mistake a lot of people would make? Or does the name, like a stunning beachfront property, just feel like a winner?

When Ham wants a domain, he leans over and quietly instructs an associate to bid on his behalf. He likes wedding names, so his guy lifts the white paddle and snags Weddingcatering.com for $10,000. Greeting.com is not nearly as good as the plural Greetings.com, but Ham grabs it anyway, for $350,000.

Ham is a devout Christian, and he spends $31,000 to add Christianrock.com to his collection, which already includes God.com and Satan.com. When it's all over, Ham strolls to the table near the exit and writes a check for $650,000. It's a cheap afternoon.

Just a few years ago, most of the guys bidding in this room had never laid eyes on one another. Indeed, they rarely left their home computers. Now they find themselves in a Vegas ballroom surrounded by deep-pocketed bankers, venture-backed startups, and other investors trying to get a piece of the action.

And why not? In the past three years alone, the number of dotcom names has soared more than 130 percent to 66 million. Every two seconds, another joins the list.

But the big money is in the aftermarket, where the most valuable names -- those that draw thousands of pageviews and throw off steady cash from Google's and Yahoo's pay-per-click ads -- are driving prices to dizzying heights. People who had the guts and foresight to sweep up names shed during the dotcom bust are now landlords of some of the most valuable real estate on the Web.

The man at the top of this little-known hierarchy is Kevin Ham -- one of a handful of major-league "domainers" in the world and arguably the shrewdest and most ambitious of the lot. Even in a field filled with unusual career paths, Ham's stands out.

Trained as a family doctor, he put off medicine after discovering the riches of the Web. Since 2000 he has quietly cobbled together a portfolio of some 300,000 domains that, combined with several other ventures, generate an estimated $70 million a year in revenue. (Like all his financial details, Ham would neither confirm nor deny this figure.)

Working mostly as a solo operator, Ham has looked for every opening and exploited every angle -- even inventing a few of his own -- to expand his enterprise. Early on, he wrote software to snag expiring names on the cheap. He was one of the first to take advantage of a loophole that allows people to register a name and return it without cost after a free trial, on occasion grabbing hundreds of thousands of names in one swoop.

And what few people know is that he's also the man behind the domain world's latest scheme: profiting from traffic generated by the millions of people who mistakenly type ".cm" instead of ".com" at the end of a domain name.

Try it with almost any name you can think of -- Beer.cm, Newyorktimes.cm, even Anyname.cm -- and you'll land on a page called Agoga.com, a site filled with ads served up by Yahoo.

Ham makes money every time someone clicks on an ad -- as does his partner in this venture, the West African country of Cameroon. Why Cameroon? It has the unforeseen good fortune of owning .cm as its country code -- just as Germany runs all names that end with .de.

The difference is that hardly any .cm names are registered, and the letters are just one keyboard slip away from .com, the mother lode of all domains. Ham landed connections to the Cameroon government and flew in his people to reroute the traffic. And if he gets his way, Colombia (.co), Oman (.om), Niger (.ne), and Ethiopia (.et) will be his as well.

"It's in the works," Ham says over lunch in his hometown of Vancouver, British Columbia. "That's why I can't talk about it." He's nearly as reluctant to share details about his newest company, called Reinvent Technology, into which he's investing tens of millions of dollars to build a powerhouse of Internet businesses around his most valuable properties.

Given Ham's reach on the Web -- his sites receive 30 million unique visitors a month -- it's remarkable that so few people know about him. Even in the clubby world of domainers, he's a mystery man. Until now Ham has never talked publicly about his business. You won't find his name on any domain registration, nor will you see it on the patent application for the Cameroon trick.

There are practical reasons for the low profile: For one, Ham's success has drawn enemies, many of them rivals. He once used a Vancouver post office box for domain-related mail -- until the day he opened a package that contained a note reading "You are a piece of s**t," accompanied by an actual piece of it.

Bitter domainers are one thing, lawyers another. And at the moment, Ham's biggest concern is that corporate counsels will come after him claiming that the Cameroon typo scheme is an abuse of their trademarks. He may be right, since this is the first time he's been identified as the orchestrator.

When asked about the .cm play, John Berryhill, a top domain attorney who doesn't work for Ham, practically screams into the phone, "You know who did that? Do you have any idea how many people want to know who's behind that?"

Spreading the word

Kevin Ham is a boyish-looking 37-year-old, trim from a passion for judo and a commitment to clean living. His drink of choice: grapefruit juice, no ice. His mild demeanor belies the aggressive, work-around-the-clock type that he is. Ham frequently steers conversations about business back to the Bible. Not in a preachy way; it's just who he is.

The son of Korean-born immigrants, Ham grew up on the east side of Vancouver with his three brothers. His father ran dry-cleaning stores; his mother worked graveyard shifts as a nurse. A debilitating illness at the age of 14 led Ham to dream of becoming a doctor. He cruised through high school and then undergraduate work and medical school at the University of British Columbia.

Christianity had long been a mainstay with his family, but as an undergrad, he made the Bible a focal point of his life; he joined the Evangelical Layman's Church and attended regular Bible meetings. Ham recalls that it was about this time -- 1992 or 1993 -- that he was introduced to the Web. A church friend told him about a powerful new medium that could be used to spread the gospel.

"Those words really struck me," Ham says. "It's the reason I'm still working."

After he graduated from med school in 1998, Ham and his new bride took off for London, Ontario, for a two-year residency. By the second year, Ham had become chief resident, and when he wasn't rushing to the emergency room, he indulged his growing fascination with the Net, teaching himself to create websites and to code in Perl.

Information about Web hosting at the time was so scattered that Ham began creating an online directory of providers, complete with reviews and ratings of their services. He called it Hostglobal.com.

From there it was a short step to the business of buying and selling domains. About six months after he launched Hostglobal, Ham was earning around $10,000 per month in ad sales. But when one of his advertisers -- a service that sold domain registrations -- told him that a single ad was generating business worth $1,500 a month, Ham figured he could get in on that too.

From doctor to domainer

It made sense: People shopping for hosting services were often interested in buying a catchy URL, so Ham launched a second directory, called DNSindex.com. Like similar services operating at the time, it gave customers a way to register domain names.

But Ham added the one feature that early domain hunters wanted most: weekly lists of available names, compiled using free sources he found on the Web. Some lists he gave away; others he charged as much as $50 for. In a couple of months, he had more than 5,000 customers.

By the time he finished his residency in June 2000, his two small Web ventures were pulling in more money in a month -- sometimes $40,000 -- than Ham made that year at the hospital. That was enough, he reasoned, to put off starting a medical practice for three more months, maybe six. "It just didn't make sense not to do it," he says.

With a new baby in tow, Ham and his wife moved back to Vancouver, settling into a one-bedroom apartment. Ham's timing, it turned out, was spot-on. Tech stocks were tumbling, dotcoms were folding left and right, and investors were fleeing the Web. More important to him, hundreds of thousands of valuable domain names that were suddenly considered worthless began to expire, or "drop." Ham and a handful of other trailblazers were ready to snap them up.

Figuring out when names would drop was tedious work.

At the time, Network Solutions controlled the best names; it was for a long time the only retail company, or registrar, selling .coms. It didn't say when expiring names would go back on the market, but twice a day it published the master list of all registered names -- the so-called "root zone" file (now managed by VeriSign). It was a fat list of well over 5 million names that took hours to download and often crashed the under-powered PCs of the day.

So Ham wrote software scripts that compared one day's list with the next. Then he tracked names that vanished from the root file. Those names would be listed briefly as on hold, and Ham figured out that they would almost always drop five or six days later -- at about 3:30 a.m. on the West Coast. In the dark of night, Ham launched his attacks, firing up five PCs and multiple browsers in each. Typing furiously, he would enter his buy requests and bounce from one keyboard to the next until he snagged the names he wanted.
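
The core of that nightly routine is a set difference between two snapshots of the zone file. Here is a minimal sketch of the idea in Python; the file names and the five-day hold window are assumptions taken from the description above, not Ham's actual tooling.

from datetime import date, timedelta

def load_names(path):
    """Read one domain name per line into a set (case-insensitive)."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def find_dropping_names(yesterday_file, today_file, hold_days=5):
    """Names present yesterday but missing today have gone 'on hold';
    estimate the date they will actually drop."""
    on_hold = load_names(yesterday_file) - load_names(today_file)
    drop_date = date.today() + timedelta(days=hold_days)
    return sorted(on_hold), drop_date

if __name__ == "__main__":
    names, when = find_dropping_names("zone_yesterday.txt", "zone_today.txt")
    print(f"{len(names)} names on hold, expected to drop around {when}")
    for name in names[:20]:
        print(name)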

He missed a lot of them, of course.

Ham had no clue that there were rivals out there who were way ahead of him, deploying software that purchased names at a rate that Ham's fingers couldn't match. Through registration data, he eventually traced many of those purchases to one owner: "NoName." Behind the shadowy moniker was another reclusive domain pioneer, a Chinese-born programmer named Yun Ye, who, according to people who know him, operated out of his house in Fremont, Calif.

By day Ye worked as a software developer. At night he unleashed the programs that automated domain purchases. (Ye achieved deity status among domainers in 2004 when he sold a portfolio of 100,000 names to Marchex, a Seattle-based, publicly traded search marketing firm, for $164 million. He then moved to Vancouver.)

Ham went back to the keyboard, writing scripts so that he, too, could pound at the registrars. Ham's track record began to improve, but he still wasn't satisfied. "Yun was just too good," he says.

Then Ham did something brash: He bought his way to the front of the line. Since registrars had direct connections to Network Solutions's servers, Ham's play was to cut out the middleman. He struck deals with several discount registrars, even helping them write software to ensure that they captured the names Ham wanted to buy during the drops. In exchange for the exclusivity, Ham offered to pay as much as $100 for some names that might normally go for as little as $8.

Within weeks Ham had struck so many deals that, according to rivals, he controlled most of the direct connections. "I kept telling them to hit them harder," Ham says in a rare boastful moment. "We brought down the servers many times." During one six-month period starting in late 2000, Ham registered more than 10,000 names.

Rival domainers, locked out of much of the action, didn't appreciate Ham's tactics. It was one of them, most likely, who sent him the turd. "Kevin came in and closed the door for everyone else," says Frank Schilling, a domainer who figured out what Ham had done and sealed similar deals. "There was a ton of professional jealousy."

Ham, in fact, owes a lot to Schilling. Both men lived in Vancouver at the time, and after Ham sought out Schilling in November 2000, the two met at a restaurant to compare notes.

"How much traffic do you have?" Schilling asked. An embarrassed Ham replied that he had no idea. Schilling mentioned that he was experimenting with a new service, GoTo.com, that would populate his domains with ads. Ham spent the next week figuring out how much traffic his sites were generating, and he was amazed by the initial tally: 8,000 unique visitors per day from the 375 names he owned at the time.

"From then on," Ham says, "I knew that what I was building would be very, very valuable." He soon signed up with GoTo (which was later purchased by Yahoo). On his first day, Ham made $1,500.

The system worked then as it does now: People don't always use Google or Yahoo to find something on the Web; they'll often type what they're looking for into a browser's address bar and add ".com."

It's a practice known as "direct navigation," or type-in traffic, and millions do it. Need wedding shoes? Type in "weddingshoes.com" -- a site that Ham happens to own -- and you'll land on what looks like a shoe-shopping portal, filled with links from dozens of retailers.

Click on any one of those links, and the advertiser that placed it pays Yahoo, which in turn pays a cut to Ham. That single site, Ham says, brings in $9,100 a year. Small change, maybe, but the name cost him $8, and his annual overhead for it is about $7. Multiply that model several thousand times over, and you get a quick idea of the kind of cash machine that Ham was creating from his living room.

By early 2002, roughly $1 million a year was pouring into Ham's operation, which he ran with the help of his high school friend and current partner, Colin Yu. But again he felt the tug of his conscience. He occasionally left Vancouver to do medical missionary stints, helping patients in Mexico, the Philippines, and China. He found the experience rewarding, but the development boom he saw taking off in China just reminded him of the virtual real estate boom he was leading back home.

Soon Ham was back working full-time on the Web. "There was just too much more to do," he says.

A little taste

There was no looking back. The next few years were among Ham's most aggressive. One of his most valuable tricks was one he had experimented with in the early days, a practice called domain "tasting." Tasting takes advantage of a provision that allows domain-name buyers a free five-day trial period. Intended to protect customers who mistakenly purchase the wrong name, it handed aggressive domainers another means with which to expand -- and exploit -- their portfolios.

Ham cobbled together new lists of domain words in every combination, registering hundreds of thousands of new names for free, monitoring the traffic, and then returning the duds. By 2004, Ham had amassed such a deep portfolio that he pulled his names from third-party registrars, launched his own registrar, and then created another company, appropriately named Hitfarm, that could do a better job than Yahoo of matching ads with domain names -- for himself and 100 or so other domainers.
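
The economics of tasting come down to a simple keep-or-return decision at the end of the free trial. The sketch below illustrates that triage; the domains, traffic numbers, and the 50-visit threshold are all invented for illustration and are not figures from Ham's operation.

TRIAL_DAYS = 5           # length of the free registration trial
KEEP_THRESHOLD = 50      # total type-in visits needed to keep a name (assumed)

def triage_tasted_names(traffic_log):
    """traffic_log maps domain -> list of daily visit counts during the trial."""
    keep, refund = [], []
    for domain, daily_visits in traffic_log.items():
        total = sum(daily_visits[:TRIAL_DAYS])
        (keep if total >= KEEP_THRESHOLD else refund).append(domain)
    return keep, refund

sample = {
    "weddingcaterers.com": [30, 22, 18, 25, 27],  # steady type-in traffic: a keeper
    "qxzv-widgets.com":    [0, 1, 0, 0, 0],       # offbeat string: return it for free
}
keep, refund = triage_tasted_names(sample)
print("keep:", keep)
print("return within the trial:", refund)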

Like any shopping spree, though, Ham's tasting binge didn't last. It brought in so many names -- offbeat strings of letters, names with too many dashes, and other variations that humans would be hard-pressed to think of -- that Ham saw the quality of his portfolio dropping in proportion to its growing size. For every few thousand names he'd register, he'd toss back all but a hundred or so.

Tasting exacerbated another problem too: Ham's software grabbed all kinds of typographical variations of trademarked names. Called typo-squatting, it's a practice now coming under the same intense scrutiny long faced by cybersquatters. Microsoft and Neiman Marcus are just two companies whose lawyers have brought anti-cybersquatting lawsuits, charging domainers with intentionally profiting from variations of their trademarks.

"Tasting changed everything," says Ham, who has since abandoned the practice, though he concedes that Hitfarm still holds some problematic names. "I said, forget it," he says. "Generic names are already too hard to come by. And the legal risks are too great."

The legal risks should diminish, however, if you don't own the domain names at all -- and that's the secret behind the Cameroon play.

New world order

The domain confab in Vegas is like any other trade conference: The real intrigue happens at cocktail hour. One subject in the air is Cameroon. Late last summer, domainers began noticing that something odd happens to .cm traffic: It all winds up at a site called Agoga.com. Domainers know, of course, that .cm belongs to Cameroon. And they know that whoever controls Agoga.com has created a potential gold mine.

What they don't know is who's behind it all.

At one of the meet-and-greets, Ham is standing drinkless, as usual, sporting a polo shirt, chatting with a few people he knows and some he's just met. In this crowd, it seems, everyone wants to know Ham. Finally, he is alone.

"I hear you're the guy behind .cm?"

Ham looks surprised by the reporter's question, then flashes a big smile and says, "I had help."

Over a series of conversations a few weeks later in Vancouver, Ham shares some details about a deal that, despite his innate reticence, he's clearly proud of. About a year ago, he says, he worked his contacts to gain connections to government officials in Cameroon. Then he flew several confidants to Yaoundé, the capital, to make their pitch. His key programmer went along to handle the technical details.

"Hey," Ham says, flagging his techie down near the office elevator. "Didn't you meet with the president of Cameroon?"

"Nah," the programmer says. "We met with the prime minister. But we did see the president's compound."

It's an odd scene to picture: a domainer's reps in a sit-down with Ephraim Inoni, the prime minister of Cameroon, to discuss the power of type-in typo traffic and pay-per-click ads. And yet, as with most of the angles Ham has played, the Cameroon scheme is ingeniously straightforward.

Ham's people installed a line of software, called a "wildcard," that reroutes traffic addressed to any .cm domain name that isn't registered. In the case of Cameroon, a country of 18 million with just 167,000 computers connected to the Internet, that means hundreds of millions of names. Type in "paper.cm" and servers owned by Camtel, the state-owned company that runs Cameroon's domain registry, redirect the query to Ham's Agoga.com servers in Vancouver.

The servers fill the page with ads for paper and office-supply merchants. (Officials at Yahoo confirm that the company serves ads for Ham's .cm play.) It all happens in a flash, and since Ham doesn't own or register the names, he's not technically typo-squatting, according to several lawyers who handle Internet issues.
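
In DNS terms, the "wildcard" is simply a catch-all record that answers for every name not explicitly registered under .cm. The Python sketch below simulates that lookup logic; the registry entries and IP addresses are invented for illustration and are not Camtel's actual configuration.

REGISTERED_CM = {
    "camtel.cm": "203.0.113.10",   # a registered name resolves to its own server
    "gov.cm": "203.0.113.20",
}
WILDCARD_TARGET = "198.51.100.5"   # stand-in for the parking servers behind Agoga.com

def resolve_cm(hostname: str) -> str:
    """Return the registered address if one exists, otherwise the wildcard target."""
    return REGISTERED_CM.get(hostname.lower(), WILDCARD_TARGET)

for query in ("camtel.cm", "paper.cm", "newyorktimes.cm"):
    print(f"{query:20} -> {resolve_cm(query)}")
# paper.cm and newyorktimes.cm both land on the parking servers,
# which then fill the page with pay-per-click ads.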

The method is spelled out in a patent application filed by a Vancouver businessman named Robert Seeman, who Ham says is his partner in the venture and who also serves as chief adviser at Reinvent Technology. (Seeman declined to be interviewed for this story.)

Ham won't reveal specifics but says Agoga receives "in the ballpark" of 8 million unique visitors per month. Fellow domainers, naturally, are envious.

"As soon as it started happening, there was a huge sense of 'Why didn't I think of that?'" says attorney Berryhill, who represents Schilling and other domainers.

Still, several companies have already tracked down Ham's attorneys, claiming trademark infringement. Ham argues that his system is legally in the clear because it treats every .cm typo equally and doesn't filter out trademarked names.

Berryhill concurs. "You can't really say that [wildcarding] is targeting trademarks," he says. "It captures all the traffic, not just trademark traffic." Moreover, the anti-cybersquatting statute applies only to people who register a trademarked domain; using a wildcard doesn't require registering names.

Clever though it may be, .cm is "a very small part of our operations," Ham says. He won't disclose how much he pays to the government of Cameroon, whose officials could not be reached for comment.

The partnership has been a rocky one so far, and the system has sporadically shut down. But .cm is only one of several country domains where the typo play can work. According to Ham, he and his team are working with other governments. The dream typo play -- .co -- belongs to Colombia, to which Ham says Seeman paid several visits long before they began working on Cameroon. (Citing safety concerns, Ham hasn't yet made the trip. "I would only go if the president requests to meet me," he says.)

As for other countries he might soon invade, Oman (.om) is an obvious target. Niger and Ethiopia are out there too, but since they would play off less lucrative .net typos, they might not be worth the trouble.

As for Colombia, Ham says, "we're making progress."

The long view

Ham leans over his office PC to check on a domain auction. Steven Sacks, a domainer based in Indianapolis who works for Ham, is telling him about some names up for sale. Ham shoots back an instant message: "I like doctordegree.com ... and rockquarry.com ... sunblinds.com."

The days of figuring out the drop are long over. Everything's open now. Lists are easy to obtain. You can preorder a name before it drops and hope to get it. Or, like Ham, you can shell out five or six figures in online auctions. The only great deals, at least for .com names, tend to happen privately, when a domainer manages to find an eager or naive seller.

Ham still buys 30 to 100 names a day, but he's no longer getting them on the cheap. In fact, he and Schilling, who today maintains a $20 million-a-year portfolio from his home in the Cayman Islands, are often accused of driving up prices.

Take, for example, the $26,250 Ham paid for Fruitgiftbaskets.com, or the $171,250 for Hoteldeals.com. "The amount he will pay is crazy," says Bob Martin, president of Internet REIT, a domain investment firm that has raised more than $125 million from private investors, including Maveron, the venture firm backed by Starbucks founder Howard Schultz.

Nonsense, Ham says. The names are expensive only if you value them the way people like Martin do. The VCs and bankers, who were late to the domain gold rush, assess names by calculating the pay-per-click ad revenue and attaching a multiple based on how long it would take to pay off the investment.
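
That valuation method is easy to make concrete. Below is a small Python sketch of the payback arithmetic; the purchase prices are the ones reported here, while the annual pay-per-click revenue figures for the two auction buys are hypothetical, chosen only to show how far the payback period can stretch.

def payback_years(purchase_price, annual_ppc_revenue):
    """Years of pay-per-click revenue needed to earn back the purchase price."""
    return purchase_price / annual_ppc_revenue

deals = {
    "weddingshoes.com":     (8,       9_100),   # price and revenue quoted in the article
    "hoteldeals.com":       (171_250, 12_000),  # price from the article, revenue assumed
    "fruitgiftbaskets.com": (26_250,  1_500),   # price from the article, revenue assumed
}

for name, (price, revenue) in deals.items():
    print(f"{name:22} pays for itself in {payback_years(price, revenue):6.1f} years")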

Viewed that way, Ham's personal portfolio alone is worth roughly $300 million. But some of Ham's recent domain purchases would also look silly: They'd take 15 or 20 years just to justify the price, and that assumes continuation of the pay-per-click model.

But Ham is taking a longer view. The Web, he says, is becoming cluttered with parked pages. The model is amazingly efficient -- lots of money for little work -- but Ham argues that Internet users will soon grow weary of it all.

He also expects Google, Microsoft, and Yahoo to find ways to effectively combat typo-squatting. Some browsers can already fix typos; Internet Explorer catches unregistered domains and redirects visitors to a Microsoft page -- in effect controlling traffic the same way that Ham is doing with .cm. "The heat is rising," Ham says.

When Ham buys a domain now, he's not doing pay-per-click math but rather sizing it up as a potential business. Reinvent Technology aims to turn his most valuable names into mini media companies, based on hundreds of niche categories.

Among the first he'd like to launch, not surprisingly, is Religion.com. Ham recently leased the entire 27th floor in his Vancouver building and is now hiring more than 150 designers, engineers, salespeople, and editorial folks.

Much of that effort is going into developing search tools based more on meaning and less on keywords. "Google is only so useful," Ham says.

The aim is to apply a meaning-based, or "semantic," system across swaths of sites, luring customers from direct navigation and search engines alike. Religion.com would then become an anchor to which scores of other sites would be tied.

"It's time to build out the virtual real estate," Ham says. "There's so much more value in these names than pay-per-click." Seeman's patent application even mentions the possibility of turning Web traffic from Cameroon and other future foreign partners into full-fledged portals.

It's all part of the master plan, as Ham aims to become the first domainer to move from the ranks of at-home name hunter to Internet titan. Smaller players have been selling out to VC-backed groups, and Ham expects that the best names will eventually be owned by just a handful of companies.

If he bets right, he might very well be one of them. "If you control all the domains," he says, "then you control the Internet."
http://money.cnn.com/magazines/busin...0989/index.htm





Net Taxes Could Arrive by This Fall
Declan McCullagh

The era of tax-free e-mail, Internet shopping and broadband connections could end this fall, if recent proposals in the U.S. Congress prove successful.

State and local governments this week resumed a push to lobby Congress for far-reaching changes on two different fronts: gaining the ability to impose sales taxes on Net shopping, and being able to levy new monthly taxes on DSL and other connections. One senator is even predicting taxes on e-mail.

At the moment, states and municipalities are frequently barred by federal law from collecting both access and sales taxes. But they're hoping that their new lobbying effort, coordinated by groups including the National Governors Association, will pay off by permitting them to collect billions of dollars in new revenue by next year.

If that doesn't happen, other taxes may zoom upward instead, warned Sen. Michael Enzi, a Wyoming Republican, at a Senate hearing on Wednesday. "Are we implicitly blessing a situation where states are forced to raise other taxes, such as income or property taxes, to offset the growing loss of sales tax revenue?" Enzi said. "I want to avoid that."

A flurry of proposals advanced by pro-tax advocates this week pushes in that direction. On Tuesday, Enzi introduced a bill that would usher in mandatory sales tax collection for Internet purchases. During a House of Representatives hearing the same day, politicians weighed whether to let a temporary ban on Net access taxes lapse when it expires on November 1. And a House backer of another pro-sales-tax bill said this week to expect a final version by July.

"The independent and sovereign authority of states to develop their own revenue systems is a basic tenet of self government and our federal system," said David Quam, director of federal relations at the National Governors Association, during a Senate Commerce committee hearing on Wednesday.

Internet sales taxes

At the moment, for instance, Seattle-based Amazon.com is not required to collect sales taxes on shipments to millions of its customers in states like California, where Amazon has no offices. (Californians are supposed to voluntarily pay the tax owed when filing annual state tax returns, but few do.)

Ideas to alter this situation hardly represent a new debate: officials from the governors' association have been pressing Congress to enact such a law for at least six years. They invoke arguments--unsuccessful so far--like saying that reduced sales tax revenue threatens budgets for schools and police.

But with Democrats now in control of both chambers of Congress, the political dynamic appears to have shifted in favor of the pro-tax advocates and their allies on Capitol Hill. The NetChoice coalition, which counts as members eBay, Yahoo and the Electronic Retailing Association and opposes the sales tax plan, fears that the partisan shift will spell trouble.

One long-standing objection to mandatory sales tax collection, which the Supreme Court in a 1992 case left up to Congress to decide, is the complexity of more than 7,500 different tax agencies that each have their own (and frequently bizarre) rules. Some legal definitions tax Milky Way Midnight candy bars as candy and treat the original Milky Way bar as food. Peanut butter Girl Scout cookies are candy, but Thin Mints or Caramel deLites are classified as food.

The pro-tax forces say that a concept called the Streamlined Sales Tax Agreement will straighten out some of the notorious convolutions of state tax laws. Enzi's bill, introduced this week, relies on the agreement when providing "federal authorization" to require out-of-state retailers "to collect and remit the sales and use taxes" due on the purchase. (Small businesses with less than $5 million in out-of-state sales are exempted.)

It's "important to level the playing field for all retailers," Enzi said during Wednesday's hearing.

While it's too early to know how much support Enzi's bill will receive, foes of higher taxation are marshaling their allies. Sen. Ted Stevens, an Alaska Republican, said Wednesday that he'd like "to see an impregnable ban on taxes on the Internet."

A taxing question

Pro-tax and antitax forces are jockeying for position before a Net access tax moratorium expires in November. Also on the table: a proposal to usher in mandatory online sales taxes.

Enzi bill: Ushers in mandatory sales taxes on Internet purchases.

S. 156: Renews expiring access tax moratorium permanently.

H.R. 1077: Renews expiring access tax moratorium permanently and eliminates grandfather provision permitting nine states to collect taxes.

H.R. 763: Renews expiring access tax moratorium permanently.

Jeff Dircksen, the director of congressional analysis at the National Taxpayers Union in Alexandria, Va., said in written testimony prepared for the hearing: "If such a system of extraterritorial collection is allowed, Congress will have opened the door to any number of potential tax cartels that will eventually harm rather than help taxpayers."

Internet access taxes

A second category of higher Net taxes is technically unrelated, but is increasingly likely to be linked when legislation is debated in Congress later this year. That category involves access taxes, meaning taxes that local and state governments levy specifically on broadband or dial-up connections. (See CNET News.com's Tech Politics podcast this week with former House Majority Leader Dick Armey on this point.)

If the temporary federal moratorium is allowed to expire in November, states and municipalities will be allowed to levy a dizzying array of Net access taxes--meaning a monthly Internet connection bill could begin to resemble a telephone bill or airline ticket with innumerable and confusing fees tacked on at the end. In some states, telephone fees, taxes and surcharges run as high as 20 percent of the bill.

These fees that states levy on mobile phones, cable TV and landlines run far higher than state sales taxes, averaging 13.3 percent; they cost the average household $264 a year and total $41 billion annually, according to a report published by the Chicago-based Heartland Institute this month. Landlines are taxed at the highest rate, 17.23 percent, while Internet access remains virtually tax free, except in a few states that were grandfathered in a decade ago.

Dircksen, from the National Taxpayers Union, urged the Senate on Wednesday to "encourage economic growth and innovation in the telecommunications sector--in contrast to higher taxes, fees and additional regulation" by at least renewing the expiring moratorium, and preferably making it permanent. Broadband providers like Verizon Communications also want to make the ban permanent.

But state tax collectors are steadfastly opposed to any effort to renew the ban, let alone impose a permanent extension. Harley Duncan, the executive director of the Federation of Tax Administrators, said Wednesday that higher taxes will not discourage broadband adoption and his group "urges Congress not to extend the Act because it is disruptive of and poses long-term dangers for state and local fiscal systems."

Sen. Daniel Inouye, the influential Democratic chairman of the Senate Commerce committee, said: "Listening to the testimony, I would opt for a temporary extension, if at all."

If the moratorium expires, one ardent tax foe is predicting taxes on e-mail. A United Nations agency proposed in 1999 the idea of a 1-cent-per-100-message tax, but retreated after criticism. (A similar proposal, called bill "602P," is, however, actually an urban legend.)

"They might say, 'We have no interest in having taxes on e-mail,' but if we allow the prohibition on Internet taxes to expire, then you open the door on cities and towns and states to tax e-mail or other aspects of Internet access," said Sen. John Sununu, a New Hampshire Republican. "We need to be honest about what we're endorsing and what we're opposing."
http://news.com.com/Net+taxes+could+...3-6186193.html





As the Grapevine Withers, Spam Filters Take Root
John Tierney

Thanks to the sociologist Dan Ryan, I’m coming to terms with my need for spam filters against my friends’ e-mail.

It’s not that I’ve lost interest in them. I still want to know how they’re doing, but I can survive without their vacation itinerary or last weekend’s golf scores. I’d like to keep up with their work, but I don’t need all their blog posts or their deep thoughts on the Iowa caucuses.

I’m glad to see a joke or an article that they picked out for me, but not one that they blasted to everyone in their address book. Did they really imagine I wanted to drop everything this second to contemplate the future of NATO? Are they writing personal notes to their A-list friends and relegating me to the @-list? What am I, chopped Spam?

What we have here is obviously not a failure to communicate, but it’s not quite the opposite either. It’s not a simple case of information overload, according to a seminal article in the journal Sociological Theory by Dr. Ryan, a professor at Mills College in Oakland, Calif. He defines it — with all the flair we’ve come to expect from that journal — as a violation of the “notification norms” that “constrain the behavior of nodes in social networks.”

Technology now lets us tell everyone everything at once, but we still value a network that existed before the Web: the grapevine. When you pass along gossip to a friend or colleague, you’re doing more than just relaying news. You’re defining a social circle. You’re reassuring the listeners that they’re in the loop — and subtly obliging them to remember that you are, too.

The golden rule of this “information order,” as Dr. Ryan calls it, is to tell unto others as you would have them tell unto you. You shouldn’t leave your trusted colleagues at the office in the dark about a coming shake-up, but you shouldn’t be an electronic font of trivia, either. You filter the news for them and expect them to do the same for you. You tell them what they need to know in the way they expect to hear it.

“Even though we all claim to hate gossip and being in or out of the loop, there’s an emotional benefit to grapevines,” Dr. Ryan says. “I think of it as informational grooming, like primates picking bugs off each other. We don’t want to get information all at once. Some you want to get as an insider: ‘I talked to Bob yesterday and he wanted me to tell you...’ Telling everyone violates our sense that we live in a rich array of social relationships.”

Technology hasn’t eliminated the desire for rules about who tells what, when and how. You don’t want your wife or girlfriend to tell you she’s pregnant by sending an e-mail message. A close friend could be miffed if he found out about your hot date on Friday not from you, but from a casual acquaintance who had already seen pictures of it on your Facebook page.

A host may think it’s a friendly gesture to e-mail invitations to a party with all the recipients’ names in the address line, but if the names aren’t in alphabetical order and yours is near the end, the message may not seem so friendly. You could have the same out-of-the-loop feeling as a manager who learns big news about his department in the same e-mail message sent to everyone else in the company.

Every message incorporates another message in the way it is delivered, whether it’s an e-mail or a ransom note pinned to an ear. Dr. Ryan calls this metanotification. The metamessage is usually less gruesome than a body part, although once a CC: list reaches critical mass it has a horror all its own. Dr. Ryan said that in barraging me with “friendly-fire spam,” my correspondents were also telling me:

“I’m too busy to be bothered thinking much about whether and why you, recipient, might actually want to know this.”

“At this moment I’m treating you just like everyone else in my address book.”

“I have this category for you — journalist — and some really crude and naïve sense of what you must be interested in, and I think that I’m plugged into the stuff that’s going on in the world a lot better than you are, so you’re lucky that I’m your eyes and ears out here.”

Yes, those messages came through pretty clearly, although I like to think my friends didn’t mean to do all that metanotifying. They presumably figured I might be interested in what they were thinking — and often I am. Compared with all the spam I get from strangers, their stuff is riveting — even when they launch into their Middle East peace plans.

But it’s still spam, and you don’t expect that from friends. It’s the equivalent of the holiday cards with the what-our-family-did-last-year letter. We recognize that these letters serve a purpose — they can even be entertaining, intentionally or not — and we realize that the writers don’t have time to send personal letters to all their friends. But the mass-produced pseudo-intimacy still seems dorky.

That’s why, as Dr. Ryan pointed out, so many of these holiday dispatches begin with an apology like, “We hate these photocopied letters, too.” The writers know they must acknowledge that a notification norm has been broken. The most diligent will scribble a brief personal note on the letter to send a further message: See, you’re not like all the others. We have a relationship!

That’s the kind of signal I’ve started looking for in the e-mail messages from my friends-turned-spammers — some recognition that I’m more than just a Contact. I’m trying filters that distinguish letters with a small display of netiquette, like having my name somewhere besides the address line. I’m not looking for a long personal note. It’s the metanotification that counts.
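
That last rule is simple enough to express in code. The toy Python sketch below checks for the signal described above: the recipient's name appearing somewhere besides the address line, or a recipient list short enough to suggest the note was actually meant for one person. The message format, names and ten-recipient cutoff are invented for illustration.

MY_NAMES = ("john", "tierney")   # names the filter looks for; substitute your own

def looks_personal(message: dict) -> bool:
    """message holds 'to' and 'cc' lists of addresses and a 'body' string."""
    body = message["body"].lower()
    mass_mailing = len(message["to"]) + len(message["cc"]) > 10
    addressed_to_me = any(name in body for name in MY_NAMES)
    return addressed_to_me or not mass_mailing

msg = {
    "to": [f"friend{i}@example.com" for i in range(25)],
    "cc": [],
    "body": "Thought everyone would enjoy my deep thoughts on the future of NATO...",
}
print("deliver" if looks_personal(msg) else "file under friendly-fire spam")
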
http://www.iht.com/articles/2007/05/...gy/ptend24.php





Google Co-Founder's Bride Search Gets Result

Google Inc. billionaire co-founder Sergey Brin married his longtime girlfriend recently in a ceremony in the Bahamas, a relative said on Wednesday.

The relative, who asked to remain anonymous, said about 60 people attended Brin's wedding to Anne Wojcicki, which was held on a sandbar on May 5 and mixed Jewish traditions with unconventional elements.

"In most Jewish ceremonies you don't wear your swimsuit," the person said. "Everyone took a boat, but some people got off the boat early" to swim.

But Brin himself was not talking.

During a media question-and-answer session about Google's new unified Web search service at the company's "Googleplex" headquarters in Mountain View, California, on Wednesday, a reporter congratulated Brin.

A long, uncomfortable silence followed before Brin offered a faint smile and said, "Let's keep it to search," triggering an outburst of laughter from the audience.

Brin, 33, started Google in 1998 with Stanford University classmate Larry Page. The company went public in 2004, and the stock's stratospheric rise has driven Brin's net worth up to an estimated $14 billion, placing him at No. 12 on Forbes magazine's list of richest Americans last year.

Wojcicki met Brin through her sister Susan, who sublet to Brin and Page the garage of the house she was renting while they were getting Google off the ground, according to the San Jose Mercury News.
http://www.reuters.com/article/peopl...28103520070518





Ka-ching

Google Invests $3.9M in Start-Up of Co-Founder's Wife
Michael Liedtke

Fresh off her marriage to Google co-founder Sergey Brin, biotechnology entrepreneur Anne Wojcicki is now wedded to the company too.

In Securities and Exchange Commission documents filed Tuesday, Google revealed that it invested $3.9 million to obtain a minority stake in Wojcicki's biotech start-up, 23andMe.

Some of the money that Google staked this month was used to repay $2.6 million in financing previously provided to 23andMe by Brin, one of the world's wealthiest men with an estimated $16 billion fortune.

The disclosure, which marked Google's first confirmation of a secretive marriage consummated in the Bahamas earlier this month, could pose nettlesome questions of nepotism for the Internet search leader, which ranks among the world's most scrutinized publicly held companies.

But any criticism of Google's ties to 23andMe is likely to be tempered by the investment's relatively small size. Google earned $1 billion during the first quarter, or about $11 million per day, and ended March with $11.9 billion in cash.

Tuesday's filing didn't explain the rationale for Google's investment in 23andMe, but it said the company's audit committee had consulted an independent adviser to assess the start-up's value.

Google invested in 23andMe as part of its goal of developing new ways to help people make sense of their genetic information, spokesman Jon Murchinson said in a statement. He said Brin recused himself from all management and board discussions about the 23andMe investment to avoid a conflict of interest.

Formerly a biotech investor herself, Wojcicki co-founded 23andMe last year with biopharmaceutical industry veteran Linda Avey. The start-up, located near Google's Mountain View headquarters, is trying to "allow individuals to gain deeper insights into their ancestry, genealogy and inherited traits," Wojcicki said in a statement.

The start-up, whose name refers to the 23 pairs of chromosomes in humans, plans to officially launch by the end of this year, according to its website.

23andMe also has attracted investments from venture capital firms New Enterprise Associates and MDV-Mohr Davidow Ventures as well as biotech bellwether Genentech, whose chief executive, Arthur Levinson, sits on Google's board. The amount contributed by the other investors wasn't disclosed.

Brin, 33, might not have met his wife if he and his partner Larry Page hadn't decided to incorporate Google in September 1998 and move their work from their Stanford University dorm rooms.

Google subsequently leased the garage of Susan Wojcicki's Menlo Park home; she introduced her sister Anne to Brin. Susan Wojcicki now works as a vice president of development for Google, which owns the home where Brin and Page launched the company.
http://www.usatoday.com/tech/techinv...-startup_N.htm





What's Hot? Google Offers Daily Updates on Trends
Eric Auchard

The art of trend-spotting is set to take a more scientific turn as Google Inc., the world's top Web search company, on Tuesday unveils a service to track the fastest-rising search queries.

Google Hot Trends combines elements of Zeitgeist and Trends -- two existing Google products that give a glimpse into Web search habits, but only in retrospect based on weeks-old data.

Hot Trends, a list of the current top-100 fastest-rising search trends, will be refreshed several times daily, using data from millions of Google Web searches conducted up to an hour before each update, the company said.
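
Google has not published how it scores "fastest-rising," but the basic idea of comparing a recent window of query counts against a baseline window is straightforward. The Python sketch below shows one simple way to rank rising queries; the sample counts and the smoothing constant are invented for illustration, not Google's formula.

from collections import Counter

def rising_queries(baseline_counts, latest_counts, top_n=10, smoothing=5):
    """Rank queries by growth of the latest window over the baseline window.
    `smoothing` keeps queries absent from the baseline from dominating."""
    scores = {}
    for query, latest in latest_counts.items():
        before = baseline_counts.get(query, 0)
        scores[query] = latest / (before + smoothing)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

baseline = Counter({"weather": 900, "american idol": 300, "motion to recommit": 2})
latest = Counter({"weather": 950, "american idol": 320, "motion to recommit": 400})

for query, score in rising_queries(baseline, latest, top_n=3):
    print(f"{query:20} rising score {score:.1f}")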

What's hot and what's not will be knowable to the masses in ways pioneering social philosophers could never have imagined.

"There are events going on all the time that most of us aren't aware of happening," Amit Patel, a Hot Trends software engineer and an early Google employee, said in an interview.

From news to gossip, the profound to the truly inane: baffled Google users seek the meaning of the phrase "motion to recommit" in the latest congressional debate, or search the phrase "I who have nothing" -- the title of a song sung by a recent contestant on televised competition "American Idol."

And watch how the Web generation cuts corners: Each night before a national college entrance examination, Google sees heavy searches from what appear to be high-school students making last-minute preparations ahead of the test, Patel said.

Top Of The Top Of The Pops

For years, Google has compiled a list of popular searches it calls Google Zeitgeist, offering a weekly, monthly or annual retrospective look back at what its users wanted to know.

Hot Trends updates and automates this process by giving a contemporary snapshot of what is on people's minds -- at least as reflected by what goes through Google Web search each day.

Each Hot Trends response shows not just links to potentially related sites, but also links to associated Google News stories and blog searches, providing added context.

"After we find what trends that are interesting, users will want to know why are they important?" Patel said. "We are helping you find an explanation: There is some investigation that has to be done by the user."

The experimental service also allows users to select specific dates to see what the top-rising searches were at a given point in the recent past, starting in mid-May.

The Mountain View, California-based company is also introducing changes to its existing Google Trends service, which offers charts and other data to see how a trend evolves over time or how it compares to other trends over time.

Now, in addition to viewing the top countries and cities that searched for a term, users can see how search habits around a particular trend vary from region to region in the United States, as well as across 70 different countries.

For example, political junkies can track Google search patterns for particular U.S. presidential candidates by state.

Hot Trends, at http://www.google.com/trends/, finds the fastest-rising trends instead of the most-popular topics, which search experts say still center on sex, sex and more sex. Hot Trends screens out "inappropriate language" and pornography.
http://www.reuters.com/article/techn...36402620070522





MySpace Agrees to Share Sex Offender Data With States
AP

MySpace.com will provide law enforcement officials with data on registered sex offenders who use the popular social networking Web site, the company said today.

Attorneys general from eight states demanded last week that the company provide data on how many registered sex offenders are using the site and where they live. MySpace initially refused, citing federal privacy laws.

Connecticut Attorney General Richard Blumenthal said today the company agreed to comply with subpoenas from at least 14 states.

"Our subpoena compels this information right away within hours not weeks, without delay because it is vital to protecting children," Blumenthal said. "Many of these sex offenders may have violated their parole or probation by contacting or soliciting children on MySpace."

Blumenthal said along with names and addresses, his office also will be looking for detailed information about how each sex offender used MySpace. That information will be cross-referenced against the terms of probation and parole for each of those MySpace members, he said.

"Contact with children, is likely to be prohibited in many of these cases," Blumenthal said.

MySpace obtained the data from Sentinel Tech Holding Corp., which the company partnered with in December to build a database with information on sex offenders.

"We developed ’Sentinel Safe’ from scratch because there was no means to weed them out and get them off of our site," said Mike Angus, MySpace’s executive vice president and general counsel.

Angus said the company, owned by media conglomerate News Corp., had always planned to share information on sex offenders it identified and has already removed about 7,000 profiles out of a total of about 180 million.

"This is no different than an offline community," he said. "We’re trying to keep it safe."

Angus said the company had also made arrangements to allow law enforcement to use the Sentinel software directly.
http://news.newstimes.com/news/updat...e=news_updates





Lawmakers Want Sex Offenders to Register e-Mail Addresses
AP

Connecticut and more than a dozen other states are considering whether to require convicted sex offenders to register their e-mail addresses as part of efforts to combat online sexual predators.

Three states – Virginia, Arizona and Kentucky – already require sex offenders to provide law enforcement with their e-mail addresses as well as their home addresses.

The bills have support from the popular social networking site MySpace.com, which has been under increasing pressure to ferret out convicted child molesters and stop them from creating online profiles.

Connecticut’s proposal, which passed the state House of Representatives 149-0 on Thursday, would expand the state’s version of Megan’s Law, named after Megan Kanka, a 7-year-old New Jersey girl who was raped and murdered in 1994 by a sex offender who lived across the street.

"Megan’s Law is based on keeping track of where sex offenders reside. So it makes sense to track their location in cyberspace," said Connecticut House Speaker James Amann, D-Milford, who championed a 1995 bill requiring sex offenders to register their home addresses. "The Internet represents a new frontier of sex predators."

Connecticut’s bill would require sex offenders to register any e-mail addresses, instant message addresses or other Internet identifiers with the state police. Those who don’t report the information would face up to five years in prison.

It also makes it a Class C felony, punishable by up to 10 years in prison, for any person to misrepresent his or her age to entice a minor on the Internet to engage in sexual activity.

The bill awaits action in the Senate.

MySpace is lobbying for similar legislation on both state and national levels. The company’s chief security officer, Hemanshu Nigam, appeared at a state Capitol news conference with Amann on Thursday.

"Our laws need to change with the times," he said. "We can no longer unwittingly provide an advantage to predators online."

Besides Connecticut, MySpace said California, Colorado, Florida, Illinois, Louisiana, Massachusetts, Minnesota, New Jersey, New Mexico, New York, North Carolina, Oklahoma, Oregon, Pennsylvania, Tennessee, and Texas have considered or are considering legislation that requires registered sex offenders to report their e-mail addresses.

Typical MySpace profiles include photos, music and personal information, including hometowns and education. Users can send messages to one another and, in many cases, browse other profiles.

The company said Thursday that it has removed 7,000 registered sex offenders’ profiles from its site after hiring a software company to identify them. It is providing the information to law enforcement and state attorneys general.

"Mandatory sex offender e-mail registration legislation would significantly expedite this process and help keep sex offenders off our sites," Nigam said.

Under the bill, whenever MySpace determines that one of Connecticut’s approximately 4,100 registered sex offenders is using the site, it must contact state police.

The bill has support from Connecticut Attorney General Richard Blumenthal, who, along with six of his counterparts in other states, demanded this month that MySpace turn over the names and information of the identified sex offenders on its site. But he said more needs to be done.

"This one step alone is insufficient," Blumenthal said. "Many predators have never been convicted of any sexual offense, and many more use aliases and fake information. Against this threat, we need age verification, identity checks and other measures to protect children on social networking sites."

Connecticut’s bill also adds people paid to repair computers to the list of those required by law to report suspicions of child abuse they come across while on the job.

Thirty-seven professions in the state are required to report child abuse, including teachers, medical personnel and counselors. People in those professions who report abuse in good faith are immune from civil and criminal liabilities. Failure to notify the state Department of Children and Families could lead to fines up to $500.
http://www.newstimeslive.com/news/story.php?id=1054741





Web Sites Listing Informants Concern Justice Dept.
Adam Liptak

There are three “rats of the week” on the home page of whosarat.com, a Web site devoted to exposing the identities of witnesses cooperating with the government. The site posts their names and mug shots, along with court documents detailing what they have agreed to do in exchange for lenient sentences.

Last week, for instance, the site featured a Florida man who agreed in September to plead guilty to cocaine possession but not gun charges in exchange for his commitment to work “in an undercover role to contact and negotiate with sources of controlled substances.” The site says it has identified 4,300 informers and 400 undercover agents, many of them from documents obtained from court files available on the Internet.

“The reality is this,” said a spokesman for the site, who identified himself as Anthony Capone. “Everybody has a choice in life about what they want to do for a living. Nobody likes a tattletale.”

Federal prosecutors are furious, and the Justice Department has begun urging the federal courts to make fundamental changes in public access to electronic court files by removing all plea agreements from them — whether involving cooperating witnesses or not.

“We are witnessing the rise of a new cottage industry engaged in republishing court filings about cooperators on Web sites such as www.whosarat.com for the clear purpose of witness intimidation, retaliation and harassment,” a Justice Department official wrote in a December letter to the Judicial Conference of the United States, the administrative and policy-making body of the federal court system.

“The posting of sensitive witness information,” the letter continued, “poses a grave risk of harm to cooperating witnesses and defendants.”

In one case described in the letter, a witness in Philadelphia was moved and the F.B.I. was asked to investigate after material from whosarat.com was mailed to his neighbors and posted on utility poles and cars in the area.

The federal court in Miami has provisionally adopted the department’s recommendation to remove plea agreements from electronic files, and other courts are considering it and experimenting with alternative approaches.

Judge John R. Tunheim, a federal judge in Minneapolis and the chairman of a Judicial Conference committee studying the issue, acknowledged the gravity of the safety threat posed by the Web sites but said it would be better addressed through case-by-case actions.

“We are getting a pretty significant push from the Justice Department to take plea agreements off the electronic file entirely,” Judge Tunheim said. “But it is important to have our files accessible. I really do not want to see a situation in which plea agreements are routinely sealed or kept out of the electronic record.”

Judge Tunheim said his committee was working on recommendations for a nationwide approach to the issue. He said he favored putting the details of a witness’s cooperation into a separate document and sealing only that document, or withholding it from the court file entirely.

For those who want to read the details on cooperating witnesses, whosarat.com charges between $7.99 for a week and $89.99 for life. The latter option comes with a free “Stop Snitching” T-shirt.

The site was started by Sean Bucci in 2004, after he was indicted in federal court in Boston on marijuana charges based on information from an informant. The site was initially modest and free, the seeming product of a drug defendant’s fit of pique.

Over time, it attracted thousands of postings, many backed by court documents.

Mr. Bucci was convicted in February and will be sentenced next month. Stylianus Sinnis, a lawyer for Mr. Bucci, who is incarcerated, would not say whether Mr. Bucci was still affiliated with the site.

Contacted by e-mail, Mr. Capone called a reporter at an arranged time. He would not provide his phone number but insisted that his name was authentic. He said Mr. Bucci was no longer associated with the site.

The site itself says it is “designed to assist attorneys and criminal defendants with few resources.”

Defense lawyers are, in fact, hungry for any information about the nature of the case against their clients. “The more information out there, the easier it is for the truth to come out at trial,” said David O. Markus, a criminal defense lawyer in Miami.

Lawyers and their investigators can, of course, check court files and gather other material featured on the site themselves. But the site makes it easier, cheaper and quicker to find information about informants who may be involved in several cases in several jurisdictions, the site’s spokesman said.

Eliminating electronic access to plea agreements and related documents would represent a real hardship, Mr. Markus said.

“It doesn’t advance any of the stated safety goals, and it just serves as a roadblock to the public’s constitutional right to access to their court,” Mr. Markus said. “If there is an issue in a particular case, then let’s address it, but to sweep everything under the rug isn’t right.”

The site says that it “does not promote or condone violence or illegal activity against informants or law enforcement officers.”

Frank O. Bowman, a former federal prosecutor who teaches law at the University of Missouri, disputed that. “It’s reprehensible and very dangerous,” Professor Bowman said of the site. “People are going to die as a result of this.”

Defendants who choose to go to trial will, of course, eventually learn the identities of the witnesses who testify against them. But the site also discloses the identities of people engaged in undercover operations and those whose information is merely used to build a case. The widespread dissemination of informants’ identities, moreover, may subject them to retribution from friends and associates of the defendant.

Still, Professor Bowman, an authority on federal sentencing law, said he would hate to see the routine sealing of plea agreements. “It certainly is terribly important for the public ultimately to know who’s flipped,” he said.

Professor Bowman added that he was studying the deals prosecutors made in the aftermath of the collapse of Enron, the energy company. “To do that effectively,” he said, “I really need to know who flipped and the nature of their plea agreements.”

Judge William J. Zloch, the chief judge of the Federal District Court in Miami, said the move to bar electronic access to plea agreements there was supported by prosecutors and some defense lawyers. “It’s available to the public,” he said of the documents. “It’s just that you have to go to the courthouse.”

Judge Zloch added that his court would discuss whether to make the change permanent in the coming months.

The existence of the site raises a First Amendment issue for its founder, Mr. Bucci. After his conviction, he filed a motion last month seeking a new trial, saying the government’s true purpose in prosecuting him was to shut down the site because “he dared to assert his First Amendment right” to post the information.

In a response filed Thursday, prosecutors conceded that “various levels of government have long expressed concern that the Web site endangers the lives of informants and undercover agents, and compromises investigations.” But they denied that the government’s dismay about the site influenced their decision to prosecute Mr. Bucci.

Most legal experts agreed that whosarat.com is protected by the First Amendment. In 2004, a federal judge in Alabama refused to block a similar site created by a criminal defendant, Leon Carmichael Sr., who has since been convicted of drug trafficking and money laundering.

“While the Web site certainly imposes discomfort on some individuals,” Judge Myron H. Thompson wrote, “it is not a serious threat sufficient to warrant a prior restraint on Carmichael’s speech or an imposition on his constitutional right to investigate his case.”

But Judge Thompson’s ruling was not categorical. “A few differences in Carmichael’s site could have changed the court’s calculus,” he wrote. And some law professors said that sites like whosarat.com might be subject to prosecution for obstruction of justice or aiding and abetting crimes.

In its December letter, from Michael A. Battle, then the director of the Executive Office for United States Attorneys, the Justice Department urged courts to put a statement on their Internet sites “warning against the republishing or other use of official court records for illicit purposes such as witness intimidation.” Judge Tunheim said his Judicial Conference committee was awaiting legal advice on that possibility.

For now at least, the Justice Department and the federal judiciary appear to be focused on keeping information from the sites rather than trying to stop the sites from publishing what they learn.

Government secrecy, said Eugene Volokh, a law professor at the University of California, Los Angeles, “ends up being part of the price you pay for having broad speech protection.”
http://www.nytimes.com/2007/05/22/wa...on/22plea.html





Microsoft is not the Real Threat
Mark Shuttleworth

Much has been written about Microsoft’s allegation of patent infringements in Linux (by which I’m sure they mean GNU/Linux). I don’t think Microsoft is the real threat, and in fact, I think Microsoft and the Linux community will actually end up fighting on the same side of this issue.

I’m in favour of patents in general, but not software or business method patents. I’ll blog separately some day about why that’s the case, but for the moment I’ll just state for the record my view that software patents hinder, rather than help, innovation in the software industry.

And I’m pretty certain that, within a few years, Microsoft themselves will be strong advocates against software patents. Why? Because Microsoft is irrevocably committed to shipping new software every year, and software patents represent landmines in their roadmap which they are going to step on, like it or not, with increasing regularity. They can’t sit on the sidelines of the software game - they actually have to ship new products. And every time they do that, they risk stepping on a patent landmine.

They are a perfect target - they have deep pockets, and they have no option but to negotiate a settlement, or go to court, when confronted with a patent suit.

Microsoft already spends a huge amount of money on patent settlements (far, far more than they could hope to realise through patent licensing of their own portfolio). That number will creep upwards until it’s abundantly clear to them that they would be better off if software patents were history.

In short, Microsoft will lose a patent trench war if they start one, and I’m sure that cooler heads in Redmond know that.

But let’s step back from the coal-face for a second. I have high regard for Microsoft. They produce some amazing software, and they made software much cheaper than it ever was before they were around. Many people at Microsoft are motivated by a similar ideal to one we have in Ubuntu: to empower people for the digital era. Of course, we differ widely on many aspects of the implementation of that ideal, but my point is that Microsoft is actually committed to the same game that we free software people are committed to: building things which people use every day.

So, Microsoft is not the real patent threat to Linux. The real threat to Linux is the same as the real threat to Microsoft, and that is a patent suit from a person or company that is NOT actually building software, but has filed patents on ideas that the GNU project and Microsoft are equally likely to be implementing.

Yes, Nathan, I’m looking at you!

As they say in Hollywood, where there’s a hit there’s a writ. And Linux is a hit. We should expect a patent lawsuit against Linux, some time in the next decade.

There are three legs to IP law: copyright, trademark and patents. I expect a definitive suit associated with each of them. SCO stepped up on the copyright front, and that’s nearly dealt with now. A trademark-based suit is harder to envisage, because Linus and others did the smart thing and established clear ownership of the “Linux” trademark a while ago. The best-practice trademark framework for free software is still evolving, and there will probably be a suit or two, but none that could threaten the continued development of free software. And the third leg is patent law. I’m certain someone will sue somebody else about Linux on patent grounds, but it’s less likely to be Microsoft (starting a trench war) and more likely to be a litigant who only holds IP and doesn’t actually get involved in the business of software.

It will be a small company, possibly just a holding company, that has a single patent or small portfolio, and goes after people selling Linux-based devices.

Now, the wrong response to this problem is to label pure IP holders as “patent trolls”. While I dislike software patents, I deeply dislike the characterisation of pure IP holders as “patent trolls”. They are only following the rules laid out in law, and making the most of a bad system; they are not intrinsically bad themselves. Yes, Nathan, all is forgiven. One of the high ideals of the patent system is to provide a way for eccentric genius inventors to have brilliant insights in industries where they don’t have any market power, but where their outsider-perspective leads them to some important innovation that escaped the insiders. Ask anyone on the street if they think patents are good, and they will say, in pretty much any language, “yes, inventors should be compensated for their insights”. The so-called “trolls” are nothing more than inventors with VC funding. Good for them. The people who call them trolls are usually large, incumbent players who cross-license their patent portfolios with other incumbents to form a nice, cosy oligopoly. “Trolling” is the practice of interrupting that comfortable and predictably profitable arrangement. It’s hard to feel any sympathy for the incumbents at all when you look at it that way.

So it’s not the patent-holders who are the problem, it’s the patent system.

What to do about it?

Well, there are lots of groups that are actively engaged in education and policy discussion around patent reform. Get involved! I recently joined the FFII: Foundation for a Free Information Infrastructure, which is doing excellent work in Europe in this regard. Canonical sponsored the EUPACO II conference, which brought together folks from across the spectrum to discuss patent reform. And Canonical also recently joined the Open Invention Network, which establishes a Linux patent pool as a defensive measure against an attack from an incumbent player. You can find a way to become part of the conversation, too. Help to build better understanding about the real dynamics of software innovation and competition. We need to get consensus from the industry - including Microsoft, though it may be a bit soon for them - that software patents are a bad thing for society.
http://www.markshuttleworth.com/archives/118





Microsoft Will Not Sue Over Linux Patents
Tom Espiner

Microsoft has said it has no immediate plans to sue after alleging patent infringements by open-source vendors.

In an official statement emailed to ZDNet UK, Microsoft confirmed that it would not litigate for now.

"If we wanted to go down that road we could have done that three years ago," said a Microsoft spokesperson. "Rather than litigate, Microsoft has spent the last three years building an intellectual property bridge that works for all parties--including open source--and the customer response has been tremendously positive. Our focus is on continuing to build bridges."

The infringement allegations, made by Microsoft in a Fortune magazine article, were that free and open-source software violated more than 230 of its patents.

In the interview, Microsoft counsel Brad Smith alleged that the Linux kernel violated 42 Microsoft patents, while its user interface and other design elements infringed on a further 65. OpenOffice.org was accused of infringing 45 patents, along with 83 more in other free and open-source programs, according to Fortune.

Microsoft has so far refused to specify which patents are allegedly being infringed by open-source vendors, leading some experts to assert that its threats are empty.

According to John McCreesh, OpenOffice.org marketing project lead, the open-source world is convinced that Microsoft would not substantiate its allegations. "[Patent litigation] is not an issue, but the Microsoft statements turn a non-issue into an issue in the minds of some corporate buyers," said McCreesh.

McCreesh added that while Microsoft may not have plans to sue, it could be using the threat of litigation to try to encourage corporate customers to move to those open-source product vendors with whom it had signed licensing agreements, such as Novell. "Microsoft has spent time and money accumulating patents. Maybe it has started using that armory to move corporate customers to open-source software that Microsoft approves of," McCreesh told ZDNet UK. "The patent covenant with Novell covers OpenOffice.org, and guarantees corporate customers will not be pursued by Microsoft."

McCreesh said that he suspected Microsoft was also trying to encourage more open-source vendors to enter into a commercial agreement such as the one with Novell.

Nick McGrath, Microsoft's UK director of platform strategy, told ZDNet UK last week that some customers were worried about the possibility of patent litigation. "We conducted research into the best way to give customers peace of mind," said McGrath. "For patent violation we give unlimited indemnification to customers [using Novell]."

Senior analysts said that while the threat of patent litigation might have caused a furore in the open-source community, actual litigation could cause damage to Microsoft similar to the damage suffered by SCO. "I hope it doesn't turn into another SCO," said Jon Collins, service director of Freeform Dynamics. "Microsoft is trying to play nice with the open-source community, but it has to do the Republican stance for its shareholders. There's a massive tension between the two positions."

"The danger is that it makes its stance too strong. SCO came away with egg on its face and damaged share price. The danger is Microsoft might respond to a situation to try to make an example, and that action could damage the brand," Collins added.
http://www.zdnetasia.com/news/softwa...2014865,00.htm





Apple TV and the Origin of Home Theater

Will Apple TV take off like the iPod? Just like the original iPod, Apple TV isn't designed to impress analysts doing reviews. There are few buzzwords to drop and little need for high priests to explain how it works. In typical Apple fashion, it just works, without really even requiring a manual. Whether it sells or not will have little to do with analysts' opinions, and more to do with how useful it is to consumers.

Here's a historical overview of the origins of home theater, leading toward a comparison of how Apple TV stacks up against previous and current generations of consumer home theater products.

Before Home Theater: Theater through the 30s.
Before home theater we just had the theater. At the turn of the century, the fledgling film industry quickly spawned movie theaters as a cheap form of entertainment. Not even Daddy Warbucks was flush enough to have a home theater; he simply rented out an entire show to watch a movie.

Early movies up into the 20s were filmed with hand-cranked cameras without any sound, and delivered to theaters with sheet music for an organist to play during the film. New technology eventually allowed for both automated cameras and projectors as well as synchronized sound recording, ushering in a golden age of cinema.

There wasn't a huge choice in what to watch because of the overhead required to project a film and maintain a theater, but there also wasn't much competition in entertainment.

Television in the 40s.
The arrival of television would change that, but not immediately. TV was introduced to consumers during the Depression at a time when many Americans didn't even have electricity, let alone the money to buy an expensive entertainment novelty. Home entertainment was commonly limited to radio until World War II.

In the 40s, just as TV began to gain in popularity, the US War Production Board halted the manufacturing of consumer TVs for over three years. After that, pent up demand combined with the new post-war prosperity quickly increased the installed base of television, which in turn prompted the creation of new content to watch.

New TV content followed the existing pattern of radio: local affiliate stations broadcast television programming created by major networks, and supported their operations with local sponsors.

TV and Movies Fight For Attention in the 50s and 60s.
As TV became affordable, the movie industry scrambled to maintain customers. TV networks didn't commonly play movies; they developed their own content, acting as radio stations with a picture rather than as a home version of the movie theater.

Still, the new availability of home entertainment ate into theaters' business. To attract customers, the movie industry invented a variety of new technologies that distinguished movies with:

•color picture
•widescreen display
•high definition picture
•stereophonic sound

Color picture: Adding color to movies was straightforward: use color film. Color production was more expensive, but it didn't cost theaters extra to show color movies.

Before color movie film became common in the early 50s, systems like Technicolor required capturing multiple reels of black and white film using color filters. Each reel of film was then soaked in a primary color dye and used to print a color version of the film.

Adding color to TV was more difficult. Early designs used complex mechanical systems that projected images through spinning color wheels and aligned the picture using mirrors. Even after a simpler and backwardly compatible system for broadcasting color TV was standardized upon, it didn't magically upgrade the networks’ broadcasts.

Color support was not only more expensive to produce and broadcast, but also required far more expensive sets to watch it. Without an installed base of color TVs, it made little sense for broadcasters to upgrade all their equipment to support color. In fact, half of the American TV networks actually resisted the move to color.

Only two of the four major TV networks in the US were owned by TV makers, and one of them--the DuMont Network--was already going out of business in the early 50s.

NBC survived, and was left as the only network with a real reason to produce color TV broadcasts; after all, NBC was owned by RCA, which had color TVs to sell. CBS and ABC had nothing to gain from selling Americans on color broadcasts: doing so would only have enriched NBC's parent company while forcing them to invest in expensive new equipment.

The high cost of color equipment and the lack of color content prevented color TV from gaining much popularity well into the late 60s, despite the technology having been invented back in the 40s.

That helped to make color in the movies a compelling reason to go to the theater. Another movie theater feature unavailable at home on TV was:

Widescreen display: Silent movies were originally shot at the standard 1.33:1 aspect ratio (4:3) commonly used by television. When sound was added, the Academy standardized on a compatible full screen ratio of 1.37:1, which allowed for a vertical soundtrack to be printed on the film next to the picture without creating a tall picture.

This standard aspect ratio was widened in three ways in the 50s and 60s to help differentiate movies from TV:

•anamorphic films are shot with a lens that compresses the picture horizontally on the film. An anamorphic projector reverses the squeeze, presenting the movie at around 2.40:1 (see the sketch after this list). Fox developed CinemaScope as one of the first anamorphic systems and licensed it to other studios. One of the better examples of CinemaScope was Disney’s 20,000 Leagues Under the Sea.

Panavision later replaced CinemaScope with an improved anamorphic system that is commonly used today.

•wide film formats captured a larger frame on the negative. Instead of using Fox’s CinemaScope, Paramount developed its own system called VistaVision, which ran film through the camera horizontally, using more film area for a better picture.

•matted films simply block out portions of the original film to present a wider aspect ratio, creating a widescreen display by cutting off the top and bottom and blowing up the picture.

In the frame below, the yellow box represents the widescreen version seen in theaters, and the red box shows what would be displayed on TV in an open matte version.

The shot has to be composed to protect for full screen display, as this scene from A Fish Called Wanda illustrates. The alternative is to pan-and-scan: crop a narrower 4:3 area of the movie within the widescreen version--with the intent of capturing most of the action--and blow it up.

•multiple display films use more than one camera to capture extremely wide images. Cinerama originally projected three films from three projectors on a special wide screen. The shot from How the West Was Won shows two seams from the three projector system.

The widest formats arranged multiple cameras in a circle, projecting the results around the walls of the room. Circlorama used eleven cameras; Disney built similar Circle-Vision theaters with nine screens in its theme parks--although most are now closed.
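To make the geometry of the anamorphic and matted approaches above concrete, here is a rough sketch in Python. The squeeze factor and aperture figures are approximate, commonly cited values, not exact studio specifications.

# Rough aspect-ratio arithmetic for the widescreen methods above.
# The figures are approximate, commonly cited values; exact camera and
# projector apertures varied between studios and eras.

def anamorphic_ratio(camera_aspect, squeeze):
    """An anamorphic lens squeezes the image horizontally onto the film;
    the projector applies the same factor in reverse to un-squeeze it."""
    return camera_aspect * squeeze

def matted_ratio(full_aspect, crop_fraction):
    """Matting keeps the full width but crops the top and bottom,
    leaving only crop_fraction of the original frame height."""
    return full_aspect / crop_fraction

# CinemaScope-style anamorphic: a roughly 1.2:1 exposed frame with a 2x squeeze
print(anamorphic_ratio(1.2, 2.0))   # ~2.4:1, the "scope" ratio

# Matting a 1.37:1 Academy frame down to about 74% of its height
print(matted_ratio(1.37, 0.74))     # ~1.85:1, the common "flat" widescreen ratio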

High definition picture: larger format films used larger film, commonly 70 mm, to deliver a clearer picture. Todd-AO filmed movies in 65 mm and distributed them on 70 mm film, providing extra space for soundtracks on the film. A few early Todd-AO films also used faster frame rates, which made action smoother on the screen.

This frame of a 70 mm print of 2001: A Space Odyssey shows how much more film area is devoted to recording each frame of the movie compared to the 35 mm clips above. In addition, the 70 mm film also had room for extra magnetic tape audio recordings on both sides of the sprocket holes.

Films shot in 3D can also be grouped with these high definition formats. Like color and widescreen, 70 mm and 3D films offered an experience that couldn't be found at home.

However, the expense of filming in these formats, as well as the technical issues they created, limited their use. A film shot in Cinerama using three different cameras couldn't zoom in, for example.

3D films and other widescreen formats also required special projectors and screens in theaters. Without an audience of theaters, it made little sense to churn out movies using those features, and without the special content, there was little reason to build theaters that supported them.

A parallel example applies to audio reproduction.

Stereophonic sound: Stereo is typically thought of as two channels of sound using two speakers, but 'stereo' doesn't mean two; it refers to multi-dimensional depth.

Stereo sound relates to any system that plays back more than one channel of sound to create a surrounding, immersive reproduction of audio. Since wide, surrounding effects can be created with just two speakers, common stereo systems involve two recordings played back in concert.

This was another technology theaters could afford to add, while home users couldn't. With analog tube equipment in radios or TVs, playing back stereo sound literally required a second receiver circuit, a second set of preamps, and so on, neatly doubling the cost of the set. Until electronic circuits could reduce the cost of all those components, home stereo was simply too expensive. Stereo TV didn't become widely available until the 80s.

In the theater, the use of multiple sound tracks was a big draw. Disney's Fantasia used four sound tracks, and later 70 mm releases in the 60s put as many as six soundtracks on the film.

In the 70s, several movies presented “in 70 mm,” including Logan’s Run, were not even originally shot on wide film; they were only printed to 70 mm to carry the extra magnetic sound tracks that couldn’t fit on 35 mm prints, where there was only room for four.

Theaters had to be outfitted to support playback of multiple channels of sound, and movies had to provide compatible, multichannel sound content in order to differentiate the theater from TV; the erosion of both helped to erase that advantage in the 70s.

The Decline of Movie Theaters in the 70s.
As competition from color TV began to eat into movie theaters' revenues, any remaining interest in rolling out expensive new technology in theaters began to collapse, killing much of the unique experience movies offered.

For example, many theaters were reluctant to invest in fancy sound systems to take advantage of the multichannel magnetic tape soundtracks glued directly on the film on some blockbuster movies, and commonly just played the movie's soundtrack in mono instead.

Theaters that had upgraded to fancier sound systems found the reverse problem: the magnetic tape soundtracks commonly wore right off the film after several showings, leaving little high quality content to play back using their fancy sound systems.

Increasingly, the multichannel magnetic soundtracks were replaced by two track optical sound, printed directly on the film. Below is a 35 mm print using 4 track magnetic sound, and a print with optical sound. The black graphic exaggerates the optical soundtrack for heightened dramatic effect.

As existing content degraded and fewer films were made using high definition prints or in true widescreen, the palace theaters of the golden age of movies began to disassemble themselves into multiplex outlets in order to at least offer more variety in the lower quality movies available.

The Rise of Home Theater Entertainment.
Many of the advancements pioneered in theaters had made it into the home by the 70s. Color TVs and color broadcasts were both commonly available, and FM radio had begun broadcasting in stereo. Both systems bent over backward to support existing equipment.

Inside Apple TV described how color TV signals were designed to be backwardly compatible with black and white TVs.

In the case of stereo FM radio, a similar method was used. Rather than broadcasting the left and right audio channels as two separate signals, FM radio combines them into one sum or "mid" signal, subtracts them to create a difference or "side" signal, and then modulates the two together for broadcast.

This allowed older FM radios to play the sum channel in mono without losing any part of the broadcast, while stereo FM radios demodulate both signals, adding them to recover the left channel and subtracting them to recover the right. This form of matrixing turns up repeatedly as a strategy for delivering lots of information through a narrow pipe.
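As a minimal sketch of that sum/difference matrixing (the arithmetic only; the real FM system also involves a pilot tone and a subcarrier), consider:

# Minimal sketch of sum/difference ("mid/side") matrixing.
# Real FM stereo also uses a 19 kHz pilot tone and a 38 kHz subcarrier;
# this only shows the arithmetic that makes the scheme backward compatible.

def encode(left, right):
    mid = (left + right) / 2.0    # mono receivers play just this channel
    side = (left - right) / 2.0   # stereo receivers also decode this one
    return mid, side

def decode(mid, side):
    left = mid + side             # (L+R)/2 + (L-R)/2 = L
    right = mid - side            # (L+R)/2 - (L-R)/2 = R
    return left, right

mid, side = encode(0.8, 0.2)
print(decode(mid, side))          # (0.8, 0.2) -- the original channels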
http://www.roughlydrafted.com/RD/RDM...5DA1A4A17.html





Hard Drive Shifts Movie Viewing From the Desk to the Couch
John Biggs

Watching digital movies stored on a computer’s hard drive on a TV set is still a relatively new idea, which is why hybrid products like the TrekStor MovieStation maxi t.u are popping up. This external hard drive with a built-in audio and video player puts the best of many technologies into one package.

The MovieStation, available at J&R Computer World and Amazon.com, starts at $300 for 250 gigabytes. The largest version, a 500-gigabyte model that costs about $400, can store about 128,000 MP3 files or 125 movies. The drive has optical and analog audio outputs along with composite video outputs for playing that content on any stereo or compatible television.
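As a rough sanity check on those capacity figures, assuming typical file sizes of the era rather than TrekStor's own specifications:

# Back-of-the-envelope check of the quoted capacity figures.
# File sizes are typical-of-the-era assumptions, not manufacturer specs.
capacity_gb = 500
print(capacity_gb * 1000 / 128000)   # ~3.9 MB per MP3 file
print(capacity_gb / 125)             # 4 GB per movie, roughly a standard-definition rip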

The MovieStation has a U.S.B. port to connect it to PCs or Macintosh computers without the need for software drivers. The onboard software automatically reads MP3, WMA and WAV files along with MPEG-formatted video files and displays an on-screen menu for browsing. The remote control and front control buttons allow selection of tracks and movies, and the component output supports 1080i HD video. The drive also stores and displays digital photos.
http://www.nytimes.com/2007/05/24/te...y/24drive.html





Fujitsu's H.264 Chip Encodes/Decodes in Full HD -- a World's First
Thomas Ricker

Fujitsu just announced the world's first H.264 chip capable of encoding/decoding 1920 x 1080 (60i/50i) video in real time. The chip features 256MB of onboard FCRAM and an ultra-low 750mW power draw when encoding video. That means lickety-quick, MPEG-2 quality processing with only a third to half the required storage. The ¥30,000 ($247) MB86H51 chip is available to OEMs starting July 1st, after which you'll find it bunged into the latest up-scale, consumer-class video recorders.
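To put the storage claim in rough perspective, here is a back-of-the-envelope comparison; the bitrates are typical broadcast figures assumed for illustration, not numbers quoted by Fujitsu:

# Illustrative only: typical bitrates, not figures quoted by Fujitsu.
mpeg2_hd_mbps = 18                   # a common MPEG-2 HD broadcast bitrate
h264_hd_mbps = mpeg2_hd_mbps / 3     # roughly a third of the bitrate for similar quality

def gb_per_hour(mbps):
    return mbps * 3600 / 8 / 1000    # megabits per second -> gigabytes per hour

print(gb_per_hour(mpeg2_hd_mbps))    # ~8.1 GB/hour for MPEG-2
print(gb_per_hour(h264_hd_mbps))     # ~2.7 GB/hour for H.264 at a third of the rate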
http://www.engadget.com/2007/05/21/f...-a-worlds-fir/





Ritek Set to Mass Produce Rewritable Blu-Ray Discs

The company will also mass produce HD DVD-RE
Dan Nystedt

Taiwanese disc maker Ritek Corp. plans to start mass producing BD-RE (Blu-Ray Disc Rewritable) discs as well as HD DVD-RE (high definition) discs in the third quarter of this year, a small but important step toward reducing the cost of such discs for users.

A handful of Taiwanese companies dominate the disc mass production business, including Ritek and rival CMC Magnetics Corp. These companies license disc technology from developers and then spin out as many discs as they can in a bid to drive down the cost of each disc and earn as much revenue as possible.

Initially, however, BD-RE and HD DVD-RE discs will be pricey. The average cost per disc will remain around $10 in retail outlets, despite production costs of around $5 per disc, said Eric Ai, a Ritek representative. Prices won't likely come down until other mass disc producers in Taiwan win accreditation to make the discs, and ramp up volumes.

Each single-layer BD-RE disc has a capacity of 25GB, enough to hold three hours of terrestrial digital high-definition TV, or six hours of standard TV. HD DVD-RE discs can hold around 20GB of data, while DVD discs hold 4.7GB.
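A rough calculation shows what those capacities imply in terms of average bitrate (illustrative arithmetic only, not figures supplied by Ritek):

# Rough implied bitrates from the quoted disc capacities; illustrative only.
def avg_mbps(capacity_gb, hours):
    return capacity_gb * 8 * 1000 / (hours * 3600)   # GB -> average megabits per second

print(avg_mbps(25, 3))   # ~18.5 Mbps, in line with terrestrial HD broadcasts
print(avg_mbps(25, 6))   # ~9.3 Mbps for standard-definition TV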
http://www.computerworld.com/action/...&intsrc=kc_top





Copying HD DVD and Blu-Ray Discs May Become Legal
Jeremy Kirk

Under a licensing agreement in its final stages, consumers may get the right to make several legal copies of HD DVD and Blu-ray Disc movies they’ve purchased, a concession by the movie industry that may quell criticism that DRM (digital rights management) technologies are too restrictive.

The agreement, if supported by movie studios and film companies, could allow a consumer to make a backup copy in case their original disc is damaged and another copy for their home media server, said Michael Ayers, a representative of an industry group that licenses the AACS (Advanced Access Content System) copy-prevention system.

AACS is used on HD DVD and Blu-ray discs, the new high-definition DVD formats, to prevent unauthorized copying of the discs.

The concept, called “managed copy,” would undercut one of the strongest arguments against DRM technology, which critics say deprives buyers of their legal right to fair uses such as moving their content to other digital systems and devices.

The licensing agreement is under negotiation between the AACS Licensing Administrator, which Ayers represents, and companies using AACS technology, including filmmakers. AACS LA members include Sony, IBM, The Walt Disney Co., Warner Bros. and Microsoft.

AACS LA is pushing the studios to support managed copy and offer consumers the option of making at least one copy, Ayers said.

“We want to be able to maximize the number of movies that are able to be offered,” he said.

The idea is that the content companies could charge a premium according to how many copies are allowed, Ayers said. It remains a possibility that consumers, if given the chance to make three copies of “Spider-Man 2,” could give those copies to their neighbors, which technically would qualify as low-volume piracy.

But AACS LA believes that movie studios will see higher sales with the managed copy option, even with the chance it could be abused, Ayers said. “Studios will have to take that into account when they select pricing,” Ayers said.

On the technology side, a system of servers, run by the studios or third parties, could enable the authorization of copies. Newly-minted discs could be prevented from further copying by employing DRM technology from companies such as Microsoft, Ayers said.

AACS LA is now working out what rights studios and film companies would have under the complex licensing agreement. “We are optimistic that the studios will see this as a benefit that will drive sales,” Ayers said.
http://www.macworld.com/news/2007/05...php?lsrc=mwrss





Sony Sued Over Blu-Ray Disc-Coating Patent
GamePro Staff

A California company has filed a lawsuit alleging that Sony's method of coating optical discs infringes upon a patent involving silver-based metal alloys.
Eugene Huang

It seems that Sony is once again on the receiving end of a lawsuit involving patent infringement. According to a recent report from IPLaw360, three divisions of Sony Corp. were named as defendants in a suit filed last Wednesday, and will be brought to court over the technology behind the creation of the company's Blu-ray discs.

Irvine, CA-based Target Technology Co. LLC alleges that Sony has directly infringed on a patent originally issued on March 28, 2006. The technology behind this patent involves coating optical discs with a thin film of silver-based alloy, which makes them more reflective and less susceptible to corrosion. Target Technology founder Han Nee claims to have personally developed this technology, which is currently used on the majority of DVDs in production today.

Target's patent in this particular case, entitled "metal alloys for the reflective or the semi-reflective layer of an optical storage medium", has been filed under U.S. Patent Number 7,018,696.

The suit, which asks for damages and an injunction to prevent Sony from infringing on the patent further, has so far named three Sony subsidiaries as defendants: Sony DADC U.S. Inc., Sony Computer Entertainment America Inc., and Sony Pictures Entertainment Inc. A Sony spokeswoman declined to offer a statement on the grounds that the company does not comment on pending litigation.

This is the second high-profile case in recent memory involving patent infringement related to gaming technology. In 2002, Immersion Corp. brought both Sony and Microsoft to court over infringements of the company's touch-feedback technology. Sony and Immersion agreed to a settlement earlier this year.

Target is also involved with two other pending lawsuits in the federal court systems in New York and California. According to IPLaw360's report, the defendants in those suits are Williams Advanced Materials Inc. and its disc-manufacturing customers, who have allegedly infringed on ten different patents.
http://www.gamepro.com/news.cfm?article_id=113708





CSS of DVDs Ruled 'Ineffective' by Finnish Courts

The CSS protection used in DVDs has been ruled "ineffective" by Helsinki District Court. This means that CSS is not covered by the Finnish copyright law amendment of 2005 (based on EU Copyright Directive from 2001), allowing it to be freely circumvented. Quoting the press release: "The conclusions of the court can be applied all over Europe since the word effective comes directly from the directive ... A protection measure is no longer effective, when there is widely available end-user software implementing a circumvention method. My understanding is that this is not technology-dependent. The decision can therefore be applied to Blu-Ray and HD-DVD as well in the future."
http://yro.slashdot.org/article.pl?sid=07/05/25/1653209





E.U. Probes Google Over Data Retention Policy
Kevin J. O’Brien and Thomas Crampton

Google has been warned that it may be violating European Union privacy laws by storing search data from its users for up to two years, the latest example of a United States technology giant whose practices face a collision with European standards.

An advisory panel of data protection chiefs from the 27 countries in the European Union sent a letter last week to Google asking it to justify its policy of retaining data on Internet addresses and individual search habits, Friso Roscam Abbing, a spokesman for the European Union’s justice commissioner, Franco Frattini, said today.

Privacy experts said the letter was the first salvo in what could become a determined effort by the European Commission to force Google to change how it does business in Europe, whose 400 million consumers outnumber those in the United States.

Any effort to impose limits on Google, which operates under United States law, would be the latest in a series of increasingly aggressive actions taken by European policy makers to rein in global technology companies.

Mr. Frattini called the working group’s query to Google “pertinent, appropriate and legitimate,” Mr. Abbing said.

According to one member, who spoke on the condition of anonymity because he was not authorized to speak for the group, the panel is concerned that Google’s retention period is too long and is designed to serve commercial interests. The data is often used to direct advertising to users.

“The discussion is only just beginning,” said Christoph Gusy, a privacy law expert at the University of Bielefeld in Germany. “The pressure to regulate this type of business activity, which is still in its infancy, is building, and what you are seeing is the beginning of a serious effort in Europe.”

Google described the committee’s request as reasonable. It noted that the company itself raised the issue with European officials in March by announcing that it was shortening the retention of customer data, which had previously been unlimited, to up to two years. Other large search engines like Yahoo and MSN Search have not disclosed how long they keep data.

“There can be reasonable arguments for and against keeping server logs for this length of time,” said Peter Fleischer, Google’s global privacy counsel. “But we believe that between 18 and 24 months is a reasonable length of time to balance privacy issues with business concerns.”

In a letter to be sent to the E.U. panel, Google will argue that the retention periods are necessary to ward off hackers and prevent Internet advertising fraud, and to improve Google’s search algorithm, Mr. Fleischer said.

The panel plans to meet on June 19 in Brussels to consider Google’s response.

The most prominent E.U. case against a United States technology giant focused on Microsoft, which the European Commission found in 2004 to have violated antitrust laws for using its de facto monopoly Windows operating system to promote its own server software and desktop media players. The company settled a similar case with American antitrust regulators in 1994.

Microsoft is appealing the commission’s decision. In the meantime, the commission and European competition officials have fined the company nearly $1 billion for failing to comply with the terms of its original order.

Some European nations are also challenging the operating practices of global technology companies, with success. In Britain, eBay this year agreed to modify its servers after Britain’s information commissioner, Richard Thomas, complained that customers were not able to easily close accounts and wipe out trading logs.

Simon Davies, the director of Privacy International, a London-based advocacy group, said that Google was a leading target of complaints received by his organization last year.

Of the 10,000 complaints made to the group in 2006, 2,000 involved Internet-related activities. And of those, 96 percent were about Google and its practice of retaining customer data, Mr. Davies said.

“The E.U.’s action is the first shot in a long, potentially bloody battle with Google,” he added. “There is definitely a perception that something is amiss with Google.”

How far European officials can go to force Google to change its data retention policies remains unclear. The European Union’s Data Retention Directive, which takes effect on Sept. 1, requires all telecommunications companies and Internet service providers to retain traffic data on users for up to two years.

But E.U. law, according to Mr. Gusy, the German data retention expert, is silent on whether to apply the same limits, which are intended to combat terrorism, to content providers or search engines.

Mr. Abbing, the justice commissioner’s spokesman, said the commission could compel E.U. members to enforce the law.

How that could be used to challenge Google, whose business takes place largely in the Internet universe, was unclear. The data retention panel member said his group was hoping with its letter to persuade Google to voluntarily narrow its retention periods.

But François Bourdoncle, the chief executive of the French search engine Exalead, said European countries have the right to impose their own standards on the collection of Internet data.

“I think it is fair for the state to place boundaries around what a company may do with your private data,” Mr. Bourdoncle said. “We follow the very strict French privacy law that prevents us from storing any personal information that can be traced back to the individual.”

Mr. Bourdoncle said that the growing range of services offered by Google and other search portals posed an increasing threat to privacy. “By offering services from e-mail to search, they can easily build a complete profile of your entire digital life,” he said. “It is worrisome how much they can know simply by correlating all the information they collect on you.”

Yahoo declined to comment on the working group but said user trust is of high importance to the company.

Alex Laity, a spokesman, said Yahoo does not have a single policy on the issue. “Our data retention practices vary according to the diverse nature of our services,” he said in an e-mail.

Mr. Fleischer of Google noted that his company was the first of its kind to voluntarily cut retention periods, not just in Europe but around the world.

“We started this privacy dialogue precisely because we think it is something that needs to be further discussed,” he said.
http://www.iht.com/articles/2007/05/...ess/google.php





Google Proposes Innovation in Radio Spectrum Auction
John Markoff

Google filed a proposal on Monday with the Federal Communications Commission calling on the agency to let companies allocate radio spectrum using the same kind of real-time auction that the search engine company now uses to sell advertisements.

Executives at Google, which is based in Mountain View, Calif., said that the company had no plans to bid in the closely watched sale of a swath of broadcast spectrum scheduled for February 2009 as part of the nation’s transition to digital broadcast television.

The company, the world’s dominant search engine, has, however, become an active participant in the debate over the control of access to broadband digital networks because it wants to create more competition among digital network providers like cable companies and Internet service providers.

The Google filing comes two days before a deadline for public comments set in an F.C.C. rule-making procedure for the sale of spectrum in the 700 MHz band, now largely used by UHF television broadcasters.

The agency is planning to set the rules for its auction this year as potential bidders, including telephone, cable and satellite operators — as well as potential consortiums interested in creating new next-generation digital wireless networks — jockey for position. Several groups of bidders hope to use the spectrum to create a new nationwide digital wireless network that would serve as an alternative broadband channel to businesses and consumers, competing with existing telephone and cable providers.

“The driving reason we’re doing this is that there are not enough broadband options for consumers,” said Adam Kovacevich, a spokesman for Google’s policy office in Washington. “In general, it’s the belief of a lot of people in the company that spectrum is allocated in an inefficient manner.”

In their proposal, Google executives argue that permitting companies to resell the airwaves in a real-time auction would make it possible to greatly improve spectrum use and simultaneously create a robust market for innovative digital services. For instance, a company could resell its spectrum on an as-needed basis to other providers, the executives said in their formal proposal to the federal agency.
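As a toy illustration of how such on-demand reselling might work, here is a generic second-price auction sketch in Python; it is purely hypothetical and is not Google's actual proposal or any F.C.C. rule:

# Toy sketch of a real-time auction for a block of spectrum in one area.
# Generic second-price logic for illustration only; not Google's proposal or an FCC rule.

def run_auction(bids):
    """bids: dict of bidder -> offered price per MHz for the next time slot.
    Returns (winner, price paid), with the winner paying the second-highest bid."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else top
    return winner, price

bids = {"carrier_a": 0.42, "isp_b": 0.55, "broadcaster_c": 0.30}
print(run_auction(bids))   # ('isp_b', 0.42): isp_b wins, pays the runner-up's bid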

F.C.C. auction methods used in the past have been criticized because they required advance payments, leaving companies with less of the money needed to build infrastructure and resulting in fewer benefits to consumers in the way of advanced telecommunications services.

“In Google’s view, many of these thorny problems would be alleviated by a more open and market-driven spectrum access policy,” they wrote.

The Google proposal will be endorsed this week by one of the consortiums that is planning to bid in the spectrum auction: Frontline Wireless, an investor group founded by Reed E. Hundt, a former F.C.C. commissioner, with a number of Silicon Valley venture capitalists including the Google investors L. John Doerr and Ram Shriram.

“I’m hoping we treat spectrum as a scarce renewable resource which should be used for the common good of the consumer and to make available the most innovative devices that can connect to those consumers,” Mr. Shriram said.

Mr. Hundt said in an e-mail message: “We propose that one quarter of the capacity of the network that uses this spectrum must be sold not in a long-term service contract but instead in ongoing open auctions to any and all comers.”

The proposal is for the wholesale auction of spectrum. However, in the future such a system might require that advanced computing technology be built into wireless handsets and computers to automate the auction bidding process and permit it to take place without users noticing. The Google proposal states that such a system would reduce retail prices for wireless spectrum and extend Internet access into rural areas not now served by existing providers.

One significant issue in the debate is whether the F.C.C. will be able to meet a mandate in the digital television law calling for reallocation of the frequencies to public safety organizations while simultaneously making spectrum available for commercial applications.
http://www.nytimes.com/2007/05/22/te.../22google.html





New Superfast Wireless Broadband Device Prototype Submitted to FCC
Eric Bangeman

While the Federal Communications Commission moves ahead with planning for the upcoming 700MHz spectrum auction, the White Space Coalition has submitted a second prototype white space wireless broadband device to the FCC for testing. White space devices could use the so-called white space in the current analog television spectrum (54MHz to 698MHz) to deliver wireless broadband service. Former FCC chief engineer Edmond Thomas (and current technology policy advisor for the law firm of Harris, Wiltshire & Grannis, which is representing the Coalition) told Ars that he believes white space broadband could deliver download speeds of up to 80Mbps, which would make it extremely competitive with fiber-to-the-premises solutions like Verizon's FiOS networks.

The newest white space prototype is manufactured by Philips Electronics of North America and consists of a TV tuner, a digital processing board, and a PC which provides the UI, control, and signal processing. It's proof-of-concept hardware intended to demonstrate that it's possible to sense the presence of TV signals and transmit wireless IP data in a way that does not interfere with TV. According to an FCC filing seen by Ars Technica, the new prototype is capable of picking up analog and digital television signals as well as wireless microphone signals (which operate in the same part of the spectrum). It works similarly to the Microsoft-manufactured spectrum sensing device submitted earlier this year. Microsoft also submitted a transmission device to the FCC for testing which will be used to show that white space broadband transmissions won't interfere with TV signals.

There are a few screenshots in the FCC submission, one of which is reproduced above. It's quite simple: the user selects a type of signal to scan for, and the application shows the results. If the sensing module picks up television transmission on a particular channel, then that part of the spectrum will not be used for white spaces broadband in that particular area.
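A minimal sketch of that channel-selection logic might look like the following; the channel data, threshold and function names are hypothetical, since the prototype's actual software is not described in the filing:

# Minimal sketch of white-space channel selection: scan each TV channel,
# and only consider it for broadband use if no incumbent signal is sensed.
# The channel data and threshold below are hypothetical, for illustration only.

SENSE_THRESHOLD_DBM = -114   # an assumed detection threshold, not the FCC's figure

def free_channels(scan_results):
    """scan_results: dict of channel number -> strongest sensed signal in dBm
    (TV broadcasts or wireless microphones). Returns channels considered free."""
    return [ch for ch, level in scan_results.items() if level < SENSE_THRESHOLD_DBM]

scan = {21: -60.0, 22: -130.0, 23: -118.0, 24: -95.0}
print(free_channels(scan))   # [22, 23]: channels with no detectable incumbent signal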

The goal of the White Space Coalition is simple: take advantage of unused television spectrum to provide wireless broadband. Although analog television transmissions will cease in February 2009, digital TV signals will continue to use the spectrum between 54MHz and 698MHz. That is a highly desirable chunk of spectrum because the signals can easily pass through walls and other solid objects, giving them a much greater reach than WiFi or even WiMAX, both of which operate in higher frequency bands.

Television broadcasters have vigorously opposed the usage of the white spaces, citing fears that wireless broadband will interfere with TV signals. The current round of FCC testing is designed to ensure that the prototype white space broadband devices don't cause any interference problems at all. "Like the personal/portable prototype devices previously submitted by Microsoft on the Coalition's behalf, the Philips prototype is designed to demonstrate that operating parameters set forth by the Coalition... will provide incumbent licensees in the television bands with the interference protection to which they are entitled," reads the FCC filing.

The White Space Coalition is comprised of Dell, EarthLink, Google, HP, Intel, Microsoft, and Philips Electronics. The FCC should conclude its testing of the white space broadband prototypes in July and the first rules governing the use of the spectrum by wireless broadband devices should be released in October 2007. Once that happens, the IEEE will likely begin the work of standardizing the tech. If all goes as planned, white space broadband service could begin in the US as soon as February 2009.
http://arstechnica.com/news.ars/post...ed-to-fcc.html





Michigan Man Arrested for Using Cafe's Free WiFi From His Car
Jacqui Cheng

A Michigan man is being prosecuted for using a cafe's free WiFi... from his car. Sam Peterson was arrested under a Michigan law barring access to anyone else's network without authorization, according to Michigan TV station WOOD. Since the cafe's WiFi network was reserved for customers, and Peterson never came into the cafe, he was essentially piggybacking off of the open network without authorization.

The arrest came about because Peterson apparently showed up to the Union Street Cafe to use its free WiFi from the comfort of his car, and he did so every single day. A police officer grew suspicious of Peterson and eventually questioned him as to what he was up to. Peterson, not realizing that what he was doing was (at least) ethically questionable, told the officer exactly what he was doing. "I knew that the Union Street had WiFi. I just went down and checked my e-mail and didn't see a problem with that," Peterson told a reporter.

Under Michigan's "Fraudulent access to computers, computer systems, and computer networks" law, Peterson's actions could result in a five-year felony and a $10,000 fine. However, prosecutors do not plan to throw the book at him, as they don't believe that Peterson was aware he was even breaking the law. Instead, he will pay a $400 fine and do 40 hours of community service, and the arrest will not go on his record.

Coincidentally, the cafe owner that Peterson was leeching WiFi off of didn't even realize that what Peterson was doing was a crime at the time. Neither did the police officer. "I had a feeling a law was being broken, but I didn't know exactly what," Sparta police chief Andrew Milanowski told the TV station.

This is not the first time someone has been arrested for piggybacking on a WiFi connection. In 2005, a Florida man was arrested and hit with a third-degree felony for surfing an open WiFi network from his SUV. Similarly, an Illinois man was arrested in 2006 for, again, using an unsecured WiFi network from his car. He pleaded guilty to the charges and was given one year's court supervision and a $250 fine. A Washington man was also arrested in 2006 for parking outside of a coffee shop and using the open WiFi connection without purchasing anything. And just earlier this year, an Alaska man was arrested for using the WiFi network from the public library after hours to play games from—you guessed it—his car in the parking lot.

Whether or not you agree with the legality of using an open WiFi network without the owner's authorization, one thing is painfully clear: if you're going to leech, try not to do it from a parked car right in front of the building.
http://arstechnica.com/news.ars/post...m-his-car.html





Wi-Fi: a Warning Signal

Britain is in the grip of a Wi-Fi revolution with offices, homes and classrooms going wireless - but there is concern the technology could carry health risks.

The Government insists Wi-Fi is safe, but a Panorama investigation shows that radio frequency radiation levels in some schools are up to three times the level found in the main beam of intensity from mobile phone masts.

There have been no studies on the health effects of Wi-Fi equipment, but thousands on mobile phones and masts.

The radiation Wi-Fi emits is similar to that from mobile phone masts. It is an unavoidable by-product of going wireless.

In the last 18 months another two million of us in the UK have begun using Wi-Fi.

Entire cities have become what are known as wireless hotspots.

Precautionary approach

In 2000, Sir William Stewart, now chairman of the Health Protection Agency, headed the government's inquiry into the safety of mobile phone masts and health. He felt the scientific research was sufficient to apply a precautionary approach when siting masts near schools.

During that same year, the government sold off the 3G licences for £22.5bn.

Sir William recalls: "We recommended, because we were sensitive about children... that masts should not necessarily impact directly on areas where children were exposed, like playgrounds and that."

But what about Wi-Fi? The technology is similar to that used in mobile phone masts and is in use in 70 per cent of secondary schools and 50 per cent of primary schools.

Panorama visited a school in Norwich, with more than 1,000 pupils, to compare the level of radiation from a typical mobile phone mast with that of Wi-Fi in the classroom.

Readings taken for the programme showed the peak signal strength to be three times higher in the school classroom using Wi-Fi than in the main beam of radiation intensity from a mobile phone mast.

The findings are particularly significant because children's skulls are thinner and still forming and tests have shown they absorb more radiation than adults.

Safety limits

The readings were well beneath the government's safety limits - as much as 600 times below - but some scientists suspect the whole basis of our safety limits may be wrong.

Panorama spoke to a number of scientists who questioned the safety limits and were concerned about the possible health effects of such radiation.

"If you look in the literature, you have a large number of various effects like chromosome damage, you have impact on the concentration capacity and decrease in short term memory, increases in the number of cancer incidences," said Professor Olle Johansson of the Karolinska Institute in Sweden.

Another scientist, Dr Gerd Oberfeld, from Salzburg is now calling for Wi-Fi to be removed from schools.

He said: "If you go into the data you can see a very very clear picture - it is like a puzzle and everything fits together from DNA break ups to the animal studies and up to the epidemiological evidence; that shows for example increased symptoms as well as increased cancer rates."

The clear advice from Sir William Stewart to the government on mobile phone masts was that the beam of greatest intensity should not fall on any part of the school grounds, unless the school and parents agreed to it.

Yet the levels tested in the classroom from Wi-Fi were much higher - three times the highest level of the mast.

Panorama contacted 50 schools at random - and found not one had been alerted by the government to any possible health effects.

Philip Parkin, general secretary of the Professional Association of Teachers said: "I think schools and parents will be very worried about it...

"I am asking schools to consider very seriously whether they should be installing Wi-Fi networks now and this will make them think twice or three times before they do it.

"I think the precautionary approach doesn't seem to have worked because it is being rolled out so rapidly...

"It's a bit like King Canute. We can't stop the tide and I am afraid if schools are told that there is a serious health implication for having these networks in schools, it is going to be a very serious matter to say to schools, you have to switch them off."

Low power

At the University of Washington, Professor Henry Lai, a biologist respected by both sides of the argument, says he has found health effects at levels of radiation similar to Wi-Fi's.

He estimates that of the two to three thousand studies carried out over the last 30 years, there is a 50-50 split - half finding an effect with the other half finding no effect at all.

But the Health Protection Agency has said Wi-Fi devices are of very low power - much lower than mobile phones.

The Government says there is no risk and is backed up by the World Health Organisation which is robust in its language saying there are "no adverse health effects from low level, long-term exposure".

The scientist responsible for WHO's position is Dr Mike Repacholi, who headed up the health organisation's research programme into radio frequency radiation.

He was also the founder of the International Commission on Non-Ionizing Radiation Protection (ICNIRP).

He said the statement of "no adverse health effects" was based on the weight of evidence.

For a health effect to be established, it must have been repeated in a number of laboratories using very good study techniques. The findings of any published studies had been put in the mix before reaching a conclusion, he said.

"It is called a weight of evidence approach - and if that weight of evidence is not for there being an effect or not being an effect that is the only way you can tell whether there really is an adverse health effect," he said.


Wi-Fi: a warning signal, Panorama, Monday, 8.30pm, BBC1.
http://news.bbc.co.uk/go/pr/fr/-/2/h...ma/6674675.stm





Blinding them with science

Wi-Fi Wants To Kill Your Children
Ben Goldacre

Won’t somebody, please, think of the children? Three weeks ago I received my favourite email of all time, from a science teacher. “I’ve just had to ask a BBC Panorama film crew not to film in my school or in my class because of the bad science they were trying to carry out,” it began, describing in perfect detail the Panorama which aired this week.

[ you’ll need to skip through the last two minutes of Eastenders to watch it…]

This show was on the suppressed dangers of radiation from Wi-Fi networks, and how they are harming children. There was no science in it, just some “experiments” they did for themselves, and some conflicting experts. Panorama disagreed with the WHO expert, so he was smeared for not being “independent” enough, and working for a phone company in the past. I don’t do personal smear. But Panorama started it. How independent were they, and the “experiments” they did?

They had 28 minutes, I have under 700 words. Here we go. In the show, you can see them walking around Norwich with a special “radiation monitor”. Radiation, incidentally, is their favourite word, and they use it 30 times, although Wi-Fi is “radiation” in the same sense that light is.

“Ooh, it’s well into the red there,” says reporter Paul Kenyon, holding up the detector (19 minutes in). Gosh, that sounds bad. Well into the red on what? It’s tricky to calibrate measurements, and to decide what to measure, and what the cut-off point is for “red”. Panorama’s readings were “well into the red” on “The COM Monitor”, a special piece of detecting equipment designed from scratch and built by none other than Alasdair Philips of Powerwatch, the man who leads the campaign against WiFi. His bespoke device is manufactured exclusively for Powerwatch, and he will sell one to you for just £175. Alasdair decided what “red” meant on Panorama’s device. So not very independent then.

Panorama did not disclose where this detector came from. And they know that Alasdair Philips is no ordinary “engineer doing the readings”, because they told us in the show, but they didn’t tell the school that. “They wanted to take some measurements in my classroom, compare them to the radiation from a phone mast and film some kids using wireless laptops. They introduced ‘the engineer’, whom I googled.”

He found it was the same man who runs Powerwatch, the pressure group campaigning against mobile phones, Wi-Fi, and “electrosmog”. In Alasdair’s Powerwatch shop you can buy shielded netting for your windows at just £70.50 per metre, and special shielding paint at £50.99 per litre. To paint a small eleven foot square bedroom in your house with Powerwatch’s products you would need about 10 litres, costing you £500.
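
That “about 10 litres, costing you £500” figure is easy to sanity-check. Here is a minimal Python sketch of the arithmetic, assuming an 11 ft x 11 ft room with an 8 ft ceiling, walls and ceiling painted with two coats, and a nominal coverage of around 10 square metres per litre per coat; the coverage, ceiling height and number of coats are assumptions for illustration, not Powerwatch’s specification.

```python
# Rough check of the "about 10 litres, costing £500" claim.
# Assumptions (for illustration only): 11 ft x 11 ft room, 8 ft ceiling,
# walls plus ceiling, two coats, ~10 m^2 covered per litre per coat.

FT_TO_M = 0.3048
PRICE_PER_LITRE = 50.99        # from the article
COVERAGE_M2_PER_LITRE = 10.0   # assumed nominal coverage per coat
COATS = 2                      # assumed

side = 11 * FT_TO_M
height = 8 * FT_TO_M

wall_area = 4 * side * height              # four walls
ceiling_area = side * side                 # ceiling
total_area = (wall_area + ceiling_area) * COATS

litres = total_area / COVERAGE_M2_PER_LITRE
cost = litres * PRICE_PER_LITRE

print(f"Area to cover: {total_area:.1f} m^2")
print(f"Paint needed: about {litres:.1f} litres, costing roughly £{cost:.0f}")
```

Depending on the coverage assumed, the total lands somewhere between roughly £450 and £560, which is consistent with the article’s “about £500”.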

When the children saw Alasdair’s Powerwatch website, and the excellent picture of the insulating mesh beekeeper hat that he sells (£27) to “protect your head from excess microwave exposure”, they were astonished and outraged. Panorama were calmly expelled from the school.

So what about Panorama’s classroom experiment? Not very independent, and not very well designed, as the children pointed out. “They set about downloading the biggest file they could get hold of – so the Wi-Fi signal was working as powerfully as possible - and took the peak reading during that,” says our noble science teacher. It was a great teaching exercise, and the children made valuable criticisms of Panorama’s methodology, such as “well, we’re not allowed to download files so it wouldn’t be that strong”, “only a couple of classes have wifi”, and “we only use the laptops a couple of times a week”.

Panorama planned to have the man from Powerwatch talk to the students for about 10 minutes about how Wi-Fi worked, and what effects it had on the human body. Then they were going to reveal the readings he had got from the mast, compare them to what Powerwatch had measured in the classroom, and film the kids’ reaction to the news. So not very independent then.

“Surprisingly enough the readings in my room were going to be higher (about 3 times higher I believe) and with the kids having been briefed by the engineer from Powerwatch first they were hoping for a reaction that would make good telly.” Sadly for them it didn’t happen. “We told Panorama this morning that as they hadn’t been honest with us about what was going on and because of the bad science they were trying to pass off, we didn’t want them to film in the school or with our students.”

The images you see of children in the show are just library footage. I’m sure there should be more research into Wi-Fi. If Panorama had made a 28 minute show about the scientific evidence, we would be discussing that. Instead they produced “radiation” scares, and smears about whether people are “independent”. People in glass houses are welcome to throw stones, at their own risk.

A BBC Spokesperson said: “Alistair Phillips is one of a handful of people with the right equipment to do this test. He was only used in this capacity and was not given an opportunity to interpret the readings let alone campaign on them in the film. We filmed the tests taken at the school and didn’t return.”

Please send your bad science to bad.science@guardian.co.uk

More:

There’s some more general criticism of the program and a response here, and if you were going to make a complaint, you might be disappointed by the response, since it was written before you complained.

To me this is a very uncomplicated situation of heinous scaremongering and bias. If they really had wanted to measure exposure in classrooms, for example, they could have simply taken some readings up close, a metre away from a laptop (some while it was downloading hard, some while it wasn’t), and lots of ambient measurements from around the room, and combined them. This would have provided a meaningful, naturalistic, real world figure describing what a child is actually exposed to during a day. I can’t see any sense in measuring anything other than that.
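
A minimal sketch of the kind of combined, naturalistic measurement described above might look like the following; every reading and duration is a made-up placeholder, purely to show how spot readings near the laptop (downloading and idle) and ambient readings around the room would be weighted by the time a child actually spends in each situation.

```python
# Illustrative sketch of the naturalistic exposure estimate described above.
# Every reading and duration below is a made-up placeholder, not a measurement.

# (reading in arbitrary power-density units, hours per school day spent like this)
samples = {
    "1m_from_laptop_downloading": (5.0, 0.25),  # brief bursts of heavy use
    "1m_from_laptop_idle":        (1.0, 1.75),  # laptop on but mostly idle
    "ambient_around_room":        (0.2, 4.00),  # rest of the day, no laptop nearby
}

total_hours = sum(hours for _, hours in samples.values())
weighted_sum = sum(reading * hours for reading, hours in samples.values())

daily_average = weighted_sum / total_hours
peak_reading = max(reading for reading, _ in samples.values())

print(f"Time-weighted average exposure: {daily_average:.2f} (arbitrary units)")
print(f"Peak-only reading: {peak_reading:.2f} (arbitrary units)")
```

The point of such a time-weighted figure is that it will sit far below the peak reading taken during a deliberately forced download, which is what Panorama reported.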

Instead, while throwing around accusations of other people being biased, to produce a scare, Panorama - quite unnecessarily - took an “electrosmog” pressure group campaigner, let him decide what to measure, how, where, and with what equipment. They completely failed to come clean on this. The reality is, the producers probably didn’t even know what they were having measured. They say it was because there was nobody else to ask: a nation of engineers reaches for another beer.

And that’s just looking at those parts of the program.

There is the issue of Panorama’s other experts, like associate professor Olle Johansson, awarded Misleader Of The Year 2004 in his native country.

But there are far bigger issues, and ones where Panorama were unambiguously scurrilous. They spent a long time covering “electrosensitivity”. There are over 30 double-blind studies of people who believe that their symptoms, such as dizziness and headaches, are caused by immediate exposure to electromagnetic signals: essentially these studies all show that sufferers cannot tell when a source of signal is present or absent (full story and references here).

But there was no mention of these studies in Panorama. Instead they showed us just one subject in an unfinished, unpublished study. Why? Apparently she has guessed whether the signal is on or not, correctly, 2/3 of the time. Is that statistically significant? What about the other subjects in the study? It’s meaningless: it’s an anecdote dressed up as science with some pictures of some measuring equipment.
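
Whether two-thirds correct means anything depends entirely on how many trials there were, which the programme did not report. A quick one-sided binomial test against chance, run for a few hypothetical trial counts (the trial numbers below are assumptions, since no figure was given), shows how little a two-thirds hit rate means with a small sample:

```python
# One-sided binomial test of "guessed correctly 2/3 of the time" against
# chance (p = 0.5), for a few hypothetical numbers of trials.
from math import comb

def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """P(X >= successes) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

for trials in (6, 30, 90):          # hypothetical: the programme gave no number
    successes = round(trials * 2 / 3)
    pval = binomial_p_value(successes, trials)
    print(f"{successes}/{trials} correct: one-sided p = {pval:.3f}")
```

With only a handful of trials, a two-thirds hit rate is entirely compatible with guessing; it only starts to look significant once the number of trials gets into the dozens.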

It will be very interesting if the results of this study overall are positive, and it will be very interesting to try and understand why theirs were positive, given that over thirty other studies were negative. If the Essex results are negative, will Panorama broadcast that too? I’d guess “no”, and here’s just one reason why.

Then they talk about how the Swedish government officially “recognises” electrosensitivity. They praise Sweden for paying for special paint (like that sold by Alasdair at Powerwatch at £50.99 a litre).

But in fact Sweden simply pays disability benefits for some people who believe they have the condition, in a spirit of compassionate pragmatism (and quite right too). They seem to be making a spectacular fuss about some largely administrative differences in the generous Swedish disability benefits system.

Let’s remember that 13% of Sweden’s working age population claim disability benefits, and the Wall Street Journal reported prominently just two weeks ago that they are cutting back, and specifically on payments for electrosensitivity.

I could go on.

Of course you should be vigilant about health risks. I don’t question that there may be some issues worth sober investigation around Wi-Fi safety. But this documentary was the lowest, most misleading scaremongering I have seen in a very long time.

It gets trashed on BBC24 here:

news.bbc.co.uk/newswatch/ukfs/hi/default.stm

transcript of that program here.

Meanwhile over the past few days badscience.net has been just one small part of the mass destruction in the blogosphere:

www.badscience.net/?p=414

www.badscience.net/?p=415

www.badscience.net/?p=416

www.badscience.net/?p=417

qurl.com/njqhh

www.theregister.co.uk/2007/04/24/open_letter/

http://www.twonilblankblank.com/2007...-as-particles/

wongablog.co.uk/2007/05/22/panorama-on-wi-fi/

http://www.wellingtongrey.net/miscel...s-devices.html

education.guardian.co.uk/schools/story/0,,2084525,00.html

http://www.quackometer.net/blog/2007...arliament.html

news.bbc.co.uk/1/hi/technology/6676129.stm

newsforums.bbc.co.uk/nol/thread.jspa?threadID=6357&&&edition=1&ttl=20070521165820

timworstall.typepad.com/timworstall/2007/05/wifi_is_killing.html

commentisfree.guardian.co.uk/james_randerson/2007/05/why_fear_wifi.html

blog.bibrik.com/archives/2007/05/wifi_fears.html

keithprimaryict.blogspot.com/2007/05/more-on-wi-fi-health-debate.html

p10.hostingprod.com/@spyblog.org.uk/blog/2007/05/bbc_tv_panorama_conflates_wifi_radiation_fears_with_mobile_phone_masts_ignores_handsets_in_schools.html

blogs.guardian.co.uk/technology/archives/2007/05/21/the_dangers_of_wifi_radiation_updated.html

media.guardian.co.uk/broadcast/story/0,,2084219,00.html

newsvote.bbc.co.uk/1/hi/technology/default.stm?dynamic_vote=ON#vote_vote_wifi

www.ts0.com/labels/media.asp

www.qnoodle.com/public/blog/2396

To name just a few.
http://www.badscience.net/?p=418





Wi-Fi and RFID Used for Tracking People
BBC

Wireless tracking systems could be used to protect patients in hospitals and students on campuses, backers of the technology said.

The combination of Radio Frequency Identification (RFID) tags and wi-fi allows real-time tracking of objects or people inside a wireless network.

Angelo Lamme, from Motorola, said tracking students on a campus could help during a fire or an emergency.

"You would know where your people are at any given moment," he said.

Marcus Birkl, head of wireless at Siemens, said location tracking of assets or people was one of the biggest incentives for companies, hospitals and education institutions to roll out wi-fi networks.

Both firms were at The Wireless Event in London this week, selling new products in the area of so-called real-time location services.

Siemens is pushing a complete system, developed with Finnish firm Ekahau, which can track objects or people.

Battery powered

Battery-powered RFID tags are placed on an asset and they communicate with at least three wireless access points inside the network to triangulate a location.

Mr Birkl said: "The tags have a piece of software on them and they detect the signal strength of different access points.

"This information is sent back to the server and it then models the movement of the tag depending on the shift in signal strength detected."

For the system to work, the building or area that has been deployed with a wireless network needs to have been mapped and calibrated.

To effectively locate objects, a wireless access point is needed every 30 metres, and Siemens said it was able to pinpoint assets to within a metre of their actual position.
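
As a very rough illustration of how this kind of signal-strength location can work (this is a generic sketch, not Ekahau's or Siemens' actual algorithm, which relies on the calibrated map of the site mentioned above), a simple path-loss model can turn received signal strength into an estimated distance from each access point, and three or more such distances can then be trilaterated into a position:

```python
# Generic RSSI trilateration sketch (not the vendors' actual method).
# A log-distance path-loss model converts signal strength to distance,
# then a least-squares fit estimates the tag position from 3+ access points.
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Assumed log-distance model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(ap_positions, distances):
    """Linearised least-squares position estimate from >= 3 anchors."""
    (x1, y1), d1 = ap_positions[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(ap_positions[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    (x, y), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x, y

# Hypothetical access points roughly 30 m apart, and the RSSI seen from one tag.
aps = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]
rssi = [-68.0, -75.0, -72.0]

dists = [rssi_to_distance(r) for r in rssi]
print("Estimated position:", trilaterate(aps, dists))
```

In practice the mapping and calibration step described above matters far more than the bare model: real deployments lean on a survey of how signals actually propagate through the building rather than an idealised propagation formula.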

Mr Birkl said: "It's very useful for the health care industry - where there are highly expensive pieces of mobile equipment that move around a hospital.

"At every point in the day health staff need to know where it is."

The system can also be used to track wi-fi equipped devices, such as laptops, tablet PCs and wi-fi enabled phones.

"You can record movements over a period of time. You can see if the security guard in the night makes the right rounds, for example," said Mr Birkl.

He added: "You can set certain boundaries and parameters. If a certain device enters or leaves an area it could trigger an alarm."

'More popular'

As wi-fi becomes more popular in schools, the technology could also be used to track students.

"It has to be aligned with the understanding of the people who are tracked," said Mr Birkl.

There have been privacy concerns expressed in some quarters about RFID tags, especially around the possible use of tags on shopping goods to monitor consumer spending habits.

RFID supporters have pointed out that the tags cannot be read at a great distance, but combining the technology with wi-fi raises the possibility of remote tracking.

Tags on products are typically passive - they have no power source and are only activated when read by a scanner in close proximity. These tags contain only an identifying number and can be small enough to embed in a sheet of paper.

But the tags used in conjunction with a wi-fi network have to be active - they need a power source and have software installed on them that communicates with the wireless access points.

The tags, therefore, are larger, and currently impractical for use on anything other than high-value consumer goods or, potentially, on people.

"There needs to be standards put in place so the data is not abused for other purposes," said Mr Birkl.

He added: "But there are clear benefits to keeping people safe."

More than half of respondents to a recent pan-Europe consultation on RFID said regulations were needed to police the use of tags.
http://news.bbc.co.uk/go/pr/fr/-/2/h...gy/6691139.stm





New Police 'Spy Drone' Takes to Sky

The UK's first police "spy drone" has taken to the skies.

The remote control helicopter, fitted with CCTV cameras, will be used by officers in Merseyside to track criminals and record anti-social behaviour.

The drone is only a metre wide, weighs less than a bag of sugar, and can record images from a height of 500m.

It was originally used for military reconnaissance but is now being trialled by a mainstream police force.

The spy plane was launched as a senior police officer warned that the surveillance society in the UK is eroding civil liberties.

Ian Readhead, deputy chief constable of Hampshire Police, said Britain could face an Orwellian situation with cameras on every street corner.

However, senior officers in Merseyside, who are trialling the drone, said they did not believe it was the next phase in creating a Big Brother society.

Assistant chief constable Simon Byrne said: "People clamour for the feeling of safety which cameras give.

"Obviously there is a point of view that has been expressed but our feedback from the public is anything we can do to fight crime is a good thing.

"There are safeguards in place legally covering the use of CCTV and the higher the level of intrusion, the higher the level of authority needed within the police force to use it. So there is that balance there."
http://www.guardian.co.uk/uklatest/s...649187,00.html





'Super Wardens' Go on Patrol
Alan Salter



PRIVATELY-employed 'super wardens' are to go on patrol in Greater Manchester wearing head-mounted video cameras.

The 20 parking attendants, who work for NCP Services, will be the first in the country to be issued with the equipment.

Their main role is to issue parking tickets but under legislation brought in last year they will also have powers to give on-the-spot fines for anti-social behaviour.

Salford council has asked the wardens to issue penalties of up to £80 for offences which include littering, flyposting and allowing dogs to foul the pavement. NCP will use the film as evidence to back up their wardens if any fine is challenged and also in the event of any attack or abuse.

In some cases the footage could be handed to police and used in court.

The first wardens fitted with the RoboCop-style cameras will go on patrol in Salford from the NCP HQ in Eccles next month.

The use of head-mounted cameras was piloted by British Transport Police in Manchester last year and Greater Manchester Police followed suit seven months ago in Little Hulton, Salford, when two officers began using them on the beat.

Local authorities were given greater powers to tackle anti-social behaviour under the 2006 Clean Neighbourhoods Act, and Salford is one of the first to take advantage of the legislation.

Coun Derek Antrobus said: "We have 20 parking attendants walking around the city and we decided that they might as well look at more than just cars. One of the biggest issues on people's minds is the disrespect that some are showing to our environment. The police have not got the resources when they are chasing criminals so this makes a lot of sense.

"We will be monitoring it very carefully and hopefully the residents of Salford will notice the difference."

NCP's James Pritchard said: "Salford council is very keen to do this and we told them that we were happy for our parking attendants to get involved but they would need a better way of getting evidence.

"The cameras will give a much better standard of evidence in case of disputes or assaults on the attendants.

"We are more than happy to work with the police and pass on any evidence we gather. It can only help them to have people out on the streets with a camera all the time.

"Our attendants do a very good job but they are not police officers and they have very specific powers. It makes the job more interesting."
http://www.manchestereveningnews.co....on_patrol.html





Work Bill Would Create New ID Database
Declan McCullagh

The U.S. Congress is poised to create a set of massive new government databases that all employers must use to investigate the immigration status of current and future employees or face stiff penalties.

The so-called Employment Eligibility Verification System would be established as part of a bill that senators began debating on Monday, a procedure that is likely to continue through June and would represent the most extensive rewrite of immigration and visa laws in a generation. Because anyone who fails a database check would be out of a job, the proposed database already has drawn comparisons with the "no-fly list" and is being criticized by civil libertarians and business groups.

All employers--at least 7 million, according to the U.S. Chamber of Commerce--would be required to verify identity documents provided by both existing employees and potential hires, the legislation says. The data, including Social Security numbers, would be provided to Homeland Security, on penalty of perjury, and the government databases would provide a work authorization confirmation within three business days.

There is no privacy requirement that the federal government delete the information after work authorization is given or denied. Employers would be required to keep all the documentation in paper or electronic form for seven years "and make it available for inspection by officers of the Department of Homeland Security" and the Department of Labor. It would also open up the IRS' databases of confidential taxpayer information to Homeland Security and its contractors.

Even parents who hire nannies might be covered. The language in the bill, called the Secure Borders, Economic Opportunity and Immigration Reform Act (PDF), defines an employer as "any person or entity hiring, recruiting, or referring an individual for employment in the United States" and does not appear to explicitly exempt individuals or small businesses. (Its Senate sponsors did not immediately respond on Monday to queries on this point.)

Backers of the proposal, including the Bush administration and many members of Congress, argue the changes to U.S. law are necessary to combat fraud and to ensure employees are truly eligible to work in the United States. According to an analysis by the Pew Hispanic Center, about 7.2 million undocumented immigrants were working in the United States as of March 2005.

"This bill brings us closer to an immigration system that enforces our laws and upholds the great American tradition of welcoming those who share our values and our love of freedom," President Bush said in his radio address on Saturday.

But the federal government's hardly stellar track record in keeping its databases accurate and secure is prompting an outcry over the verification system. Opponents argue that errors could unwittingly shut out millions of Americans who are actually eligible to work in the United States.

"All the problems that are attendant to the no-fly list are going to be a problem for a nationwide employment eligibility verification system," said Timothy Sparapani, senior legislative counsel for the American Civil Liberties Union. "And that's because the government as a rule is terrible about setting up massive data systems and then conditioning peoples' exercises of rights and privileges on the proper functioning of these databases."

Supporters of a federal verification requirement argue that some states, including North Carolina, Georgia, Colorado, Idaho and Arizona, already require employers to engage in some sort of verification--but Sparapani says they're far less extensive and intrusive.

One well-known example of buggy federal databases can be found in the no-fly list, which is intended to keep known terrorists off commercial airplanes. But it's flagged many other people, including Sen. Ted Kennedy (D-Mass.), for questioning at security checkpoints.

In 1996, Congress enacted a related law colloquially known as the "deadbeat dad database," which required employers to report new hires to the federal government. But unlike the current proposal, the new-hire database did not have the ability to deny employment authorization.

The verification system also would likely create innumerable headaches for employers charged with the task of screening the estimated 146 million Americans in the workforce today.

"It would be one of the most fundamental shifts in employment verification in our generation," said Mike Aitken, director of governmental affairs for the Society for Human Resource Management, which has expressed strong reservations about that portion of the bill.

Groups that favor lower immigration levels, however, applaud the measure. Jessica Vaughn, senior policy analyst for the Center for Immigration Studies, a nonprofit think tank, said the system "is an efficient one, it's an effective one, and it's very easy for employers to use, and I think they can pretty quickly get to a point where everyone can use it."

Privacy worries
Sen. Patrick Leahy, a Vermont Democrat who heads the Judiciary Committee, said on Monday that the immigration proposal was problematic.

"Like the Real ID Act that was forced on the American people outside the normal legislative process, this requirement is yet another example of the administration's consistent denigration of Americans' rights, including the right to privacy," Leahy said. "From America's country stores to our largest corporations, employers will now be de facto immigration officials, and potential employees will be presumed illegal until they prove themselves citizens."

Support for the bill is broader than the Bush administration, however. Prominent Democrats, including Kennedy and Senate Majority Leader Harry Reid of Nevada, have endorsed it, with Kennedy calling it a reasonable "compromise." Reid said "we have the opportunity to pass a law that treats people fairly and strengthens our economy."

Jim Harper, director of information policy studies at the Cato Institute, said he worries that the verification system, or the EEVS, will grow beyond its initial plan.

"The system will migrate to all kinds of new uses," Harper said. "Our pictures will be available for government officials to pull up whenever we deal with them and the federal surveillance infrastructure will grow. Watch for news in a few years of government officials and employers using EEVS to play 'Hot or Not' using drivers license photos."

Hiring or continuing to keep illegal aliens on a company's payroll could carry civil fines for employers ranging from $5,000 to $75,000 per unauthorized employee, and failure to keep the requisite records could carry fines of up to $15,000 per violation. Employers that engage in "a pattern or practice of knowing violations" could receive monetary penalties of up to $75,000 or up to six months in prison.

Another privacy concern, Harper said, is how the proposal opens access to IRS databases. It says that information on "each person who has filed" a tax return after 2005 will be available to Homeland Security and its contractors, who are required to undergo a privacy assessment every three years to ensure that confidential data is not lost, stolen or misused.

"I think it's a horrifyingly bad idea," said James Carafano, a senior fellow at the Heritage Foundation, of the mandatory verification system. "Most people in this country who unlawfully get a job do so through document fraud, which means they get some legitimate data and then they just present it as their own. This system is not going to check that."

The idea of electronically verifying the eligibility of employees to work in the United States is not new, but so far, it has been almost exclusively optional. Some 14,000 U.S. employers--including all Dunkin' Donuts franchises, as highlighted by President Bush last summer--are already enrolled in a voluntary program, called the Employment Eligibility Verification Basic Pilot.

Through that system, employers key data from I-9 employment eligibility forms into an online interface and transmit it to Homeland Security. That department then checks the validity of the person's name, Social Security number, date of birth and citizenship information against Social Security Administration databases and responds, typically within a day or two, with an answer about the applicant's eligibility for work.

Making such a system mandatory is not exactly a new idea. In December 2005, the House of Representatives voted 239-182 for a so-called border security bill that included related--but not identical--requirements for employer verification.

The idea also appeared in the Senate's immigration proposal last year. But a handful of U.S. senators including Chuck Grassley, an Iowa Republican, and Barack Obama, an Illinois Democrat, introduced an amendment, which passed 59-39, that was supported by the ACLU because it would have allowed Americans to sue the government for back pay and attorneys fees if they were wrongfully denied employment through the electronic screening process. The latest bill does not allow such an option.

"We need an electronic verification system that can effectively detect the use of fraudulent documents, significantly reduce the employment of illegal workers, and give employers the confidence that their workforce is legal," Obama said in a statement at the time.

According to a Congressional Budget Office report last year (PDF), similar verification requirements in last year's Senate bill were expected to produce "very few EEVS errors that would lead to compensation for lost wages, particularly for native-born workers." Specifically, the office predicted 10 errors per million inquiries for native-born workers and a 0.4 percent error rate for foreign-born workers, which they estimated would decline to 0.025 percent by 2011 because of "system improvements."
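
To put those rates in rough perspective, here is a back-of-the-envelope sketch using the 146-million-worker figure cited earlier; the 15 percent foreign-born share is an assumption made purely for illustration, while the error rates are the CBO's:

```python
# Back-of-the-envelope estimate of wrongful flags under the CBO's error rates.
# The 146 million workforce figure is from the article; the 15% foreign-born
# share is an assumption made here purely for illustration.

WORKFORCE = 146_000_000
FOREIGN_BORN_SHARE = 0.15            # assumed

NATIVE_ERROR_RATE = 10 / 1_000_000   # CBO: 10 errors per million inquiries
FOREIGN_ERROR_RATE = 0.004           # CBO: 0.4 percent

foreign_born = WORKFORCE * FOREIGN_BORN_SHARE
native_born = WORKFORCE - foreign_born

errors = native_born * NATIVE_ERROR_RATE + foreign_born * FOREIGN_ERROR_RATE
print(f"Roughly {errors:,.0f} workers wrongly flagged if everyone were screened once")
```

On this illustrative split the errors number in the tens of thousands; whether a figure of that order counts as "very few" is essentially the dispute between the bill's supporters and its critics.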

But Sparapani argued that virtually all government databases are "riddled with errors" and predicted inconveniences for workers beholden to the nationwide system would be commonplace, particularly if "they set this thing up and do not build in an instant 24-hour hotline with people at an administrative agency, hiring thousands of (people) to answer phone calls and handle data requests."

A year and a half to comply
The immigration bill currently being debated would effectively make today's voluntary system mandatory and expand it to check birth and death records, Department of State passport and visa records, and state drivers license records. By 2013, if a person wanted to present a drivers license, it would have to be one that complies with the requirements of the controversial Real ID Act. (A valid passport or a combination of vital records documents could generally be substituted.)

The changes would take place through multiple steps. Within 18 months of the bill's enactment, all employers would be required to verify new hires or any existing employees whose documentation had expired. Some industries, such as those that deal with homeland security or contract with the government, would be compelled to participate almost immediately upon the bill's passage.

The Society for Human Resource Management estimates that 25,000 to 30,000 employers would have to be enrolled in the system each day to get everyone covered during that period, which would likely require huge budget increases and the creation of a new bureaucracy at Homeland Security.

No later than three years after the bill's enactment, all employers would be required to verify the work eligibility of each of their employees--regardless of how long they have been employed--who had not yet been screened.

According to the Government Accountability Office, last year's immigration proposal, which included similar provisions, was estimated to cost $11.7 billion per year. Bush administration officials fielding reporters' questions at a press conference last week weren't able to pin a number to this year's effort.

Another glaring problem, critics say, is that the current screening system has not proven itself resistant to fraud.

Notably, it doesn't have any way of directly determining whether a job applicant has presented an entirely fabricated identity, which is what led to a high-profile flap last year involving illegal workers at six meatpacking facilities operated by Swift & Co.

Raids by Homeland Security Department agents in December resulted in thousands of immigration-related arrests, including charges that hundreds of people had stolen others' identities to secure jobs with the Greeley, Colo.-based company. But as a Swift executive told a House of Representatives committee last month, the company had "played by all the rules," counting itself as one of the few U.S. employers that had used Basic Pilot since 1997, but had concluded as a result of the raids that the system is "fatally flawed."

"As currently structured, Basic Pilot does not detect duplicate active records in its database," John Shandley, the company's senior vice president of human resources, told politicians. "The same Social Security number could be in use at another employer, and potentially multiple employers, across the country."

In a recent statement about the bill, the White House maintained that the proposal will allow for "unprecedented" information sharing among federal and state agencies, and that Homeland Security will be able to receive "information on multiple uses of the same Social Security number by more than one individual."

One provision in the bill calls for the design of the verification system to "allow for auditing use of the system to detect fraud and identify theft," including development of algorithms that "detect potential identity theft, such as multiple uses of the same identifying information or documents."

The expanded, mandatory system would also have to be devised in a way that allows employers to compare the photograph of a person on an identity document presented during the hiring process against digital photographs stored in databases by whoever issued the identity card, such as a state motor vehicle agency.

Finally, the bill includes broadly worded provisions that attempt to make the underlying documents less prone to counterfeiting. It calls for the Social Security Administration, for instance, to issue "fraud-resistant, tamper-resistant and wear-resistant" cards and to consider the feasibility of including a photograph and other biometric information as well.

The ACLU's Sparapani argued that the bill's penalties for noncompliance aren't tough enough to discourage unscrupulous employers from continuing to pay undocumented workers under the table. Under the new rules, "the black market economy is likely to grow rather than shrink," he said.
http://news.com.com/Work+bill+would+...3-6185466.html





New Software Can Identify You From Your Online Habits
Paul Marks

IF YOU thought you could protect your privacy on the web by lying about your personal details, think again. In online communities at least, entering fake details such as a bogus name or age may no longer prevent others from working out exactly who you are.

That is the spectre raised by new research conducted by Microsoft. The computing giant is developing software that could accurately guess your name, age, gender and potentially even your location, by analysing telltale patterns in your web browsing history. But experts say the idea is a clear threat to privacy - and may be illegal in some places.

Previous studies show there are strong correlations between the sites that people visit and their personal characteristics, says software engineer Jian Hu from Microsoft's research lab in Beijing, China. For example, 74 per cent of women seek health and medical information online, while only 58 per cent of men do. And 34 per cent of women surf the internet for information about religion, whereas 25 per cent of men do the same.

While each offers only a fairly crude insight, analytical software could use a vast range of such profiles to perform a probabilistic analysis of a person's browsing history. From that it could make a good guess about their identity, Hu and his colleagues last week told the World Wide Web 2007 conference in Banff, Canada.

Hu's colleague Hua-Jun Zeng says the software could get its raw information from a number of sources, including a new type of "cookie" program that records the pages visited. Alternatively, it could use your PC's own cache of web pages, or proxy servers could maintain records of sites visited. So far it can only guess gender and age with any accuracy, but the team say they expect to be able to "refine the profiles which contain bogus demographic information", and one day predict your occupation, level of qualifications, and perhaps your location. "Because of its hierarchical structure - language, country, region, city - we may need to design algorithms to better discriminate between user locations," Zeng says.
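
The kind of inference being described is, in effect, probabilistic classification over browsing categories - something like a naive Bayes guess. The toy sketch below uses the two pairs of percentages quoted above; the 50/50 prior and the independence assumption are illustrative simplifications, and this is not Microsoft's actual model:

```python
# Toy naive-Bayes gender guess from browsing categories.
# The health (74% vs 58%) and religion (34% vs 25%) figures come from the
# article; the 50/50 prior is an illustrative assumption.
# This is a generic sketch, not Microsoft's actual model.

# P(category visited | gender)
likelihoods = {
    "health":   {"female": 0.74, "male": 0.58},
    "religion": {"female": 0.34, "male": 0.25},
}
prior = {"female": 0.5, "male": 0.5}   # assumed 50/50 prior

def gender_posterior(visited_categories):
    """Posterior P(gender | visited categories), assuming independence."""
    scores = dict(prior)
    for cat in visited_categories:
        for g in scores:
            scores[g] *= likelihoods[cat][g]
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

print(gender_posterior(["health"]))             # leans slightly female
print(gender_posterior(["health", "religion"])) # leans a bit more female
```

Each category on its own shifts the odds only slightly, which is consistent with the researchers saying that, so far, the software can only guess gender and age with any accuracy.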

However, Ross Anderson, a computer security engineer at the University of Cambridge, thinks the idea could land Microsoft in legal trouble. "I'd consider it somewhat pernicious if Microsoft were to deploy such software widely," he told New Scientist. "They are arguably committing offences in a number of countries under a number of different laws if they make available software that defeats the security procedures internet users deploy to protect their privacy - from export control laws to anti-hacking laws."
http://www.newscientisttech.com/arti...ng-human_rss20





The Visible Man: An FBI Target Puts His Whole Life Online
Clive Thompson

Hasan Elahi whips out his Samsung Pocket PC phone and shows me how he's keeping himself out of Guantanamo. He swivels the camera lens around and snaps a picture of the Manhattan Starbucks where we're drinking coffee. Then he squints and pecks at the phone's touchscreen. "OK! It's uploading now," says the cheery, 35-year-old artist and Rutgers professor, whose bleached-blond hair complements his fluorescent-green pants. "It'll go public in a few seconds." Sure enough, a moment later the shot appears on the front page of his Web site, TrackingTransience.net.

There are already tons of pictures there. Elahi will post about a hundred today — the rooms he sat in, the food he ate, the coffees he ordered. Poke around his site and you'll find more than 20,000 images stretching back three years. Elahi has documented nearly every waking hour of his life during that time. He posts copies of every debit card transaction, so you can see what he bought, where, and when. A GPS device in his pocket reports his real-time physical location on a map.

Elahi's site is the perfect alibi. Or an audacious art project. Or both. The Bangladeshi-born American says the US government mistakenly listed him on its terrorist watch list — and once you're on, it's hard to get off. To convince the Feds of his innocence, Elahi has made his life an open book. Whenever they want, officials can go to his site and see where he is and what he's doing. Indeed, his server logs show hits from the Pentagon, the Secretary of Defense, and the Executive Office of the President, among others.

The globe-hopping prof says his overexposed life began in 2002, when he stepped off a flight from the Netherlands and was detained at the Detroit airport. He says FBI agents later told him they'd been tipped off that he was hoarding explosives in a Florida storage unit; subsequent lie detector tests convinced them he wasn't their man. But with his frequent travel — Elahi logs more than 70,000 air miles a year exhibiting his artwork and attending conferences — he figured it was only a matter of time before he got hauled in again. He might even be shipped off to Gitmo before anyone realized their mistake. The FBI agents had given him their phone number, so he decided to call before each trip; that way, they could alert the field offices. He hasn't been detained since.

So it dawned on him: If being candid about his flights could clear his name, why not be open about everything? "I've discovered that the best way to protect your privacy is to give it away," he says, grinning as he sips his venti Black Eye. Elahi relishes upending the received wisdom about surveillance. The government monitors your movements, but it gets things wrong. You can monitor yourself much more accurately. Plus, no ambitious agent is going to score a big intelligence triumph by snooping into your movements when there's a Web page broadcasting the Big Mac you ate four minutes ago in Boise, Idaho. "It's economics," he says. "I flood the market."

Elahi says his students get it immediately. They've grown up spilling their guts online — posting Flickr photo sets and confessing secrets on MySpace. He figures the day is coming when so many people shove so much personal data online that it will put Big Brother out of business.

For now, though, Big Brother is still on the case. At least according to Elahi's server logs. "It's really weird watching the government watch me," he says. But it sure beats Guantanamo.
http://www.wired.com/techbiz/people/...s_transparency





Report Slams FBI Network Security

FBI network vulnerable to insider attacks, government watchdog group says
Ellen Messmer

The Government Accountability Office, the federal government’s watchdog agency, Thursday released a report critical of the FBI’s internal network, asserting it lacks security controls adequate to thwart an insider attack.

In the report, titled “Information Security: FBI Needs to Address Weaknesses in Critical Network,” the authors -- Gregory Wilshusen, GAO’s director of information security issues, and Chief Technologist Keith Rhodes -- said the FBI lacks adequate network security controls.

The FBI “has an incomplete security plan,” the report concluded.

The bureau, which had the opportunity to review the GAO’s findings before publication, responded that it wasn’t arguing with some of the technical observations expressed in the GAO report, but disagreed that the FBI is open to unacceptable risk of an insider attack.

In a letter of response to the GAO, Dean Hall, the FBI’s deputy CIO, and Zalmal Azni, the FBI’s CIO, noted, “The FBI concurs with many of the GAO’s technical recommendations and the programmatic recommendation to continue the implementation of information security activities in order to fully establish a comprehensive Information Assurance Program.”

Hall and Azni defended the FBI’s risk-management posture, however, emphasizing, “The FBI does not agree that it’s placed sensitive information at an unacceptable risk for unauthorized disclosure, modification or insider threat.”

The GAO, however, stated in the report that an evaluation of the effectiveness of the FBI's security controls over routers, switches, servers, network management, firewalls and other IT infrastructure at FBI headquarters revealed the FBI "did not consistently configure network devices and services to prevent unauthorized insider access."

Among its other findings, the GAO said the FBI did not adequately “identify and authenticate users to prevent unauthorized access.” The GAO report also criticized FBI network security in other regards, saying that there was a lack of encryption to protect sensitive data and patch management wasn’t being done in a timely manner.

The GAO’s analysis of the FBI internal network had been requested by Rep. James Sensenbrenner, chair of the Judiciary Committee in the U.S. House of Representatives.
http://www.networkworld.com/news/200...-security.html





Congress Discovers Spine, Starts Examining NSA Surveillance
Nate Anderson

Silvestre Reyes (D-TX), the chairman of the House Intelligence Committee, this week announced his intention to hold hearings that will probe the extent of the cooperation between telephone companies and the NSA.

After resistance from the White House last year, the Republican-controlled Congress chose not to examine the issue, but it has been raised once again by a recent Bush administration request for immunity for the phone companies. That retroactive immunity was included in the government's most recent House and Senate funding requests for US intelligence services, and it would grant the telephone companies immunity regardless of the legality of their actions.

Before granting such immunity, though, Reyes is determined to find out exactly what these companies might have done. In a statement issued by the Intelligence Committee, Reyes said, "Before granting immunity for any activities, it will be important to review what those activities were, what was the legal basis for those activities, and what would be the impact of a grant of immunity."

To find out, Reyes plans to hold hearings in June to determine the nature of the NSA's surveillance program and to find out whether it was legal. The hearings will also consider the issue of whether laws need to be changed to allow intelligence agencies to better track terrorist communications.

Reyes says that he "will not prejudge the outcome of these hearings," but the fact that he has serious questions about the retroactive immunity suggests that he won't be easily persuaded to sign off on it. That's good news for organizations like the EFF, which is embroiled in a lawsuit with AT&T over the issue. Even if the administration doesn't get its way in Congress, it will continue to push for the courts to throw out such cases on the grounds that they will expose state secrets. Thus far, though, the combined case against the telephone companies remains alive.
http://arstechnica.com/news.ars/post...veillance.html





Action Alert: Tell Congress Not to Let Telcos Off the Hook for Illegal Spying
Press Release

The Bush Administration is pushing legislation that could let telecommunications providers off the hook for illegally assisting the NSA's domestic spying program, and one of your Senators may be on the key committee that can stop it. Use the form below and defend your rights.

In January 2006, EFF filed suit against telco giant AT&T for violating its customers' privacy and helping the NSA spy on millions of Americans' telephone and Internet communications. Congress is now considering a bill proposed by the Administration that could threaten cases like EFF's. That proposal appears intended to not only gut current privacy safeguards but also give blanket immunity to anyone who collaborated with the government's spying.

While no Congressional representatives have sponsored the Administration's proposal so far, we're hearing credible rumors that the proposal may soon be taken up by the Senate Intelligence Committee.

Don't let the Administration get away with it.

Keep your call to your member of Congress short and polite. Whether or not he or she agrees with you, what is important is that the staffer knows you are a constituent, and that you want your Senator to oppose the "FISA Modernization" legislation proposed by the Administration:

"Hello, I'm a constituent, and I want to urge my Senator to reject the 'FISA Modernization' proposal drafted by the Director of National Intelligence and the Department of Justice. I oppose the NSA's domestic spying program and demand immediate investigations to help stop this unprecedented violation of Americans' rights and the Constitution. I also oppose any attempt to grant immunity to companies or individuals that helped the program."
https://secure.eff.org/site/Advocacy...3&pg=makeACall





Hack My Son's Computer, Please
Jennifer Granick

Can an elderly father give police permission to search a password-protected computer kept in his adult son's bedroom, without probable cause or a warrant? In April, a three-judge panel of the 10th Circuit Court of Appeals said yes.

This week, the son's attorney, Melissa Harrison, an assistant federal public defender in Kansas City, will ask the court to reconsider the panel's ruling. At stake is whether law enforcement will have any responsibility to respect passwords and other expressions of user privacy when searching devices which contain the most sensitive kinds of private information.

In United States v. Andrus (.pdf), agents suspected that the defendant was accessing websites containing child pornography, but after eight months of investigation still did not have sufficient probable cause to get a search warrant. Instead, they decided to drop by the defendant's house for an impromptu conversation.

The suspect was not at home. However, his 91-year-old father answered the door in his pajamas, invited the agents in, and eventually gave them permission to enter his son's bedroom and search the hard drive on his son's password-protected computer. The agents used EnCase to perform the search, a common forensic tool programmed to ignore Windows logon passwords. Agents found child pornography on the computer.

Without a judge's permission, the search depended on the father's authority to allow police access to his son's computer. On this point, the fact that the son locked his parents out of the computer with a password is critical.

The Fourth Amendment generally prohibits warrantless searches of an individual's home or possessions. There is an exception to the warrant requirement when someone consents to the search. Consent can be given by the person under investigation, or by a third party with control over or mutual access to the property being searched. Because the Fourth Amendment only prohibits "unreasonable searches and seizures," permission given by a third party who lacks the authority to consent will nevertheless legitimize a warrantless search if the consenter has "apparent authority," meaning that the police reasonably believed that the person had actual authority to control or use the property.

Under existing case law, only people with a key to a locked closet have apparent authority to consent to a search of that closet. Similarly, only people with the password to a locked computer have apparent authority to consent to a search of that device. In Andrus, the father did not have the password (or know how to use the computer) but the police say they did not have any reason to suspect this because they did not ask and did not turn the computer on. Then, they used forensic software that automatically bypassed any installed password.

The majority held that the police officers not only weren't obliged to ask whether the father used the computer, they had no obligation to check for a password before performing their forensic search. In dissent, Judge Monroe G. McKay criticized the agents' intentional blindness to the existence of password protection, when physical or digital locks are such a fundamental part of ascertaining whether a consenting person has actual or apparent authority to permit a police search. "(T)he unconstrained ability of law enforcement to use forensic software such as the EnCase program to bypass password protection without first determining whether such passwords have been enabled ... dangerously sidestep(s) the Fourth Amendment."

If the 10th Circuit rehears the case, it will have the opportunity to recalculate the balance between individuals' efforts to protect computer privacy and security, and law enforcement efforts to make searches based on mere hunches without judicial supervision.

In this case, the defendant could not have done much more to keep his computer private, other than tape a piece of paper to the monitor like a teenager might post on the door to his room (Do Not Enter Or Else!!). On the other hand, the officers could have simply asked the father whether he had permission to access his son's computer, switched the computer on to see if there was a password prompt, or used a forensic program that notifies investigators when a machine is password protected. It's as if the police entered the defendant's room with x-ray specs on and searched his bureau, closet and footlocker without needing to even ask his father whether these things were private or shared.

The Supreme Court expressly disavowed this technique in Kyllo v. United States, where it held that "obtaining by sense-enhancing technology any information regarding the interior of the home that could not otherwise have been obtained without physical 'intrusion into a constitutionally protected area,' constitutes a search -- at least where ... the technology in question is not in general public use."

If courts are going to treat computers as containers, and if owners must lock containers in order to keep them private from warrantless searches, then police should be required to look for those locks. Password-protected computers and locked containers are an inexact analogy, but if that is how courts are going to do it, then it's inappropriate to diminish protections for computers simply because law enforcement chooses to use software that turns a blind eye to owners' passwords.
http://www.wired.com/politics/law/co...cuitcourt_0523





Connecticut Man Charged in Toilet Bombings Gets 5 Years
AP

A Weston man once called one of the Internet’s most notorious pirates of music and movies was sentenced today to five years in prison for blowing up a portable toilet, prosecutors said.

Bruce Forest, 50, was charged last year with a series of toilet explosions in 2005 and 2006. But under a plea agreement, Forest admitted only to blowing up one toilet in Weston in February 2006. No one was injured in any of the blasts.

His defense attorney and his wife said the incident was completely out of character for Forest. They said he had been addicted to painkillers initially taken for migraine headaches caused by a severe fall about 10 years ago. A prescribed drug intended to wean him off the painkillers caused psychotic episodes, they said.

Forest was an Internet pirate in the late 1990s, said J.D. Lasica, a San Francisco writer who dubbed Forest "Prince of the Darknet" in his 2005 book "Darknet: Hollywood’s War Against the Digital Generation."

His wife also discounted those claims. She said he actually worked with the federal government to tighten safeguards against piracy.

Prosecutors said Forest began a string of bombings in Weston where he blew up portable toilets in 2005 and 2006. He was also charged in explosions at the former Fitch School in Norwalk and at an abandoned gas station in Weston.

Most of the explosions occurred at night in isolated areas, but the last blast in Norwalk occurred during the day in a heavily populated area, authorities said. The explosives involved a mixture of chemicals, police said. Prosecutors said they were detonated by an assault rifle.
http://www.newstimeslive.com/news/story.php?id=1054407





N.J. Sues YouTube over Deadly Crash Footage

The New Jersey Turnpike Authority is suing several video sites, including YouTube, for infringing on the copyright of car crash footage recorded on the turnpike.

The footage in question was recorded by a NJTA video camera. The video depicts a car traveling southbound on the New Jersey Turnpike and crashing into the Great Egg Harbor toll plaza on May 10. The driver, a 52-year-old New Jersey resident, was killed.

The NJTA is also suing NextPoint LLC, the owner of video-sharing site break.com. The complaint names UK-based LiveLeak.com as a defendant as well, though according to LiveLeak the NJTA has voluntarily removed them from the lawsuit after they removed the video.

The NJTA is suing for direct copyright infringement by public performance, public display and reproduction, as well as inducement, contributory and vicarious copyright infringement.

"The video serves no worthwhile purpose and shows a tremendous lack of common human decency towards the family of the victim," the complaint reads. "Nevertheless, defendants have either refused or failed to remove the video from their Web sites."

According to the complaint, the NJTA requested the video's removal from YouTube upon learning of its existence. YouTube complied, but the video had already been copied by other users and remains on the site.

"YouTube did not try to prevent the very same video from being uploaded again by users immediately after it was purportedly removed," the complaint reads.

A YouTube spokesperson said the company removed the video "because it violated our terms of service. Because our removal also complied with our obligations under the Digital Millennium Copyright Act, we see no legal basis for a claim." Last month Google CEO Eric Schmidt said YouTube would soon launch an automated system that would help copyright holders detect and deter abuse.

LiveLeak removed the video after receiving a formal court request, according to co-founder Hayden Hewitt.

Hewitt said the lawsuit is guaranteed to bring more publicity to the video.

"To be honest I think it's kind of a strange situation," he said. "Usually you just file a nice, low level, discrete DMCA takedown...And usually these lawsuits are around entertainment video, where there's a financial stake. I don't understand it."

According to the complaint, the offending video has been viewed 19,833 times on YouTube, 189,037 times on LiveLeak.com and 6,933 times on break.com as of May 21. Less than 24 hours later, on May 22, the videos had been viewed 24,346 times, 213,295 times and 16,812 times, respectively.

The NJTA also is suing unnamed corporations and individuals who may have helped distribute the stolen video.
http://www.physorg.com/news99326515.html





Why Don't You Pay for Software?
David Chartier

This is a post for the crack 'torrenters, the chronic non-donators and the I'll-stick-with-the-free-alternative'ers in the crowd: we want to hear your thoughts on why you don't pay for software. We aren't talking about those of us who simply can't afford this or that; if you're using iPhoto and can't cover the $300 (or $150 educational) price of Aperture that's one thing. If you're just plain happy with what a free version/alternative does, that's fine too.

We're talking to those of you who download that great piece of donationware that beats the pants off the $40 alternative, but still don't even drop so much as $1 in the PayPal tip jar. We wanted to pop this question because we're seeing better and better software coming from open source, donationware and shareware developers, and yet many of them are still having a hard time making a living doing something they love, which is creating the products we obviously appreciate.

So what's up? Are you surviving on a strict diet of ramen noodles and that discount Brand X version of Mountain Dew, with little room to spend on 1s and 0s? Do you simply not believe in paying for bits and bytes? Or are you just a stickler for anything you can get for free?

We would really like to hear thoughts from the community on why you either choose not to - or simply can't - pay for software.
http://www.downloadsquad.com/2007/05...-for-software/





Local news

The Show Will Go On, But the Art Will Be Shielded
Randy Kennedy



You enter through a place that looks like the very last picture show, an old movie theater with soda-stained carpet and a busted popcorn machine. Sleeping bags and clothes are scattered around, as if the theater has served as a shelter from some unnamed danger outside its doors.

Beyond those doors sits a tiny mud-brick house, an eerie replica of the one where Saddam Hussein was living when he was captured in his spider hole. And past that, nearly filling a warehouse the size of a football field, loom dozens more unsettling sights: a wrecked police car, a carnival ride rigged with bomb casings, a dilapidated two-story house, a rusted oil tanker, an interrogation chamber.

If it seems that some sort of disaster has taken place here, it has, at least in the view of the Massachusetts Museum of Contemporary Art in North Adams, where the warehouse serves as its biggest exhibition space. The pieces make up an immense art installation that was supposed to open last December, created by Christoph Büchel, a Swiss artist known for building elaborate, politically provocative environments for viewers to wander, and sometimes to crawl, through.

But after work began last fall on this installation, one of his most ambitious, it grew increasingly complex under Mr. Büchel’s direction; the $160,000 budget doubled; and relations between the artist and the museum degenerated into an angry standoff, according to Joseph C. Thompson, the museum’s director. Now, after months of frustration, the museum has decided to take an extraordinary step: On Saturday it will open the doors to the show anyway, without Mr. Büchel’s permission or cooperation.

But there is a catch, one that seems in keeping with the surreal nature of the artwork itself. Because of concerns about legal action by Mr. Büchel, the museum will shield all the huge objects in the warehouse from view with tall plastic tarps, as if Christo and Jeanne-Claude had intervened at the last minute. Viewers will be allowed to wend their way through the cavernous hall, but they will have to rely on their imaginations, mostly, to appreciate the show.

The decision is intended as an artistic and provocative solution to a difficult situation, but it is one that the museum wants to be only temporary. Yesterday it filed a request in federal district court in Springfield, Mass., seeking protections that would allow it to open the unfinished show full-on, without wrappers.

Even in the ever more expensive and involved world of huge contemporary installations, such a pitched battle between an artist and a museum is virtually unheard of. Mr. Thompson said he believed his institution, known as MASS MoCA, had not only a right but also an obligation to open the show, given its limited overall budget and the effort it has put into bringing the exhibition as far along as it has come.

“I feel like I have a responsibility to our visitors, to our donors, to the people who have provided resources, to the townspeople who have put in lots of blood, sweat and tears and have donated items,” he said.

Cornelia Providoli, a director of Hauser & Wirth, Mr. Büchel’s gallery in Zurich, said yesterday that Mr. Büchel was working on an exhibition that was about to open in London and was unable to comment, though she said he intended to do so later.

But in messages to Mr. Thompson and to The Boston Globe, which published an article about the standoff in late March, he has accused the museum of mismanaging the project, spending more than necessary on some of the bigger pieces, like the two-story house.

“The institution proved not to be capable — neither logistically, neither schedule- nor budget-wise — to manage the project,” he said in a statement to The Globe.

A list of demands Mr. Büchel sent to the museum after he left Massachusetts said, in part, “The artist will not accept any orders and any more pressure or compromises as to how things have to be done from the museum director or museum’s technicians.”

“The artist demands full autonomy with regard to his artwork,” he wrote.

Donn Zaretsky, a lawyer representing Mr. Büchel, contends that MASS MoCA never had a written agreement with his client about the project. Mr. Thompson said yesterday that the museum did have a basic written agreement with Mr. Büchel, in the form of a letter and follow-up e-mail messages, that set a budget for the project and a timeline for opening it. Mr. Thompson said the agreement did not specify individual objects that would be acquired for the exhibition.

Mr. Büchel contends that during the course of the work, the museum began to “treat the project as though it was the artist’s wish list for Christmas, eliminating necessary and key elements” that had been agreed upon earlier. He says that in its current state it is only half completed.

Mr. Thompson denies Mr. Büchel’s assertions and says that the museum tried very hard, within the budget, to provide everything it was asked for.

Yesterday Mr. Zaretsky denounced the museum’s actions. “To me, this is an unheard of, unprecedented act, for a fine-art museum to go to court to try to show an artist’s work in an unfinished state,” he said.

Called “Training Ground for Democracy,” the installation is intended to draw a viewer into a Grand Guignol maze in which artifacts of everyday Western culture — a movie theater, a home, a voting booth — are jammed together with scenes that seem to have been airlifted from a land of perpetual war and paranoia. In a recent walk through the space, the big elements, like the houses and the tanker truck, were striking.

Mr. Büchel was also concerned with the appearance of even the smallest detail, like a soiled rag hanging near a jail-cell sink or a dusty bag of sunflower seeds atop a television set. As the project grew, the museum says, this kind of obsessiveness began to have its costs. Even by the time the show should have opened, it had run well over budget, including $100,000 alone for the installation of the two-story house, which had to be cut into pieces and reassembled.

The museum bought a second mobile home for the space after Mr. Büchel disapproved of the first one; it reassembled the complete interior of a defunct movie theater, including the wallpaper and carefully numbered ceiling tiles; and it decontaminated the oil tanker, which had once been filled with No. 6 fuel oil.

The museum’s overall visual arts budget is about $800,000 a year, including staff salaries, so “you can imagine what this has done to our budget,” Mr. Thompson said of the show’s price tag, now more than $300,000, none of which was publicly financed.

Mr. Büchel has not worked on the show since early last December, when he returned to Europe to attend to other projects and left behind a list of additional objects that he wanted the museum to find. The list included one item that Mr. Thompson described as a final straw: the fuselage from a large jetliner, like a 767, that Mr. Büchel wanted to be burned and bomb-damaged and then hung from the ceiling.

“That’s when I began to put on the brakes on the project,” Mr. Thompson said in a recent interview at the museum, adding that he and others there had grown “to suspect that there might not ever be an end” to Mr. Büchel’s vision for the space.

“We had clearly bent over backwards and done everything we could do to get all these major elements in place,” Mr. Thompson said, “but at some point the realities of our budget, resources and staff imposed themselves.”

The show that is to open on Saturday will be titled “Made at MASS MoCA.” In addition to the shrouded objects, it will display some installation photographs of them, along with images from other large, complex installations it has done, involving artists like Gregory Crewdson, Cai Guo-Qiang and Ann Hamilton.

The intention, Mr. Thompson said, is to show the public how MASS MoCA works with artists, often in highly collaborative and risky exhibitions. “When you experiment seriously, and at a scale which is our habit, the results can be unexpected,” he said. “Not everything works.”

Some people in the art world have suggested to him that Mr. Büchel might have purposely forced the exhibition to grind to a halt as the final act of the work itself — a literal demonstration of the kind of futility and absurdity that he seeks to communicate in the exhibition, with war, religion and the news media as his motifs.

It would not be the first time that Mr. Büchel has used his work to tweak the art establishment. In 2002 he sold his invitation to participate in Manifesta, an international art exhibition in Frankfurt, for $15,000 in an eBay auction to allow the winner to take his place.

Mr. Thompson said he had no way to know whether Mr. Büchel’s actions might be part of an elaborate art stunt. “At times it’s certainly felt that way to me,” he said.

“Made at MASS MoCA” will continue into the fall — a closing date has not been set — at the Massachusetts Museum of Contemporary Art, 87 Marshall Street, North Adams, Mass.; (413) 664-4481, www.massmoca.org.
http://www.nytimes.com/2007/05/22/ar...gn/22muse.html


















Until next week,

- js.



















Current Week In Review





Recent WiRs -

May 19th, May 12th, May 5th, April 28th, April 21st

Jack Spratts' Week In Review is published every Friday. Submit letters, articles and press releases in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. Questions or comments? Call 213-814-0165 (U.S. country code).


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black